If you are not already logged in to the Cluster, a browser window will open for you to authenticate. You can read more about logging in here. You can also log in directly using the octelium connect command with an authentication token Credential via the --auth-token flag (read more here).
Connecting to a Cluster
Before you connect to a Cluster, you might want to check out the available Namespaces and Services in the Cluster (read more in detail here) as follows:
export OCTELIUM_DOMAIN=example.com
# In Windows, the PowerShell equivalent of the above command is: $env:OCTELIUM_DOMAIN = "example.com"

# List the Cluster Services
octelium get service
# Or simply
octelium get svc
Read more about the OCTELIUM_DOMAIN environment variable and the --domain flag here
Now you can connect to the Cluster very simply as follows:
# Connect via the detached mode
octelium connect -d
The -d or --detached flag runs octelium connect in detached mode. On Linux, this is achieved via systemd-run. On Windows, this is achieved by running as a Windows service. Note that the -d flag is currently unsupported on MacOS.
If you do not wish to connect to the Cluster in detached mode, you can run it directly in foreground as follows:
sudo -E octelium connect
On Windows, you need to run PowerShell as administrator, the equivalent of sudo on Linux and MacOS. In Windows 11, however, sudo is natively supported.
Rootless
As you will see later in this guide here, you can also run octelium connect as an unprivileged OS process owned by an ordinary/non-root OS user simply as follows:
octelium connect
In practice, however, running octelium connect as an unprivileged OS process without any additional flags is of little use for accessing Services. In other words, in the rootless unprivileged mode, the only way to access Services is to explicitly map/publish them to your host as will be shown below.
Mapping Services to Host
In some cases, you might want to have something similar to what Docker does where the container port is mapped or "published" to the host. Octelium enables you to map/publish a Service port to the localhost via the -p or --publish flag.
Here is an example where you want to map the HTTP Service myapi of the default Namespace to your localhost port 8080:
octelium connect -p myapi:8080
Here is another example that maps the HTTP Service myapi of the ns1 Namespace to your localhost port 8080:
octelium connect -p myapi.ns1:8080
Now, once you're connected, you can access the Service as follows:
curl localhost:8080
You can also publish several Services or several ports of the same Service if it is a multi-port Service. Here is an example:
octelium connect -p svc1:8080 -p svc2:8081 -p svc3.ns1:8082
By default, the Service is published to a localhost listener on the host. You can explicitly set a specific IP address for the listener as follows:
octelium connect -p svc:0.0.0.0:8080
# Or use an IPv6 address
octelium connect -p svc.ns:[::1]:9000
Serving Services
To serve one, multiple or all Services assigned to the User (read more about remotely serving Services here), you can use the --serve and --serve-all flags.
Here is an example where you want to serve the Service svc1 which belongs to the default Namespace:
octelium connect --serve svc1
Here is another example where you want to serve the Service svc1 that belongs to the ns1 Namespace:
octelium connect --serve svc1.ns1
You can also serve multiple Services as follows:
octelium connect --serve svc1 --serve svc2 --serve svc1.ns1
If you want to serve all Services that are available to your User, you can do so using the --serve-all flag as follows:
octelium connect --serve-all
You can serve Services regardless of whether octelium connect runs as a privileged or an unprivileged process.
Tunnel Implementation
You are generally not required to understand the different modes used by Octelium to run WireGuard. By default, the octelium connect command automatically uses whatever suits the environment it runs in.
The octelium CLI tool supports different tunnel implementation modes depending on the environment it runs within. For example, on Linux it will first try to use the kernel implementation. If the kernel module is not installed or loaded, it will then try to use the wireguard-go TUN device implementation. If it does not have permission to create a TUN device (i.e. it is not running as root and does not have the NET_ADMIN capability), it will create its own userspace TCP/IP stack using gVisor's Netstack and run as an unprivileged OS process owned by an ordinary/non-root OS user.
- Kernel mode: Currently supported on Linux 5.6 or later, as well as any older version that has the WireGuard kernel module installed. This is by far the most performant mode. This mode needs the root user or the NET_ADMIN capability.
- TUN device mode: This mode uses the wireguard-go TUN device-based implementation and therefore supports all platforms. This mode needs the root user or, specifically on Linux, the NET_ADMIN capability (and possibly MKNOD if no /dev/net/tun device file exists on the system), the root user on MacOS, or an admin user on Windows to create a TUN device and set up the WireGuard link. This mode can be explicitly forced over the kernel mode as follows:
octelium connect --implementation tun
- Userspace mode over gVisor Netstack: This mode does not need any OS privileges. It can run as an unprivileged OS process owned by an ordinary/non-root OS user on any platform, but in that case, since there is no WireGuard virtual network device running on the host OS, the only way to access a Service is by mapping it to the host as illustrated above. This mode is also less performant compared to the kernel and TUN device modes. It can be explicitly forced over the kernel and TUN device modes as follows:
octelium connect --implementation gvisor
Containers
The official octelium CLI container image can be pulled as follows:
docker pull ghcr.io/octelium/octelium
You can now connect to the Cluster using an authentication token Credential as follows:
docker run --cap-add NET_ADMIN ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
If you want to force the TUN mode, then you need to either add the host device /dev/net/tun as follows:
docker run --cap-add NET_ADMIN --device /dev/net/tun ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
Or you can add the MKNOD capability instead as follows:
docker run --cap-add NET_ADMIN --cap-add MKNOD ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
You can also use the unprivileged gVisor Netstack mode, without the need to add any capabilities, and simply map the Services you need to use to the container's own network namespace as follows:
docker run ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN> -p svc1:8080 -p svc2:8081
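The same unprivileged setup can also be expressed declaratively. Here is a minimal docker-compose.yml sketch equivalent to the docker run example above; the Service names svc1 and svc2 and the placeholder values are illustrative only:

```yaml
# Sketch: unprivileged gVisor Netstack mode via Docker Compose.
# No capabilities are added; the Services are published to the
# container's own network namespace.
services:
  octelium:
    image: ghcr.io/octelium/octelium
    command: ["connect", "-p", "svc1:8080", "-p", "svc2:8081"]
    environment:
      OCTELIUM_DOMAIN: <DOMAIN>
      OCTELIUM_AUTH_TOKEN: <TOKEN>
```

If you want the TUN mode instead, you would additionally need cap_add: ["NET_ADMIN"] and either devices: ["/dev/net/tun"] or cap_add: ["MKNOD"], mirroring the docker run flags shown above.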
You can also use "rootless" containers, where your container can have additional Linux capabilities (i.e. NET_ADMIN or MKNOD in our case) within a Linux user namespace while running as an unprivileged Linux user. You can read more here.
Kubernetes
Running Octelium inside Kubernetes is not really any different from running it in containers. You might want to use it as a sidecar container for your microservices to access Services. You can also use it to serve Services. You can do both. Here is an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: octelium
spec:
  selector:
    matchLabels:
      run: octelium
  template:
    metadata:
      labels:
        run: octelium
    spec:
      containers:
        - name: octelium
          image: ghcr.io/octelium/octelium
          command: ["octelium"]
          args: ["connect"]
          env:
            - name: OCTELIUM_DOMAIN
              value: <DOMAIN>
            - name: OCTELIUM_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: <K8S_SECRET_NAME>
                  key: data
          securityContext:
            capabilities:
              # You might also need to add the "MKNOD" capability if you're explicitly using the TUN mode
              add: ["NET_ADMIN"]
If the container serves one or more busy Services, you might want to increase the number of replicas of the Kubernetes Deployment to benefit from load balancing. By default, Octelium Clusters use round-robin load balancing among the Service upstreams.
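For instance, scaling the Deployment above is just a matter of setting the replicas field in its spec; the value 3 below is an arbitrary example:

```yaml
# Fragment of the Deployment spec above: run three octelium pods so the
# Cluster round-robins across three upstreams.
spec:
  replicas: 3
```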
Helm
You can use the official Octelium Helm charts to deploy the octelium containers in any remote Kubernetes clusters. Here is a minimal example:
helm install my-octelium-chart oci://ghcr.io/octelium/helm-charts/octelium --set octelium.domain=<DOMAIN> --set octelium.authToken=<AUTHENTICATION_TOKEN>
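If you prefer a values file over --set flags, the same installation can be sketched as follows. The octelium.domain and octelium.authToken keys are taken from the --set flags above; check the chart's documented values before relying on them:

```yaml
# values.yaml (sketch)
octelium:
  domain: <DOMAIN>
  authToken: <AUTHENTICATION_TOKEN>
```

You would then install with `helm install my-octelium-chart oci://ghcr.io/octelium/helm-charts/octelium -f values.yaml`.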
You can read in detail about deploying Octelium containers via Helm here.
GitHub Actions
You can connect to your Cluster from your GitHub Actions workflows by using the octelium/github-action Action. See detailed examples here.
DNS
Each Service has the private FQDN <SERVICE>.<NAMESPACE>.local.<DOMAIN> (additionally <SERVICE>.local.<DOMAIN>, a shorter FQDN, if it belongs to the default Namespace). By default, the octelium CLI automatically sets up a private split-DNS server in order to resolve queries belonging to the Cluster, setting the suffix v.<DOMAIN> as a search domain for the private DNS server. On Linux, this requires running as root since it uses resolvectl under the hood. On MacOS, the networksetup utility is used.
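As a concrete illustration of the naming scheme, here is how the private FQDNs are composed for a Service named myapi in the Namespace ns1 on a Cluster whose domain is example.com (all three names are assumptions for the example):

```shell
# Assumed example names; substitute your own Service, Namespace, and domain.
SERVICE=myapi
NAMESPACE=ns1
DOMAIN=example.com

# The full private FQDN: <SERVICE>.<NAMESPACE>.local.<DOMAIN>
echo "${SERVICE}.${NAMESPACE}.local.${DOMAIN}"
# → myapi.ns1.local.example.com

# A Service in the "default" Namespace also gets the shorter form:
echo "${SERVICE}.local.${DOMAIN}"
# → myapi.local.example.com
```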
You can choose not to use the private DNS at all by using the --no-dns flag.
Local DNS Server
This feature is not currently enabled by default, except when running octelium inside containers, but it will eventually be enabled by default.
By default, the octelium client sets the host's DNS server to the Cluster DNS server, which is simply an ordinary Cluster Service that is accessible to all Users. However, this setup might not be the most stable and performant way to resolve the Cluster Service domain names, for the following reasons, which create a need for a proxy local DNS server that runs inside the connected octelium client:
- The Cluster DNS Service has stable addresses, but not static ones. While the octelium client is able to synchronize the DNS addresses with the host in most cases, there are other cases, such as deploying octelium as a sidecar container in Kubernetes, where it is impossible for it to access the filesystems of the other containers in the same pod to update their /etc/resolv.conf files.
- When connecting to the Cluster via IPv6-only or IPv4-only modes, some applications on the host accessing the Cluster's Services might prefer to resolve domain names in the other mode (i.e. resolving IPv4 addresses in IPv6-only mode or vice versa); the Cluster's DNS will always respond to the DNS queries, and the application then finds itself unable to access the Service. The local DNS server solves this by simply refusing to answer A DNS queries when connecting via the IPv6-only mode, or AAAA queries when connecting via the IPv4-only mode.
- The local DNS server applies simple caching from within the host to accelerate access for recently used domain names.
You can use the local DNS server by enabling the --localdns flag as follows:
octelium connect --localdns
By default, the local DNS server listens on the address 127.0.0.100:53 which is accessible by any application on the host. However, you can override that address as follows:
octelium connect --localdns --localdns-addr 127.0.0.127:53
Scopes
If you're connecting to the Cluster and authenticating at the same time (i.e. via the --auth-token or --assertion flags), you might also add scopes via the --scope flag. Scopes act as a simple self-enforced authorization mechanism that can be used to further limit the scope of permissions for a Session. You can read more about scopes here.
Serving Embedded SSH
You can serve embedded SSH from the octelium client (read more about embedded SSH here) to the Users authorized by the Cluster via the --essh flag as follows:
octelium connect --essh
If you are running octelium connect as root, you can also explicitly set the host user as follows:
octelium connect --essh --essh-user ubuntu
This forces Users SSH'ing into the host to run as the host user ubuntu instead of root.
Layer-3 Mode
By default, octelium will automatically try to use the IPv6-only mode (i.e. only use a private IPv6 address for the WireGuard/QUIC network device connecting to the Cluster) unless the Cluster does not support IPv6 private networking. You can, however, override that default behavior via the --ip-mode flag. You can force an IPv4-only mode as follows:
octelium connect --ip-mode v4
You can also force a dual-stack mode (i.e. both IPv6 and IPv4) as follows:
octelium connect --ip-mode both
QUIC Mode
By default, Octelium uses WireGuard for tunneling the traffic between the client and the Cluster. The Cluster currently also supports a QUIC-based tunneling mode, which is experimental and not recommended for production purposes. If the QUIC mode is enabled by the Cluster, you can use it as follows:
octelium connect --tunnel-mode quicv0
You can also set the OCTELIUM_QUIC environment variable to true instead of using the --tunnel-mode quicv0 flag as follows:
export OCTELIUM_QUIC=true
octelium connect
Disconnecting
You can disconnect from the Cluster as follows:
octelium disconnect
MTU
While not recommended, as it may cause unpredictable connection behavior, you can explicitly set an MTU value for the octelium client via the OCTELIUM_MTU environment variable as follows:
export OCTELIUM_MTU=1100
octelium connect