Connecting to Clusters

Connecting to a Cluster

Before you connect to a Cluster, you might want to check out the available Namespaces and Services in the Cluster as follows:

export OCTELIUM_DOMAIN=example.com
# List the Cluster Services
octelium get service
#OR simply
octelium get svc
# List the Cluster Namespaces
octelium get namespace
#OR simply
octelium get ns
NOTE

Read more about the OCTELIUM_DOMAIN environment variable and the --domain flag here

Now you can connect to the Cluster very simply as follows:

octelium connect -d

If you are not already logged in to the Cluster, a browser window will open for you to authenticate yourself. You can read more about logging in here. You can also log in directly using the octelium connect command with an authentication token Credential, which can be useful in non-interactive environments such as containers. Here is an example:

octelium connect -d --auth-token <TOKEN>
NOTE

The -d or --detached flag runs octelium connect in detached mode. On Linux, this is achieved via systemd-run. On Windows, this is achieved by running as a Windows service. On MacOS, this is achieved by simply running the binary in the background.

If you do not wish to run the binary in detached mode, you can run it instead as follows (currently available in Linux and MacOS):

sudo -E octelium connect

As you will see later in this guide here, you can also run octelium connect as an ordinary unprivileged user using the gVisor Netstack mode as follows:

octelium connect
NOTE

When the gVisor mode is used, you have to explicitly publish or map Services to your host in order to be able to access them as we will see below.

Mapping Services to Host

In some cases, you might want to have something similar to what Docker does where the container port is mapped or published to the host. Octelium enables you to map/publish a Service port to the localhost via the -p or --publish flag.

Here is an example that maps the HTTP Service myapi of the default Namespace to your localhost port 8080:

octelium connect -p myapi:8080

Here is another example that maps the HTTP Service myapi of the ns1 Namespace to your localhost port 8080:

octelium connect -p myapi.ns1:8080

Now, once you're connected, you can access the Service as follows:

curl localhost:8080

You can also publish several Services or several ports of the same Service if it is a multi-port Service. Here is an example:

octelium connect -p svc1:8080 -p svc2:8081 -p svc3.ns1:8082

Serving Services

By default, connecting to a Cluster does not serve any of the Services that can be served by the owning User (read more here). To serve one, several, or all Services assigned to the User, you can use the --serve and --serve-all flags.

Here is an example where you want to serve the Service svc1 which belongs to the default Namespace:

octelium connect --serve svc1

Here is another example where you want to serve the Service svc1 that belongs to the ns1 Namespace:

octelium connect --serve svc1.ns1

You can also serve multiple Services as follows:

octelium connect --serve svc1 --serve svc2 --serve svc1.ns1

If you want to serve all Services that are available to your User, you can do so using the --serve-all flag as follows:

octelium connect --serve-all

Tunnel Implementation

NOTE

You're generally not required to understand the different modes used by Octelium to run WireGuard. By default, the octelium connect command automatically uses whatever suits the environment it runs in.

The octelium CLI tool supports different tunnel implementation modes depending on the environment it runs within. For example, on Linux it will first try to use the kernel implementation. If the kernel module is not installed or loaded, it will fall back to the wireguard-go TUN device implementation. If it does not have the permissions to create a TUN device (i.e. it is not running as root and does not have the NET_ADMIN capability), it will create its own userspace TCP/IP stack using gVisor's Netstack and run as an ordinary user.

  1. Kernel mode: Currently supported on Linux 5.6 or later, as well as on any older version that has the WireGuard kernel module installed. This is by far the most performant mode. It requires the root user or the NET_ADMIN capability.

  2. TUN device mode: This mode uses the wireguard-go TUN device-based implementation and therefore supports all platforms. It requires the root user or the NET_ADMIN capability (and possibly the MKNOD capability if the /dev/net/tun device file does not exist on the system) on Linux, the root user on MacOS, or an admin user on Windows in order to create a TUN device and set up the WireGuard link. This mode can be explicitly forced over the kernel mode as follows:

octelium connect --implementation tun

  3. Userspace mode over gVisor Netstack: This mode does not need any OS privileges and can run as an ordinary user on any platform. However, since no WireGuard virtual network device runs on the host OS in this mode, the only way to access a Service is to map it to the host as illustrated above. This mode is also less performant than the kernel and TUN device modes. It can be explicitly forced over them as follows:

octelium connect --implementation gvisor
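As a rough illustration of the fallback order above, the following sketch probes the same prerequisites on Linux. It is purely illustrative; octelium performs its own detection internally, and the function name here is hypothetical:

```shell
# Illustrative sketch of octelium's implementation fallback order on Linux.
# detect_implementation is a hypothetical helper, not part of the octelium CLI.
detect_implementation() {
  # 1. Kernel mode needs the WireGuard kernel module to be available.
  if modinfo wireguard >/dev/null 2>&1; then
    echo "kernel"
  # 2. TUN device mode needs /dev/net/tun (plus NET_ADMIN or root).
  elif [ -c /dev/net/tun ]; then
    echo "tun"
  # 3. The gVisor Netstack mode needs no OS privileges at all.
  else
    echo "gvisor"
  fi
}

detect_implementation
```

The checks mirror, in order, the kernel, TUN device, and gVisor Netstack modes described above.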

Containers

The official octelium CLI as a container can be pulled as follows:

docker pull ghcr.io/octelium/octelium

You can now connect to the Cluster using an authentication token Credential as follows:

docker run --cap-add NET_ADMIN ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>

If you want to force the TUN mode, you have to either add the host device /dev/net/tun as follows:

docker run --cap-add NET_ADMIN --device /dev/net/tun ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>

Or you can add the MKNOD capability instead as follows:

docker run --cap-add NET_ADMIN --cap-add MKNOD ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>

Kubernetes

Running Octelium inside Kubernetes is not really any different from running it in containers. You might want to use it as a sidecar container for your microservices to access Services. You can also use it to serve Services. You can do both. Here is an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: octelium
spec:
  selector:
    matchLabels:
      run: octelium
  template:
    metadata:
      labels:
        run: octelium
    spec:
      containers:
        - name: octelium
          image: ghcr.io/octelium/octelium
          command: ["octelium"]
          args: ["connect"]
          env:
            - name: OCTELIUM_DOMAIN
              value: <DOMAIN>
            - name: OCTELIUM_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: <K8S_SECRET_NAME>
                  key: data
          securityContext:
            capabilities:
              # You might also need to add the "MKNOD" capability if you're explicitly using the TUN mode
              add: ["NET_ADMIN"]

If the container serves one or more busy Services, you might want to increase the number of Kubernetes Deployment replicas to spread the load. By default, Octelium Clusters use round-robin load balancing among a Service's upstreams.
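For instance, the load could be spread across three serving Sessions by raising the replica count in the Deployment manifest (the value 3 here is purely illustrative):

```yaml
# Illustrative fragment: run three replicas of the octelium client Deployment
# so that the Cluster load balances among three serving Sessions.
spec:
  replicas: 3
```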

DNS

Each Service has the private FQDN <SERVICE>.<NAMESPACE>.local.<DOMAIN> (additionally <SERVICE>.local.<DOMAIN>, for a shorter FQDN, if it belongs to the default Namespace). By default, the octelium CLI automatically sets up a private split-DNS server in order to resolve queries belonging to the Cluster, setting the suffix v.<DOMAIN> as a search domain for the private DNS server. On Linux, this requires running as root since it uses resolvectl under the hood. On MacOS, the networksetup utility is used.

You can choose not to use the private DNS at all by using the --no-dns flag.

Local DNS Server

NOTE

This feature is not currently enabled by default, except when running octelium inside containers, but it will eventually be enabled by default.

By default, the octelium client sets the host's DNS server to the Cluster DNS server, which is simply an ordinary Cluster Service that is accessible to all Users. However, this setup might not be the most stable and performant way to resolve the Cluster Service domain names, for the following reasons, which create the need for a proxy local DNS server that runs inside the connected octelium client:

  1. The Cluster DNS Service has stable, but not static, addresses. While the octelium client is able to synchronize the DNS addresses with the host in most cases, there are other cases, such as deploying octelium as a sidecar container in Kubernetes, where it is impossible for it to access the filesystems of the other containers in the same Pod to update their /etc/resolv.conf files.

  2. When connecting to the Cluster in IPv6-only or IPv4-only mode, some applications on the host accessing the Cluster's Services might prefer to resolve domain names for the other mode (i.e. resolving IPv4 in IPv6-only mode, or vice versa). Since the Cluster's DNS always responds to such DNS queries, the application then finds itself unable to access the Service. The local DNS server solves this by simply refusing to answer A queries when connecting in IPv6-only mode, and AAAA queries when connecting in IPv4-only mode.

  3. The Local DNS server applies simple caching from within the host to accelerate access for recently used domain names.

You can use the local DNS server by enabling the --localdns flag as follows:

octelium connect --localdns

By default, the local DNS server listens on the address 127.0.0.100:53 which is accessible by any application on the host. However, you can override that address as follows:

octelium connect --localdns --localdns-addr 127.0.0.127:53

Scopes

If you're connecting to the Cluster and authenticating at the same time (i.e. via the --auth-token or --assertion flags), you might also add scopes via the --scope flag. Scopes act as a simple self-enforced authorization mechanism that can be used to further limit the scope of permissions for a Session. You can read more about scopes here.

Serving Embedded SSH

You can serve embedded SSH from the octelium client (read more about embedded SSH here) to the Users authorized by the Cluster via the --essh flag as follows:

octelium connect --essh

If you are running octelium connect as root, you can also explicitly set the host user as follows:

octelium connect --essh --essh-user ubuntu

This forces Users SSH'ing into the host to run as the host user ubuntu instead of root.

Layer-3 Mode

By default, octelium will automatically try to use the IPv6-only mode (i.e. only use a private IPv6 address for the WireGuard/QUIC network device connecting to the Cluster) unless the Cluster does not support IPv6 private networking. You can, however, override that default behavior via the --ip-mode flag. You can force the IPv4-only mode as follows:

octelium connect --ip-mode v4

You can also force a dual-stack mode (i.e. both IPv6 and IPv4) as follows:

octelium connect --ip-mode both

QUIC Mode

By default, Octelium uses WireGuard for tunneling the traffic between the client and the Cluster. The Cluster currently also supports a QUIC-based tunneling mode, which is extremely experimental and not recommended for production use. If the QUIC mode is enabled by the Cluster, you can use it as follows:

octelium connect --tunnel-mode quicv0

Disconnecting from a Cluster

You can disconnect from the Cluster as follows:

octelium disconnect