Connecting to a Cluster
Before you connect to a Cluster, you might want to check out the available Namespaces and Services in the Cluster as follows:
export OCTELIUM_DOMAIN=example.com

# List the Cluster Services
octelium get service
# Or simply
octelium get svc

# List the Cluster Namespaces
octelium get namespace
# Or simply
octelium get ns
Read more about the OCTELIUM_DOMAIN environment variable and the --domain flag here.
Now you can connect to the Cluster very simply as follows:
octelium connect -d
If you are not already logged in to the Cluster, a browser window will open for you to authenticate yourself. You can read more about logging in here. You can also log in directly using the octelium connect command with an authentication token Credential. This can be useful in non-interactive environments such as containers. Here is an example:
octelium connect -d --auth-token <TOKEN>
The -d or --detached flag runs octelium connect in detached mode. On Linux, this is achieved via systemd-run. On Windows, this is achieved by running as a Windows service. On MacOS, this is achieved by simply running the binary in the background.
If you do not wish to run the binary in detached mode, you can run it in the foreground instead as follows (currently available on Linux and MacOS):
sudo -E octelium connect
As you will see later in this guide here, you can also run octelium connect as an ordinary unprivileged user using the gVisor Netstack mode as follows:
octelium connect
When the gVisor mode is used, you have to explicitly publish or map Services to your host in order to be able to access them as we will see below.
Mapping Services to Host
In some cases, you might want to have something similar to what Docker does, where the container port is mapped or published to the host. Octelium enables you to map/publish a Service port to the localhost via the -p or --publish flag.
Here is an example where you map the HTTP Service myapi of the default Namespace to your localhost port 8080:
octelium connect -p myapi:8080
Here is another example that maps the HTTP Service myapi of the ns1 Namespace to your localhost port 8080:
octelium connect -p myapi.ns1:8080
Now, once you're connected, you can access the Service as follows:
curl localhost:8080
You can also publish several Services or several ports of the same Service if it is a multi-port Service. Here is an example:
octelium connect -p svc1:8080 -p svc2:8081 -p svc3.ns1:8082
Serving Services
By default, connecting to a Cluster does not serve any of the Services that can be served by the owning User (read more here). To serve one, multiple, or all Services assigned to the User, you can use the --serve and --serve-all flags.
Here is an example where you serve the Service svc1, which belongs to the default Namespace:
octelium connect --serve svc1
Here is another example where you serve the Service svc1, which belongs to the ns1 Namespace:
octelium connect --serve svc1.ns1
You can also serve multiple Services as follows:
octelium connect --serve svc1 --serve svc2 --serve svc1.ns1
If you want to serve all Services that are available to your User, you can do so using the --serve-all flag as follows:
octelium connect --serve-all
Tunnel Implementation
You are generally not required to understand the different modes used by Octelium to run WireGuard. By default, the octelium connect command automatically uses whatever suits the environment it runs from.

The octelium CLI tool supports different tunnel implementation modes depending on the environment it runs within. For example, on Linux it will first try to use the kernel implementation; if the kernel module is not installed or loaded, it will try to use the wireguard-go TUN device implementation; and if it does not have permission to create a TUN device (i.e. it is not running as root and does not have the NET_ADMIN capability), it will create its own userspace TCP/IP stack using gVisor's Netstack and run as an ordinary user.
- Kernel mode: Currently supported on Linux 5.6 or later, as well as any older version that has the WireGuard kernel module installed. This is by far the most performant mode. It requires the root user or the NET_ADMIN capability.
- TUN device mode: This mode uses the wireguard-go TUN device-based implementation and therefore supports all platforms. It requires the root user or, specifically, the NET_ADMIN capability (and possibly MKNOD if no /dev/net/tun device file exists on the system) on Linux, the root user on MacOS, or an admin user on Windows to create a TUN device and set up the WireGuard link. This mode can be explicitly forced over the kernel mode as follows:
octelium connect --implementation tun
- Userspace mode over gVisor Netstack: This mode does not need any OS privileges. It can run as an ordinary user on any platform, but in that case, since there is no WireGuard virtual network device running on the host OS, the only way to access a Service is by mapping it to the host as illustrated above. This mode is also less performant than the kernel and TUN device modes. It can be explicitly forced over the kernel and TUN device modes as follows:
octelium connect --implementation gvisor
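The Linux fallback order described above (kernel, then TUN device, then gVisor Netstack) can be sketched roughly as follows. This is purely illustrative; the function name and its yes/no inputs are hypothetical and are not part of the octelium CLI:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the implementation-selection fallback on Linux.
# Inputs: "yes"/"no" for kernel-module availability and TUN-creation permission.
pick_implementation() {
  local kernel_module="$1" can_create_tun="$2"
  if [ "$kernel_module" = "yes" ]; then
    echo "kernel"    # WireGuard kernel module is available: most performant
  elif [ "$can_create_tun" = "yes" ]; then
    echo "tun"       # wireguard-go TUN device implementation
  else
    echo "gvisor"    # userspace gVisor Netstack: no privileges needed
  fi
}

pick_implementation yes yes   # prints kernel
pick_implementation no yes    # prints tun
pick_implementation no no     # prints gvisor
```

The --implementation flag shown above simply overrides this automatic selection.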
Containers
The official octelium CLI as a container can be pulled as follows:
docker pull ghcr.io/octelium/octelium
You can now connect to the Cluster using an authentication token Credential as follows:
docker run --cap-add NET_ADMIN ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
If you want to force the TUN mode, you have to either add the host device /dev/net/tun as follows:
docker run --cap-add NET_ADMIN --device /dev/net/tun ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
Or you can add the MKNOD capability instead as follows:
docker run --cap-add NET_ADMIN --cap-add MKNOD ghcr.io/octelium/octelium connect --domain <DOMAIN> --auth-token <TOKEN>
Kubernetes
Running Octelium inside Kubernetes is not really any different from running it in containers. You might want to use it as a sidecar container for your microservices to access Services. You can also use it to serve Services. You can do both. Here is an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: octelium
spec:
  selector:
    matchLabels:
      run: octelium
  template:
    metadata:
      labels:
        run: octelium
    spec:
      containers:
        - name: octelium
          image: ghcr.io/octelium/octelium
          command: ["octelium"]
          args: ["connect"]
          env:
            - name: OCTELIUM_DOMAIN
              value: <DOMAIN>
            - name: OCTELIUM_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: <K8S_SECRET_NAME>
                  key: data
          securityContext:
            capabilities:
              # You might also need to add the "MKNOD" capability if you're explicitly using the TUN mode
              add: ["NET_ADMIN"]
If the container serves one or more busy Services, you might want to increase the number of Kubernetes Deployment replicas to load-balance the traffic. By default, Octelium Clusters use round-robin load balancing among the Service upstreams.
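Round-robin selection simply cycles through the upstreams in order and wraps around. A minimal sketch of the idea, with hypothetical replica names:

```shell
#!/usr/bin/env bash
# Minimal sketch of round-robin selection among hypothetical upstream replicas.
upstreams=("replica-1" "replica-2" "replica-3")
i=0
next_upstream() {
  # Pick the current upstream, then advance the counter (wrapping around).
  echo "${upstreams[$((i % ${#upstreams[@]}))]}"
  i=$((i + 1))
}

next_upstream   # prints replica-1
next_upstream   # prints replica-2
next_upstream   # prints replica-3
next_upstream   # prints replica-1
```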
DNS
Each Service has the private FQDN <SERVICE>.<NAMESPACE>.local.<DOMAIN> (additionally <SERVICE>.local.<DOMAIN>, a shorter FQDN, if it belongs to the default Namespace). The octelium CLI by default automatically sets up a private split DNS server in order to resolve queries belonging to the Cluster, by setting the suffix v.<DOMAIN> as a search domain for the private DNS server. On Linux, this requires running as root since it uses resolvectl under the hood. On MacOS, the networksetup utility is used.
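The FQDN scheme above can be illustrated with a tiny shell helper; the function name svc_fqdn is hypothetical, purely to show how the names are formed:

```shell
#!/usr/bin/env bash
# Hypothetical helper illustrating the private FQDN scheme:
# <SERVICE>.<NAMESPACE>.local.<DOMAIN>, with the Namespace
# omitted for the shorter form when it is "default".
svc_fqdn() {
  local svc="$1" ns="$2" domain="$3"
  if [ "$ns" = "default" ]; then
    echo "${svc}.local.${domain}"
  else
    echo "${svc}.${ns}.local.${domain}"
  fi
}

svc_fqdn myapi default example.com   # prints myapi.local.example.com
svc_fqdn myapi ns1 example.com       # prints myapi.ns1.local.example.com
```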
You can choose not to use the private DNS at all by using the --no-dns flag.
Local DNS Server
This feature is not currently enabled by default, except when running octelium inside containers, but it will eventually be enabled by default.
By default, the octelium client sets the host's DNS server to the Cluster DNS server, which is simply an ordinary Cluster Service that is accessible to all Users. However, this setup might not be the most stable and performant way to resolve the Cluster Service domain names, for the following reasons, hence the need for a proxy local DNS server that runs inside the connected octelium client:

- The Cluster DNS Service has stable addresses, but not static ones. While the octelium client is able to synchronize the DNS addresses with the host in most cases, there are other cases, such as deploying octelium as a sidecar container in Kubernetes, where it is impossible for it to access the filesystems of the other containers in the same pod to update their /etc/resolv.conf files.
- When connecting to the Cluster in IPv6-only or IPv4-only mode, some applications on the host accessing the Cluster's Services might prefer to resolve domain names for the other mode (i.e. resolving IPv4 in IPv6-only mode or vice versa); the Cluster's DNS will always respond to such DNS queries, and the application then finds itself unable to access the Service. The local DNS server solves this by simply refusing to answer A queries when connecting in IPv6-only mode, or AAAA queries when connecting in IPv4-only mode.
- The local DNS server applies simple caching from within the host to accelerate access for recently used domain names.
You can use the local DNS server by enabling the --localdns flag as follows:
octelium connect --localdns
By default, the local DNS server listens on the address 127.0.0.100:53, which is accessible by any application on the host. However, you can override that address as follows:
octelium connect --localdns --localdns-addr 127.0.0.127:53
Scopes
If you're connecting to the Cluster and authenticating at the same time (i.e. via the --auth-token or --assertion flags), you might also add scopes via the --scope flag. Scopes act as a simple self-enforced authorization mechanism that can be used to further limit the scope of permissions for a Session. You can read more about scopes here.
Serving Embedded SSH
You can serve embedded SSH from the octelium client (read more about embedded SSH here) to the Users authorized by the Cluster via the --essh flag as follows:
octelium connect --essh
If you are running octelium connect as root, you can also explicitly set the host user as follows:
octelium connect --essh --essh-user ubuntu
This forces Users SSH'ing into the host to run as the host user ubuntu instead of root.
Layer-3 Mode
By default, octelium will automatically try to use IPv6-only mode (i.e. only use a private IPv6 address for the WireGuard/QUIC network device connecting to the Cluster) unless the Cluster does not support IPv6 private networking. You can, however, override that default behavior via the --ip-mode flag. You can force an IPv4-only mode as follows:
octelium connect --ip-mode v4
You can also force a dual-stack mode (i.e. both IPv6 and IPv4) as follows:
octelium connect --ip-mode both
QUIC Mode
By default, Octelium uses WireGuard for tunneling the traffic between the client and the Cluster. The Cluster also currently supports a QUIC-based tunneling mode, which is extremely experimental and not recommended for production use. If the QUIC mode is enabled by the Cluster, you can use it as follows:
octelium connect --tunnel-mode quicv0
Disconnecting from a Cluster
You can disconnect from the Cluster as follows:
octelium disconnect