Install Cluster
Pre-Installation Considerations
NOTE

You only need to consider the information below if you are going to install an Octelium Cluster using the octops init command on an already active, production-ready Kubernetes cluster. Check out the quick installation guide here to install a single-node Octelium Cluster.

Cluster Domain

A Cluster is defined and addressed by its domain. The Cluster domain can be any FQDN whether it's a root domain or a subdomain (e.g. example.com, octelium.example.com, sub.sub.example.com, etc...) of a domain name that you own.

Architecture

Starting from version v0.27.0, Octelium Cluster supports both x86_64/amd64 and arm64/aarch64 architectures. Octelium CLIs (namely octelium, octeliumctl and octops) have always supported both architectures.

Kubernetes

Namespace

The Octelium Cluster uses the octelium Kubernetes namespace for itself and its components. During the Cluster installation, this namespace is automatically created by the Cluster installer. If it already exists, it will be deleted and then created again.

Data Plane and Control Plane Nodes

A single Octelium Cluster works on top of Kubernetes. For personal, undemanding or non-production environments, an Octelium Cluster can work perfectly on top of a single-node Kubernetes cluster installed on a single cloud VM such as a DigitalOcean droplet or an EC2 instance.

In production, the Octelium Cluster should be deployed on top of a scalable multi-node Kubernetes cluster, whether on-prem or managed (e.g. GCP GKE, DigitalOcean managed Kubernetes, etc...), since the Octelium Cluster includes various components that can be classified into data-plane and control-plane components that should run on separate Kubernetes nodes. The Cluster uses data-plane nodes as Gateways to host the Services, and control-plane nodes to host control-plane components such as Nocturne. The minimum requirement is a single node for the control plane and another for the data plane; from there you can scale up your data-plane nodes as your Services, Users and traffic grow. You should use an instance with at least 2 vCPUs and 2GB RAM for each node.

NOTE

Octelium's control plane has nothing to do with Kubernetes control plane whose nodes usually have the label node-role.kubernetes.io/control-plane. In other words, an Octelium control-plane Kubernetes node is an ordinary Kubernetes worker node that is used for Octelium's own control plane.

Prior to the installation, you must label your data-plane nodes with the key octelium.com/node-mode-dataplane and an empty value as follows:

kubectl label nodes <NODE_NAME> octelium.com/node-mode-dataplane=

Similarly, control-plane nodes must have a label with the key octelium.com/node-mode-controlplane and an empty value as follows:

kubectl label nodes <NODE_NAME> octelium.com/node-mode-controlplane=
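After labeling, you can verify that both labels were applied as expected (a quick check; the node names will differ in your cluster):

```shell
# List nodes labeled as Octelium data-plane nodes
kubectl get nodes -l octelium.com/node-mode-dataplane

# List nodes labeled as Octelium control-plane nodes
kubectl get nodes -l octelium.com/node-mode-controlplane
```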

Additionally, while not required, it is generally recommended to taint every data-plane node with the key octelium.com/gateway-init and the effect NoSchedule as follows:

kubectl taint nodes <NODE_NAME> octelium.com/gateway-init=true:NoSchedule

This taint defers the scheduling of Service pods on a newly created data-plane node until the Gateway Agent has been successfully initialized on it. This is especially useful when scaling up your data-plane nodes after the Cluster installation, since it allows you to add more Gateways at runtime while ensuring that the Gateway Agent is running on the new nodes before the Kubernetes scheduler places any Service pods on them.
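You can inspect a node's taints to confirm that the taint above was applied (a quick check; the node name is a placeholder):

```shell
# Print the taints of a given node as JSON
kubectl get node <NODE_NAME> -o jsonpath='{.spec.taints}'
```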

Network Plugin

The Octelium Cluster requires that your Kubernetes cluster use a network plugin. There are free and open source CNI options such as Cilium or Calico. The Octelium Cluster currently also requires Multus CNI to be installed. If your Kubernetes cluster uses containerd as the container runtime (which is the default option in many Kubernetes deployments), you need to install Multus as follows:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml

However, if your Kubernetes cluster uses CRI-O instead of containerd, then you need to install Multus as follows:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-crio.yml
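Either way, you can verify that the Multus DaemonSet pods are up before proceeding (a quick check; the app=multus pod label matches the upstream Multus manifests but may differ in other versions):

```shell
# The Multus daemonset runs in kube-system in the upstream manifests
kubectl get pods -n kube-system -l app=multus -o wide
```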

Octelium deploys the GatewayAgent DaemonSet on every Kubernetes data-plane node. The GatewayAgent mounts the node host's /etc/cni in order to write the NetworkAttachmentDefinition config that Multus uses to install the additional link/device used by Vigil pods.

Finally, note that the standard bridge and host-local plugins need to be present in the CNI bin directory, which is usually /opt/cni/bin. Such standard plugins are usually installed automatically by the CNI plugin or by cloud-based Kubernetes installations.
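You can quickly check on each node that the standard plugins are present (a sketch; the CNI bin directory path may differ in your distribution):

```shell
# Verify that the bridge and host-local plugins exist in the CNI bin directory
ls /opt/cni/bin | grep -E '^(bridge|host-local)$'
```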

Cilium

In order for Multus to work properly with Cilium, you need to make sure that Cilium is operating as a non-exclusive CNI. This behavior can be achieved by using --set cni.exclusive=false flag when installing Cilium via the cilium install command.
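For example, assuming you install Cilium with the cilium CLI (or manage it via its Helm chart), the flag is passed as follows:

```shell
# Install Cilium as a non-exclusive CNI so that Multus can chain additional plugins
cilium install --set cni.exclusive=false

# Or, for an existing Helm-managed Cilium installation
helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set cni.exclusive=false
```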

Node Public IP Address

This step is usually not required. However, if your Kubernetes cluster's data-plane nodes are behind NAT, or if a node has multiple IP addresses belonging to multiple Linux network devices, you might want to manually inform the Cluster of that node's public IP address, to be used for WireGuard by the Gateway, as follows:

kubectl annotate node <NODE_NAME> octelium.com/override-gw-ip=<PUBLIC_IP>

If you do not manually provide the node public IP address via the kubectl annotate node command, Octelium's Gateway Agent will try to find that public IP on startup. Octelium always prioritizes the manually set annotation values over attempting to obtain the public IP on its own.
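You can read the annotation back to confirm what the Gateway will use (a quick check; note the escaped dots in the jsonpath key):

```shell
# Print the manually set public IP override, if any
kubectl get node <NODE_NAME> -o jsonpath='{.metadata.annotations.octelium\.com/override-gw-ip}'
```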

Ingress

Octelium uses a component called Ingress that is exposed publicly to the internet by being deployed as a Kubernetes LoadBalancer service listening on TCP port 443. Most managed Kubernetes providers automatically create load balancers for such services, so you most probably do not have to do anything here unless you are using a custom Kubernetes installation (e.g. on-prem, or single-node Kubernetes clusters on AWS EC2 or GCP Compute Engine).
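After installation, you can check whether the load balancer has been provisioned and has received an external IP (a quick check; the exact service names in the octelium namespace may vary between versions):

```shell
# Look for a service of type LoadBalancer with an EXTERNAL-IP assigned
kubectl get svc -n octelium -o wide
```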

Front Proxy

By default, Octelium's Ingress data plane, implemented by Envoy, listens over TLS and performs TLS termination. You can serve the Octelium Ingress behind an L7 front proxy/load balancer that terminates TLS and proxies the traffic to the Octelium Ingress Kubernetes service, located at octelium-ingress-dataplane.octelium.svc:8080, by setting the OCTELIUM_FRONT_PROXY_MODE environment variable to true before running the octops init and octops upgrade commands. Note that you need to set OCTELIUM_FRONT_PROXY_MODE to true for every octops upgrade in order to keep using the front proxy mode. Here is an example:

export OCTELIUM_FRONT_PROXY_MODE=true
octops init example.com

SPIFFE Support

Starting from version v0.24.0, the different Cluster-side components can communicate with one another over mTLS using X.509-SVIDs (read more here). You can enable SPIFFE support by setting the environment variable OCTELIUM_ENABLE_SPIFFE_CSI to true before running the octops init and octops upgrade commands. Here is an example:

export OCTELIUM_ENABLE_SPIFFE_CSI=true
octops init example.com

When enabled, Octelium uses the csi.spiffe.io CSI driver by default. You can, however, override the CSI driver by setting the OCTELIUM_SPIFFE_CSI_DRIVER environment variable before running the octops init and octops upgrade commands. Here is an example:

export OCTELIUM_SPIFFE_CSI_DRIVER=csi.spiffe.example.com
octops init example.com

By default, Octelium components trust any certificate. You can specify your trust domain via the OCTELIUM_SPIFFE_TRUST_DOMAIN environment variable. Here is an example:

export OCTELIUM_SPIFFE_TRUST_DOMAIN=example.com
octops init example.com

Data Store

Octelium currently uses 2 types of data stores:

  1. PostgreSQL is used as the primary store for all of its resources.
  2. Redis is used as the secondary store; it is used for caching as well as acting as a pub/sub infrastructure to send events among the different Cluster components.

Bootstrap Config File

A Bootstrap file is the sole source of truth for all the configs needed to properly install and initialize the Cluster, such as configs related to the primary store (i.e. PostgreSQL database) and secondary store (i.e. Redis database) as well as network-related configs (e.g. private network ranges, etc...). You can read everything about the Bootstrap config in detail here.

Private Network Ranges

The Cluster's network range is the private network range that encompasses all ranges used by the Cluster's Services as well as the private IP addresses assigned to Users whenever they connect via the octelium CLI tool. By default, this range is dual-stack, but you can override the mode in the Bootstrap config file to become IPv4-only or IPv6-only. Moreover, the Bootstrap config lets you override both the IPv4 and IPv6 network ranges. You can read more about network range-related configurations here.

Inbound Traffic for Gateways

The Cluster Kubernetes data-plane nodes must have public IP addresses in order for the WireGuard devices maintained by Octelium Gateway Agents to receive traffic coming from the connected Users using the octelium CLI tool over the internet. By default, Octelium currently uses the 53820 UDP port for WireGuard; however, this port number can be overridden from within the Bootstrap configuration (read more here).
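Make sure that UDP port (53820 by default) is actually reachable on your data-plane nodes. For example, on a node using ufw (a sketch; your firewall or cloud security-group tooling may differ):

```shell
# Allow inbound WireGuard traffic on the default Octelium port
sudo ufw allow 53820/udp
```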

NOTE

The agent (i.e. the octelium CLI tool) does not need open ports or internet gateways for ingress (i.e. only an outbound internet connection is needed, so running behind NAT is perfectly fine). In other words, the above requirements apply only to the Kubernetes data-plane nodes in the Cluster itself, not to the Users' side.

WireGuard Kernel Module

For maximum performance, we recommend that you have WireGuard kernel modules installed and loaded on your Cluster data-plane nodes before the installation. If Octelium cannot find the kernel module installed on the node, it will automatically fall back to using the userspace implementation wireguard-go.
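You can check whether the WireGuard kernel module is available and loaded on a node as follows (a quick check; on most modern distributions WireGuard has been built into the kernel since Linux 5.6):

```shell
# Check whether the wireguard module is currently loaded
lsmod | grep wireguard

# Attempt to load it if it is available but not loaded
sudo modprobe wireguard
```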

QUIC Mode

By default, the Cluster uses WireGuard for tunneling. Octelium also supports an experimental QUIC-based tunneling mode, which is currently not recommended in production. You can enable the QUIC mode in the Bootstrap config file as shown here.
