Pre-Installation Considerations
NOTE

You only need to consider the information below if you are going to install an Octelium Cluster using the octops init command on an already active, production-ready Kubernetes cluster. To quickly install a single-node Octelium Cluster instead, check out the quick installation guide here.

Cluster Domain

A Cluster is defined and addressed by its domain. The Cluster domain can be any FQDN of a domain name that you own, whether a root domain or a subdomain (e.g. example.com, octelium.example.com, sub.sub.example.com, etc...).

Kubernetes

Data Plane and Control Plane Nodes

A single Octelium Cluster works on top of Kubernetes. For personal, undemanding, or non-production environments, an Octelium Cluster can work perfectly on top of a single-node Kubernetes cluster installed on a single cloud VM such as a DigitalOcean droplet or an EC2 instance.

In production, an Octelium Cluster should be deployed on top of a scalable multi-node Kubernetes cluster, whether on-prem or managed (e.g. GCP GKE, DigitalOcean managed Kubernetes, etc...), since the Octelium Cluster includes various components that can be classified into data-plane and control-plane components, which should run on separate Kubernetes nodes. The Cluster uses data-plane nodes as Gateways to host the Services, and control-plane nodes to host control-plane components such as Nocturne. The minimum requirement is a single node for the control plane and another for the data plane; from there you can scale up your data-plane nodes as your Services, Users, and traffic grow. You should use an instance with at least 2 vCPUs and 2GB of RAM for each node.

Prior to the installation, you must label your data-plane nodes with a label whose key is octelium.com/node-mode-dataplane and whose value is empty, as follows:

kubectl label nodes <NODE_NAME> octelium.com/node-mode-dataplane=

Similarly, control-plane nodes must have a label whose key is octelium.com/node-mode-controlplane and whose value is empty, as follows:

kubectl label nodes <NODE_NAME> octelium.com/node-mode-controlplane=

Additionally, you have to taint every data-plane node with the key octelium.com/gateway-init and the effect NoSchedule as follows:

kubectl taint nodes <NODE_NAME> octelium.com/gateway-init=true:NoSchedule

This taint is used whenever you scale up your data plane: it defers the Kubernetes scheduler from placing Service pods on a newly created data-plane node until the Gateway Agent has been successfully initialized on it.
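
You can verify that the labels and taints have been applied before the installation; for example, as follows (the node name is a placeholder, and the jsonpath query is only a quick check):

kubectl get nodes -l octelium.com/node-mode-dataplane
kubectl get nodes -l octelium.com/node-mode-controlplane
kubectl get node <NODE_NAME> -o jsonpath='{.spec.taints}'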

Network Plugin

The Octelium Cluster requires that your Kubernetes cluster use a network plugin; free and open source CNI options include Cilium and Calico. The Octelium Cluster currently also requires Multus CNI to be installed. If your Kubernetes cluster uses containerd as the container runtime (which is the default in many Kubernetes deployments), you need to install Multus as follows:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml

However, if your Kubernetes cluster uses CRI-O instead of containerd, then you need to install Multus as follows:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-crio.yml
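
Before proceeding, you can verify that the Multus daemonset has rolled out successfully; for example, assuming the default manifests above, which deploy Multus into the kube-system namespace (the daemonset name may differ depending on the manifest version):

kubectl rollout status daemonset/kube-multus-ds -n kube-system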

Node Public IP Address

This step is usually not required. However, if your Kubernetes cluster's data-plane nodes are behind NAT, or if a node has multiple IP addresses belonging to multiple Linux network devices, you might want to manually inform the Cluster of that node's public IP address, to be used for WireGuard by the Gateway, as follows:

kubectl annotate node <NODE_NAME> octelium.com/public-ip=<PUBLIC_IP>

If you do not manually provide the node's public IP address via the kubectl annotate node command, Octelium's Gateway Agent will try to find that public IP on startup. Octelium always prioritizes manually set annotation values over attempting to obtain the public IP on its own.
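
You can check the annotation value that will be picked up by the Gateway Agent; for example, as follows:

kubectl get node <NODE_NAME> -o jsonpath='{.metadata.annotations.octelium\.com/public-ip}'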

Ingress

Octelium uses a component called Ingress that is exposed publicly to the internet by being deployed as a Kubernetes LoadBalancer Service listening on TCP port 443. Most managed Kubernetes providers automatically create load balancers for such Services, so you most probably do not have to do anything here unless you are using a custom Kubernetes installation (e.g. on-prem, or a single-node Kubernetes cluster on AWS EC2 or GCP Compute Engine).
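
After the installation, you can check that the load balancer has been provisioned and obtain its external IP address (typically the address your Cluster domain's DNS records point at). For example, assuming the Cluster components are deployed in the octelium namespace (an assumption; check your installation), look for the EXTERNAL-IP column of the LoadBalancer Service:

kubectl get svc -n octelium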

Data Store

Octelium currently uses two types of data stores (a provisioning sketch follows this list):

  1. PostgreSQL is used as the primary store for all of its resources.
  2. Redis is used as the secondary store, for caching as well as acting as a pub/sub infrastructure to send events among the different Cluster components.
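
For example, one way to provision these two stores for a non-production setup is via the Bitnami Helm charts. This is only an illustrative sketch under the assumption that you run both stores inside the same Kubernetes cluster; it is not a prescribed method, and the release names below are arbitrary:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install octelium-postgres bitnami/postgresql
helm install octelium-redis bitnami/redis

The connection details of whichever PostgreSQL and Redis instances you choose are then supplied to the Cluster via the Bootstrap config file described below.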

Bootstrap Config File

A Bootstrap file is the sole source of truth for all the configs needed to properly install and initialize the Cluster, such as configs related to the primary store (i.e. the PostgreSQL database) and the secondary store (i.e. the Redis database), as well as network-related configs (e.g. private network ranges, etc...). You can read everything about the Bootstrap config in detail here.

Private Network Ranges

The Cluster's network range is the private network range that encompasses all ranges used by the Cluster's Services as well as the private IP addresses assigned to Users whenever they connect via the octelium CLI tool. By default, this range is dual-stack, but you can override the mode in the Bootstrap config file to become IPv4-only or IPv6-only. Moreover, the Bootstrap config lets you override both the IPv4 and IPv6 network ranges. You can read more about network range-related configurations here.

Inbound Traffic for Gateways

The Cluster's Kubernetes data-plane nodes must have public IP addresses in order for the WireGuard devices maintained by the Octelium Gateway Agents to receive traffic coming over the internet from Users connected via the octelium CLI tool. By default, Octelium currently uses UDP port 53820 for WireGuard; however, this port number can be overridden from within the Bootstrap configuration (read more here).
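
Make sure this UDP port is reachable on your data-plane nodes, whether by opening it in your cloud firewall or security group or on the nodes themselves. For example, on a node that uses ufw (an illustrative sketch for the default port):

sudo ufw allow 53820/udp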

NOTE

The agent (i.e. the octelium CLI tool) does not need open ports or internet gateways for ingress (i.e. only an outbound internet connection is needed, and thus running behind NAT is perfectly fine). In other words, the above requirements apply only to the Kubernetes data-plane nodes in the Cluster itself, not to the Users' side.

WireGuard Kernel Module

For maximum performance, we recommend that you have the WireGuard kernel module installed and loaded on your Cluster's data-plane nodes before the installation. If Octelium cannot find the kernel module installed on a node, it will automatically fall back to the userspace implementation, wireguard-go.
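
For example, on a typical Linux node you can load the module and check that it is present as follows (WireGuard is built into mainline Linux kernels 5.6 and later; on older kernels, your distribution's wireguard package typically provides it):

sudo modprobe wireguard
lsmod | grep wireguard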

QUIC Mode

By default, the Cluster uses WireGuard for tunneling. Octelium also currently supports a very experimental QUIC-based tunneling mode. Using this QUIC-based mode is currently not recommended in production. You can enable the QUIC mode in the Bootstrap config file as shown here.
