Cluster Domain
A Cluster is defined and addressed by its domain. The Cluster domain can be any FQDN, whether a root domain or a subdomain (e.g. example.com, octelium.example.com, sub.sub.example.com, etc.) of a domain name that you own.
Kubernetes
Data Plane and Control Plane Nodes
A single Octelium Cluster works on top of Kubernetes. For personal, undemanding, or non-production environments, an Octelium Cluster can work perfectly well on top of a single-node Kubernetes cluster installed on a single cloud VM such as a DigitalOcean droplet or an EC2 instance.
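For instance, one common way to stand up such a single-node cluster on a fresh VM is a lightweight distribution like k3s (shown here purely as an illustration; any conformant Kubernetes distribution works):
# Install a single-node Kubernetes cluster using k3s (illustrative choice)
curl -sfL https://get.k3s.io | sh -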
In production, an Octelium Cluster should be deployed on top of a scalable multi-node Kubernetes cluster, whether on-prem or managed (e.g. GCP GKE, DigitalOcean managed Kubernetes, etc.), since the Octelium Cluster includes various components, classified into data plane and control plane, that should run on separate Kubernetes nodes. The Cluster uses data-plane nodes as Gateways to host the Services, and control-plane nodes to host control-plane components such as Nocturne. The minimum requirement is a single node for the control plane and another for the data plane; from there you can scale up your data-plane nodes as your Services, Users, and traffic grow. You should use an instance with at least 2 vCPUs and 2GB of RAM for each node.
Prior to the installation, you must label your data plane nodes with the key octelium.com/node-mode-dataplane and an empty value as follows:
kubectl label nodes <NODE_NAME> octelium.com/node-mode-dataplane=
Similarly, control plane nodes must have a label with the key octelium.com/node-mode-controlplane and an empty value as follows:
kubectl label nodes <NODE_NAME> octelium.com/node-mode-controlplane=
Additionally, you have to taint every data plane node with the key octelium.com/gateway-init and the effect NoSchedule as follows:
kubectl taint nodes <NODE_NAME> octelium.com/gateway-init=true:NoSchedule
This taint is used whenever you scale up your data plane: it defers the Kubernetes scheduler from placing Service pods on a newly created data plane node until the Gateway Agent is successfully initialized.
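Putting the two steps together, preparing a newly added data plane node before it starts hosting Services might look like the following minimal sketch (the node name node-dp-2 is a hypothetical placeholder):
# Hypothetical node name used for illustration
kubectl label nodes node-dp-2 octelium.com/node-mode-dataplane=
kubectl taint nodes node-dp-2 octelium.com/gateway-init=true:NoSchedule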
Network Plugin
The Octelium Cluster requires that your Kubernetes cluster uses a CNI network plugin. There are free and open source CNI options such as Cilium or Calico. The Octelium Cluster currently also requires Multus CNI to be installed. If your Kubernetes cluster uses containerd as the container runtime (which is the default option in many Kubernetes deployments), you need to install Multus as follows:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml
However, if your Kubernetes cluster uses CRI-O instead of containerd, then you need to install Multus as follows:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-crio.yml
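To verify that Multus is up after applying the manifest, you can check its DaemonSet pods; a minimal check, assuming the upstream manifest's default labels and the kube-system namespace:
kubectl -n kube-system get pods -l app=multus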
Nodes behind NAT
If your Kubernetes cluster data plane nodes are behind NAT and the default interfaces are set to a private IP, Octelium will first try to get the node public addresses by using the node.status.addresses[] entry whose type field is set to ExternalIP. If no such value exists, or it is set to a private IP, then prior to the Cluster installation or to adding new nodes, you must inform Octelium of the public IP of each node by annotating the node as follows:
kubectl annotate node <NODE_NAME> octelium.com/public-ip=<PUBLIC_IP>
Octelium always prioritizes the annotation values over trying to obtain the public IP on its own.
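To see whether your nodes already expose an ExternalIP address, you can query node.status.addresses[] directly; a sketch using kubectl's JSONPath output:
# Print each node's name and its ExternalIP address, if any
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'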
Ingress
Octelium uses a component called Ingress that is exposed publicly to the internet by being deployed as a Kubernetes LoadBalancer service listening on TCP port 443. Most managed Kubernetes providers automatically create load balancers for such services, so you most probably do not have to do anything here unless you are using a custom Kubernetes installation (e.g. on-prem, or single-node Kubernetes clusters on AWS EC2 or GCP Compute Engine).
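To confirm that a load balancer has actually been provisioned, you can list LoadBalancer services across all namespaces and check that an external IP has been assigned:
kubectl get svc -A | grep LoadBalancer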
Data Store
Octelium currently uses 2 types of data stores:
- PostgreSQL as the primary store for all of its resources.
- Redis as the secondary store, which serves both as a cache store and as the pub/sub infrastructure used to send events among the different Cluster components.
Bootstrap Config File
A Bootstrap file is the sole source of truth for all the configs needed to properly install and initialize the Cluster, such as configs related to the primary store (i.e. the PostgreSQL database) and the secondary store (i.e. the Redis database), as well as network related configs (e.g. private network ranges, etc.). You can read everything about the Bootstrap config in detail here.
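To give a rough idea of the shape of such a file, here is a deliberately hypothetical sketch; the field names below are illustrative placeholders, not the actual schema, so consult the Bootstrap reference linked above for the real field names:
# Hypothetical sketch only; all field names are placeholders, not the real schema
cat > bootstrap.yaml <<'EOF'
spec:
  primaryStorage:
    postgresql:
      host: postgres.example.com   # your PostgreSQL host (placeholder)
      port: 5432
      username: octelium
      password: CHANGE_ME
      database: octelium
  secondaryStorage:
    redis:
      host: redis.example.com      # your Redis host (placeholder)
      port: 6379
  network:
    v4RangePrefix: 100.64.0.0/10   # example private IPv4 range (placeholder)
EOF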
Private Network Ranges
The Cluster's network range is the private network range that encompasses all ranges used by the Cluster's Services as well as the private IP addresses assigned to Users whenever they are connected via the octelium CLI tool. By default, this range is dual-stack, but you can override the mode in the Bootstrap config file to become IPv4-only or IPv6-only. Moreover, the Bootstrap config lets you override both the IPv4 and IPv6 network ranges. You can read more about network range-related configurations here.
Inbound Traffic for Gateways
The Cluster Kubernetes data plane nodes must have public IP addresses in order for the WireGuard devices maintained by Octelium Gateway Agents to receive traffic coming over the internet from Users connected via the octelium CLI tool. By default, Octelium currently uses UDP port 53820 for WireGuard; however, this port can be overridden from within the Cluster configuration (read more here).
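If your data plane nodes run a host firewall, you also need to allow inbound traffic on that port (along with an equivalent rule in any cloud security group or firewall in front of the nodes); for example, with ufw:
# Allow inbound WireGuard traffic on the default Octelium port
sudo ufw allow 53820/udp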
The agent (i.e. the octelium CLI tool) does not need open ports or internet gateways for ingress (i.e. only an outbound internet connection is needed, and thus running behind NAT is totally fine). In other words, the above requirements apply only to the Kubernetes data plane nodes in the Cluster itself, not to the Users' side.
WireGuard Kernel Module
For maximum performance, we recommend that you have the WireGuard kernel module installed and loaded on your Cluster data plane nodes before the installation. If Octelium cannot find the kernel module installed on the node, it will automatically fall back to using the userspace implementation wireguard-go.
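You can check whether the kernel module is available and load it on a given node as follows:
# Load the WireGuard kernel module and confirm it is present
sudo modprobe wireguard
lsmod | grep wireguard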
QUIC Mode
By default, the Cluster uses WireGuard for tunneling. Moreover, Octelium currently supports a very experimental QUIC-based tunneling mode. Using this QUIC-based mode is currently not recommended in production. You can enable the QUIC mode in the Bootstrap config file as shown here.