This is a quick guide to installing a full-fledged single-node Octelium Cluster, which is good enough for development, personal, or undemanding production use cases.
To install a production, scalable multi-node Cluster over a typical cloud-based or on-premise Kubernetes installation, we recommend referring to this guide here.
Requirements
This guide has only two requirements:
- Having a cheap cloud server/VM instance (e.g. DigitalOcean Droplet, Hetzner server, AWS EC2, etc...) running a recent Linux distribution (e.g. Ubuntu 24.04 LTS or later, Debian 12+, etc...), preferably freshly installed, with at least 2GB of RAM, 2 vCPUs, and 20GB of disk storage as a sensible minimum.
- Having a domain or a subdomain of a domain name that you actually own (e.g. `example.com`, `octelium.example.com`, `sub.sub.example.com`, etc...). This domain is the Cluster's domain, since an Octelium Cluster is defined and addressed by its domain once installed (e.g. through the `octelium` and `octeliumctl` commands).
For testing and playground purposes, you can also install the Cluster locally on any Linux machine or a Linux VM inside a macOS or Windows machine (e.g. Podman machines, VirtualBox, etc...) and use `localhost` as the Cluster domain. In fact, there is already a playground GitHub repository that installs an Octelium Cluster inside a running GitHub Codespace. You can visit the playground repository here.
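For instance, a local, testing-only install might look like the following sketch, which assumes the same installation script described in the Installation section below:

```bash
# A testing-only install that uses localhost as the Cluster domain
./install-demo-cluster.sh --domain localhost
```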
Installation
Once you SSH into your VPS/VM as the Linux `root` user, you install the Cluster by running the following command:
```bash
curl -o install-demo-cluster.sh https://octelium.com/install-demo-cluster.sh
chmod +x install-demo-cluster.sh
# IMPORTANT: Replace <DOMAIN> with your actual domain/subdomain to be used as the Cluster domain
./install-demo-cluster.sh --domain <DOMAIN>
```
The script should take a few minutes to finish, depending on your VM's capabilities. Upon completion of the Cluster installation, you will see the following message, which includes an `octelium login` command at the end of it:
```
The Cluster installation is now complete!
You can start interacting with the Cluster once you set the Cluster TLS certificate and the public DNS.
For more information, you might want to visit the docs at https://octelium.com/docs
# ...
Once you set up your public DNS and Cluster TLS certificate,
use the following command to login and start interacting with the Cluster.
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
```
You can now copy that `octelium login` command to use later from your own machine to log in to the Cluster and begin using it.
If your VM/VPS is behind NAT, where the IP address of the machine's default/main network device is a private/internal IP address rather than its public IP address (as is the case with EC2 and GCP instances under default configurations), you will need to use the `--nat` flag as follows:
```bash
./install-demo-cluster.sh --domain <DOMAIN> --nat
```
If the VM/VPS is behind a firewall (e.g. EC2, Google GCP VMs, etc...), you will need to open TCP port `443` for Octelium ingress and UDP port `53820` for WireGuard.
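For cloud VMs, these ports are typically opened in the provider's console (e.g. EC2 security groups or GCP firewall rules). If the firewall runs on the VM itself, here is a minimal sketch assuming `ufw` manages the host firewall (an assumption; your distribution may use a different tool):

```bash
# Allow Octelium ingress over HTTPS (assuming ufw manages the host firewall)
sudo ufw allow 443/tcp
# Allow WireGuard tunneling
sudo ufw allow 53820/udp
```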
By default, the `latest` Cluster version is installed. You can install a specific Cluster version via the `--version` flag as follows:
```bash
./install-demo-cluster.sh --domain <DOMAIN> --version 0.1.2
```
By default, the Octelium Cluster uses only WireGuard for tunneling. You can additionally enable the experimental QUIC-based tunneling mode (read more here) via the `--quicv0` flag as follows:
```bash
./install-demo-cluster.sh --domain <DOMAIN> --quicv0
```
If your VPS is behind a firewall, you will also have to open UDP port `8443` for the QUIC listener. You also need to create an additional public DNS entry as shown in detail here.
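Likewise, on a host-managed firewall (the same `ufw` assumption as in the sketch above):

```bash
# Allow the experimental QUIC listener (assuming ufw manages the host firewall)
sudo ufw allow 8443/udp
```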
Post-Installation
To complete the installation and start interacting with the Cluster, two final steps are required:
- Setting the public DNS for the Cluster domain. You can do that by getting the newly installed cloud instance's public IP from your cloud provider dashboard and then using that IP value in your DNS provider (e.g. Cloudflare, Namecheap, GoDaddy, etc...) to set a DNS entry that points to your Cluster domain. You can read more here. For now, you need to add two DNS entries (a quick verification sketch follows at the end of this section):
  - An `A` entry to resolve `<DOMAIN>` to the VM/VPS's public IP address as follows:

| Entry Field | Value |
|---|---|
| Type | `A` |
| Name / Host | `<DOMAIN>` |
| Value | `<PUBLIC_IP_ADDRESS>` |
  - A `CNAME` entry resolving the wildcard domain `*.<DOMAIN>` to `<DOMAIN>`. This entry effectively resolves all of the `<DOMAIN>` sub-domains to the VM/VPS public IP address. You simply need to set your `CNAME` DNS entry as follows:

| Entry Field | Value |
|---|---|
| Type | `CNAME` |
| Name / Host | `*.<DOMAIN>` |
| Value | `<DOMAIN>` |
- Setting the Cluster domain TLS certificate, in order for the Cluster, its API Server, as well as its public Services to be able to communicate over HTTPS. For example, you can use Let's Encrypt via Certbot to issue a certificate for your Cluster domain (you can read more here) and then provide the issued certificate to the Cluster. Here is an example of Certbot issuing a certificate via the DNS-01 challenge (read more in the Let's Encrypt docs); the newly issued certificate is then fed to the Cluster via either `kubectl create secret` or `octops cert`:
```bash
# Run as root from within your Cluster VM/VPS
apt-get update
apt install certbot
# Replace <DOMAIN> with your own domain
certbot certonly --email <YOUR_EMAIL> --agree-tos --cert-name <DOMAIN> -d "<DOMAIN>,*.<DOMAIN>,*.local.<DOMAIN>" --manual --preferred-challenges dns
```
The newly issued certificate is now stored in the `/etc/letsencrypt/live/<DOMAIN>` directory. Now, from your VPS/VM, you can provide the certificate to the Cluster via the `octops cert` command (available starting from version `v0.13.0`; you can check your current version via the `octops version` command) as follows:
```bash
# Replace <DOMAIN> with your own domain
octops cert <DOMAIN> --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem --kubeconfig /etc/rancher/k3s/k3s.yaml
```
You can alternatively use `kubectl create secret tls` to provide the issued certificate to the Cluster as follows:
```bash
export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# Replace <DOMAIN> with your own domain
kubectl create secret tls cert-cluster -n octelium --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem
```
- Now we use the `octelium login` command that we copied earlier to log in to the Cluster from our local machine (e.g. your own laptop) as follows:
```bash
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
```
If you try to interact with the Cluster via `octelium` or `octeliumctl` commands before setting your actual domain's certificate, you will be met with an `authentication handshake failed` error due to the initial self-signed certificate created by the Cluster during installation. You can skip that error by setting the `OCTELIUM_INSECURE_TLS` or `OCTELIUM_DEV` environment variable to `true`. Here is an example:
```bash
export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
```
Or you can use the `OCTELIUM_DEV` environment variable, which is generally used for development and debugging and also prints the debug logs of the Octelium clients, as follows:
```bash
export OCTELIUM_DEV=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
```
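As referenced in the DNS step above, you can quickly verify both DNS entries from your local machine before logging in. Here is a minimal sketch using `dig` (an assumption; any DNS lookup tool such as `nslookup` works just as well):

```bash
# The A entry: this should print your VM/VPS public IP address
dig +short <DOMAIN>

# The wildcard CNAME entry: any sub-domain, such as octelium-api.<DOMAIN>, should resolve to the same IP
dig +short octelium-api.<DOMAIN>
```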
Initial Configuration
This step is not required; however, it might be useful if this is your first Octelium Cluster. In Octelium, Cluster resources are mainly managed via the `octeliumctl apply` command, which is very similar to how `kubectl apply` works (read more here). For now, we are going to use an initial configuration for your Cluster that includes only two resources:
- A User for yourself that includes your primary email. We use the name `alice` and the email `[email protected]` in the example configuration below.
- An IdentityProvider in order for you to log in to the Cluster using your User's email via the web Portal. Octelium supports three types of such IdentityProviders.
In this example, we're going to use a Gitlab OpenID Connect IdentityProvider. Once you create the Gitlab OIDC application and obtain your Gitlab client ID and client secret, you can store the client secret as an Octelium Secret as follows:
```bash
# Set the "OCTELIUM_DOMAIN" environment variable to your Cluster domain in order to skip using the "--domain" flag for every command
export OCTELIUM_DOMAIN=<DOMAIN>
octeliumctl create secret idp-client-secret
```
Now, you create a YAML file on the machine you just logged in from via `octelium login` and add the initial configuration as follows:
```yaml
kind: User
metadata:
  name: alice
spec:
  type: HUMAN

  authorization:
    policies: ["allow-all"]
---
kind: IdentityProvider
metadata:
  name: gitlab
spec:
  displayName: Login with Gitlab
  oidc:
    issuerURL: https://gitlab.com
    clientID: abcd...
    clientSecret:
      fromSecret: idp-client-secret
```
And now, we apply these two resources to the Cluster via `octeliumctl apply` as follows:
```bash
octeliumctl apply /path/to/config_file.yaml
```
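To confirm that both resources now exist, you can list them. The `octeliumctl get user` form appears in the Tips section below; `octeliumctl get identityprovider` is an assumption that follows the same kubectl-like pattern:

```bash
# Both commands should list the newly created resources
octeliumctl get user
# NOTE: this resource name is an assumption following the CLI's kubectl-like pattern
octeliumctl get identityprovider
```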
Troubleshooting
If you experience trouble while trying to `octelium login` for the first time from your own local machine to the Cluster, you might resolve it by doing the following:
- If your `octelium login` command produces the following error:
```
Error: rpc error: code = Unavailable desc = name resolver error: produced zero addresses
```
That means you did not set your `CNAME` DNS entry as shown above. You can verify that by using `ping` as follows:
```bash
ping octelium-api.<DOMAIN>
```
If it does not resolve, it means that your `CNAME` entry is simply not set.
Also make sure that you set your `A` DNS entry. You can verify by using `ping` as follows:
```bash
ping <DOMAIN>
```
And if this ping does not resolve either, it means that your `A` entry is not set.
You might also need to flush your local machine's DNS cache to make sure that it is not resolving to any old, now invalid, DNS entries:
On Linux:

```bash
sudo resolvectl flush-caches
```

On Windows, as an Admin:

```
ipconfig /flushdns
```

On macOS:

```bash
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
```
- If your `octelium login` command produces the following error:
```
rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by unknown authority"
```
Then you still have not set your TLS certificate via `octops cert` or `kubectl create secret tls` as shown above. You can proceed with the command just by setting the `OCTELIUM_INSECURE_TLS` or `OCTELIUM_DEV` environment variable to `true` as follows:
```bash
export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
```
However, it's recommended to set your own TLS certificate that's signed by a real CA such as Let's Encrypt as soon as possible.
- If your `octelium login` shows something like the following:
```
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp ... connect: connection refused"
```
From your Cluster VM itself, verify that the Octelium ingress is actually running and bound to an external IP address as follows:
```bash
export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
kubectl get svc -n octelium octelium-ingress-dataplane
```
The `EXTERNAL-IP` field of that command's output should be set to your actual main IP address, not a `<pending>` or empty value. If your Cluster VM is behind NAT, you might need to uninstall the Cluster via `./install-demo-cluster.sh --uninstall` and re-install by adding the `--nat` flag as shown above.
You can also try `curl -k https://<DOMAIN>` to verify that the Octelium ingress is running. The `-k` flag is used here to ignore TLS certificate errors, since the initial TLS certificate is a self-signed certificate that should be replaced via `octops cert` or `kubectl create secret tls` as shown above. That `curl` command should output the HTML of the login web page.
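If none of the above applies, a general health check is to make sure that all Cluster components are actually running. Here is a minimal sketch from within your Cluster VM, using the same `octelium` namespace as the commands above:

```bash
export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# All pods in the octelium namespace should be in the Running or Completed state
kubectl get pods -n octelium
```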
Tips
If this is your first Octelium Cluster, the following tips might be useful for you:
- Once you login for the first time via the `octelium login` command as shown above, you should create at least one User with enough access permissions for yourself (see the initial configuration example here). The authentication token Credential (read more about Credentials here) used in this initial `octelium login` command is issued for the `root` User, which is installed automatically by the Cluster during the installation process. It should be noted that there is nothing special about the `root` User in Octelium, and you can actually delete it once you add your own Users and other resources. Once you add your own Users with enough permissions to act as Cluster administrators, as well as IdentityProviders and other resources, you can safely remove the initial authentication token Credential as follows:
```bash
octeliumctl delete cred root-init
```
- Octelium CLIs, namely `octelium` and `octeliumctl`, are designed to work with multiple Octelium Clusters simultaneously. That's why you need to add the `--domain <DOMAIN>` flag to each command. Obviously, this becomes tiresome if you're running many commands in the same shell against a single Cluster. That's why it's recommended to set the `OCTELIUM_DOMAIN` environment variable to your domain so that you can run all your commands without having to use the `--domain` flag each time. Here is an example:
```bash
export OCTELIUM_DOMAIN=<DOMAIN>

# List the Cluster Services
octeliumctl get service
# OR
octeliumctl get svc

# List the Cluster Users
octeliumctl get user

# List the Cluster Sessions
octeliumctl get session
# OR
octeliumctl get sess

# Create an authentication token Credential for "root" User
octeliumctl create credential --user root --policy allow-all first-cred
```
- After you login for the first time via the `octelium login` command, you might want to connect to the Cluster and access its available Services (read more about connecting via the `octelium` CLI here). For example, the Cluster creates a `demo-nginx` Service during the installation. You can access it as follows:
```bash
export OCTELIUM_DOMAIN=<DOMAIN>

# Connect via the detached mode
octelium connect -d
# OR in foreground
sudo -E octelium connect

# Then access the "demo-nginx" Service
# This Service is installed during the Cluster installation
curl demo-nginx

# OR via the rootless mode and mapping the Service to a localhost port
octelium connect -p demo-nginx:9000
# And then access the Service
curl localhost:9000
```
- TLS certificates usually expire within a few months. For example, certificates issued by Let's Encrypt expire after 90 days by default. It is better to automate the process of rotating TLS certificates and providing them to your Octelium Cluster, either via `octops cert` or by re-creating the `cert-cluster` Secret via `kubectl` using some simple bash scripts (a minimal sketch follows this list), or, even better, by using a FOSS solution like cert-manager (read more here).
- Since Octelium is a zero trust architecture, all access to protected resources, represented by Services, must be explicitly allowed by Policies (read more about Policies and access control here). As shown in the initial configuration example here, the User `alice` has the `allow-all` Policy attached, which grants her access to all Services unless further Policies override it by explicitly denying her access. Octelium installs the `allow-all` and `deny-all` Policies during the installation process to make it easier for you to use them directly and attach them to your different resources. Attaching Policies is not restricted to just Users as in the example above; you can actually attach Policies to Users (read more here), Services (read more here), Groups (read more here), and Namespaces (read more here). You can also attach Policies to your issued Credentials (read more here).
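For the certificate-rotation tip above, here is a minimal sketch of such a bash script, run periodically (e.g. via cron) as root on the Cluster VM. It assumes the Certbot setup from the Post-Installation section; note that the manual DNS-01 flow shown there cannot renew unattended, so a DNS plugin or `--manual-auth-hook` (an assumption, not shown here) would be needed for full automation:

```bash
#!/bin/sh
# renew-octelium-cert.sh -- a minimal sketch; assumes the Certbot setup shown above
set -e

# Replace <DOMAIN> with your Cluster domain
DOMAIN=<DOMAIN>

# Renew any certificate that is close to expiry
certbot renew --quiet

# Feed the (possibly renewed) certificate to the Cluster
octops cert "$DOMAIN" \
  --key "/etc/letsencrypt/live/$DOMAIN/privkey.pem" \
  --cert "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
```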
Upgrade the Cluster
You can later upgrade your Octelium Cluster via the `octops upgrade` command (read more here) from within your VPS/VM as follows:
```bash
export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"

# First you might want to check for available upgrades via the --check flag
octops upgrade <DOMAIN> --check

# Now you can actually upgrade the Cluster
octops upgrade <DOMAIN>
```
Uninstall the Cluster
If you ever want to uninstall the Cluster later for whatever reason, including to re-install it, you can simply do that using the same installation script via the `--uninstall` flag as follows:
```bash
./install-demo-cluster.sh --uninstall
```
This command removes the Octelium Cluster and its underlying k3s Kubernetes cluster.
What Now?
Your Cluster has now been successfully installed and is running. You can learn more about how to manage and use the Cluster in the following guides: