Quick Installation Guide

This is a quick guide for you to install a full-fledged single-node Octelium Cluster which is good enough for development, personal, or undemanding production use cases.

NOTE

To install a production, scalable multi-node Cluster over a typical cloud-based or on-premise Kubernetes installation, we recommend referring to this guide here.

Requirements

This guide only needs 2 requirements:

  • Having a cheap cloud server/VM instance (e.g. a DigitalOcean Droplet, Hetzner server, AWS EC2 instance, etc...) that's running a recent Linux distribution (e.g. Ubuntu 24.04 LTS or later, Debian 12+, etc...), preferably freshly installed, with at least 2GB of RAM, 2 vCPUs, and 20GB of disk storage as a sensible minimum.

  • Having a domain or a subdomain of a domain name that you actually own (e.g. example.com, octelium.example.com, sub.sub.example.com, etc...). This domain is the Cluster's domain since an Octelium Cluster is defined and addressed by its domain once installed (e.g. through the octelium and octeliumctl commands).

NOTE

For testing and playground purposes, you can also install the Cluster locally on any Linux machine or a Linux VM inside a macOS or Windows machine (e.g. Podman machines, VirtualBox, etc...) and use localhost as the Cluster domain. In fact, there is already a playground GitHub repository that installs an Octelium Cluster inside of a running GitHub Codespace. You can visit the playground repository here.
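For example, assuming you have downloaded the installation script as shown in the Installation section below, a local playground installation could look like the following sketch:

# A sketch for a local/testing installation on a Linux machine (run as root):
# download the script as shown in the Installation section below, then run it
# with localhost as the Cluster domain
./install-demo-cluster.sh --domain localhost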

Installation

Once you SSH into your VPS/VM as the Linux root user, you install the Cluster by running the following commands:

curl -o install-demo-cluster.sh https://octelium.com/install-demo-cluster.sh
chmod +x install-demo-cluster.sh
# IMPORTANT: Replace <DOMAIN> with your actual domain/subdomain to be used as the Cluster domain
./install-demo-cluster.sh --domain <DOMAIN>

The script should take a few minutes to finish, depending on your VM's capabilities. Upon completion of the Cluster installation, you will see the following message, which includes an octelium login command at the end:

The Cluster installation is now complete!
You can start interacting with the Cluster once you set the Cluster TLS certificate and the public DNS. For more information, you might want to visit the docs at https://octelium.com/docs
# ...
Once you set up your public DNS and Cluster TLS certificate,
use the following command to login and start interacting with the Cluster.
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

You can now copy that octelium login command in order to use it later from your own machine to log in to the Cluster and begin using it.

NOTE

If your VM/VPS is behind NAT, that is, the machine's default/main network device has a private/internal IP address instead of its public IP address (as is the case with EC2 and GCP instances under their default configurations), you will need to use the --nat flag as follows:

./install-demo-cluster.sh --domain <DOMAIN> --nat
NOTE

If the VM/VPS is behind a firewall (e.g. EC2, GCP VMs, etc...), you will need to open TCP port 443 for the Octelium ingress and UDP port 53820 for WireGuard.
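For example, if you manage the firewall with ufw on the VM itself (an assumption; on EC2 or GCP you would typically adjust the security group or VPC firewall rules in the provider's console instead), the rules could look like the following sketch:

# A sketch assuming ufw is used on the VM itself; run as root from within your Cluster VM/VPS
ufw allow 22/tcp      # keep SSH access
ufw allow 443/tcp     # Octelium ingress
ufw allow 53820/udp   # WireGuard
ufw enable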

NOTE

By default the latest Cluster version is installed. You can install a specific Cluster version via the --version flag as follows:

./install-demo-cluster.sh --domain <DOMAIN> --version 0.1.2
NOTE

By default the Octelium Cluster uses only WireGuard for tunneling. You can additionally enable the experimental QUIC-based tunneling mode (read more here) via the --quicv0 flag as follows:

./install-demo-cluster.sh --domain <DOMAIN> --quicv0

If your VPS is behind a firewall, you will also have to open UDP port 8443 for the QUIC listener. You will additionally need to create a public DNS entry as shown in detail here.
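For example, with ufw on the VM itself (the same assumption as in the firewall sketch above), that could be:

# A sketch assuming ufw is used on the VM itself; run as root
ufw allow 8443/udp    # experimental QUIC listener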

Post-Installation

To complete the installation and start interacting with the Cluster, the following final steps are required:

  1. Setting the public DNS for the Cluster domain. You can do that by simply getting the newly installed cloud instance's public IP address from your cloud provider dashboard and then using that IP value in your DNS provider (e.g. Cloudflare, Namecheap, GoDaddy, etc...) to set DNS entries that point to your Cluster domain. You can read more here. For now, you need to add 2 DNS entries:
  • An A entry to resolve <DOMAIN> to the VM/VPS's public IP address as follows:
    Entry Field    Value
    Type           A
    Name / Host    <DOMAIN>
    Value          <PUBLIC_IP_ADDRESS>
  • A CNAME entry resolving the wildcard domain *.<DOMAIN> to <DOMAIN>. This entry effectively resolves all of the <DOMAIN> sub-domains to the VM/VPS public IP address. You simply need to set your CNAME DNS entry as follows:
    Entry Field    Value
    Type           CNAME
    Name / Host    *.<DOMAIN>
    Value          <DOMAIN>
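Once both entries are set and have propagated, you can optionally verify them from your local machine, e.g. with dig (assuming dig is installed):

# Both queries should resolve to your VM/VPS public IP address (the second via the CNAME)
dig +short <DOMAIN>
dig +short octelium-api.<DOMAIN>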
  2. Setting the Cluster domain TLS certificate so that the Cluster, its API Server, as well as its public Services can communicate over HTTPS. For example, you can use Let's Encrypt via Certbot to issue a certificate for your Cluster domain (you can read more here) and then provide the issued certificate to the Cluster. Here is an example where certbot issues a certificate via the DNS-01 challenge (read more in the Let's Encrypt docs) and the newly issued certificate is then fed to the Cluster via either kubectl create secret or octops cert:
# Run as root from within your Cluster VM/VPS
apt-get update
apt-get install -y certbot
# Replace <DOMAIN> with your own domain
certbot certonly --email <YOUR_EMAIL> --agree-tos --cert-name <DOMAIN> -d "<DOMAIN>,*.<DOMAIN>,*.local.<DOMAIN>" --manual --preferred-challenges dns

The newly issued certificate is now stored in the /etc/letsencrypt/live/<DOMAIN> directory. Now, from your VPS/VM, you can provide the certificate to the Cluster via the octops cert command, available starting from version v0.13.0 (you can check your current version via the octops version command), as follows:

# Replace <DOMAIN> with your own domain
octops cert <DOMAIN> --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem --kubeconfig /etc/rancher/k3s/k3s.yaml

You can alternatively use kubectl create secret tls to provide the issued certificate to the Cluster as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# Replace <DOMAIN> with your own domain
kubectl create secret tls cert-cluster -n octelium --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem
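If you used the kubectl approach above, you can verify that the certificate Secret now exists in the octelium namespace as follows:

kubectl get secret cert-cluster -n octelium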
  3. Now use the octelium login command that you copied earlier to log in to the Cluster from your local machine (e.g. your own laptop) as follows:
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
NOTE

If you try to interact with the Cluster via octelium or octeliumctl commands before setting your actual domain's certificate, you will be met with an authentication handshake failed error due to the initial self-signed certificate created by the Cluster during installation. You can bypass that error by setting the OCTELIUM_INSECURE_TLS or OCTELIUM_DEV environment variable to true. Here is an example:

export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

Alternatively, you can use the OCTELIUM_DEV environment variable, which is generally used for development and debugging and also prints the debug logs of the Octelium clients, as follows:

export OCTELIUM_DEV=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

Initial Configuration

This step is not required; however, it might be useful if this is your first Octelium Cluster. In Octelium, Cluster resources are mainly managed via the octelium apply command, which is very similar to how kubectl apply works (read more here). For now, we are going to use an initial configuration for your Cluster that includes only two resources:

  • A User for yourself that includes your primary email. We use the name alice and the email alice@example.com in the example configuration below.
  • An IdentityProvider so that you can log in to the Cluster using your User's email via the web Portal. Octelium supports three types of such IdentityProviders:
    • GitHub OAuth2. You can see a detailed example here.
    • OpenID Connect IdentityProviders (read more here). You can see a detailed example if you have a Gitlab cloud account here.
    • SAML 2.0 IdentityProviders. You can read more here.

In this example, we're going to use a Gitlab OpenID Connect IdentityProvider. Once you create the Gitlab OIDC application and obtain your Gitlab client ID and client secret, you now store the client secret as an Octelium Secret as follows:

# Set the "OCTELIUM_DOMAIN" environment variable to your Cluster domain in order to skip using "--domain" flag for every command
export OCTELIUM_DOMAIN=<DOMAIN>
octelium create secret idp-client-secret

Now, you create a YAML file on the machine from which you just logged in via octelium login and add the initial configuration as follows:

kind: User
metadata:
  name: alice
spec:
  type: HUMAN
  email: alice@example.com
  authorization:
    policies: ["allow-all"]
---
kind: IdentityProvider
metadata:
  name: gitlab
spec:
  displayName: Login with Gitlab
  oidc:
    issuerURL: https://gitlab.com
    clientID: abcd...
    clientSecret:
      fromSecret: idp-client-secret

And now, we create these two resources on the Cluster via the octeliumctl apply command as follows:

octeliumctl apply /path/to/config_file.yaml
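You can then verify that the resources were created, for example by listing the Cluster Users:

# List the Cluster Users; "alice" should now appear alongside the initial "root" User
octeliumctl get user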

Troubleshooting

If you experience trouble when trying to octelium login for the first time from your own local machine to the Cluster, the following might help you resolve it:

  • If your octelium login command produces the following error:
Error: rpc error: code = Unavailable desc = name resolver error: produced zero addresses

That means you did not set your CNAME DNS entry as shown above. You can verify that by using ping as follows:

ping octelium-api.<DOMAIN>

If it does not resolve, it means that your CNAME entry is simply not set.

Also make sure that you did set your A DNS entry. You can verify by using ping as follows:

ping <DOMAIN>

And if this ping also does not resolve, it means that your A entry is not set either.

You might also need to flush your local machine DNS cache to make sure that your local machine is not resolving to any old, now invalid, DNS entries as follows:

On Linux:

sudo resolvectl flush-caches

On Windows (as Administrator):

ipconfig /flushdns

On macOS:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
  • If your octelium login command produces the following error:
rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by unknown authority"

Then you still have not set your TLS certificate via octops cert or kubectl create secret tls as shown above. You can proceed with the command just by setting the OCTELIUM_INSECURE_TLS or OCTELIUM_DEV environment variables to true as follows:

export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

However, it's recommended to set your own TLS certificate that's signed by a real CA such as Let's Encrypt as soon as possible.

  • If your octelium login command shows an error like the following:
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp ... connect: connection refused"

From your Cluster VM itself, verify that the Octelium ingress is actually running and bound to an external IP address as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
kubectl get svc -n octelium octelium-ingress-dataplane

The EXTERNAL-IP field in that command's output should be set to your machine's actual main IP address and should not be <pending> or empty. If your Cluster VM is behind NAT, you might need to uninstall the Cluster via ./install-demo-cluster.sh --uninstall and re-install it with the --nat flag as shown above.
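In that case, the re-installation would look like this (using the same script as before):

# Uninstall the Cluster, then re-install with the --nat flag
./install-demo-cluster.sh --uninstall
./install-demo-cluster.sh --domain <DOMAIN> --nat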

You can also run curl -k https://<DOMAIN> to verify that the Octelium ingress is running. The -k flag is used here to ignore TLS certificate errors, since the initial TLS certificate is a self-signed certificate that should be replaced via octops cert or kubectl create secret tls as shown above. That curl command should output the HTML of the login web page.

NOTE

Still having problems trying to install the Cluster? Join our Discord, Slack or Reddit channels for support.

Tips

If this is your first Octelium Cluster, the following tips might be useful for you:

  • Once you log in for the first time via the octelium login command as shown above, you should create at least one User with enough access permissions for yourself (see the initial configuration example here). The authentication token Credential (read more about Credentials here) used in this initial octelium login command is issued for the root User, which is created automatically by the Cluster during the installation process. It should be noted that there is nothing special about the root User in Octelium, and you can actually delete it once you add your own Users and other resources. Once you add your own Users with enough permissions to act as Cluster administrators, IdentityProviders, and other resources, you can safely remove the initial authentication token Credential as follows:
octeliumctl delete cred root-init
  • Octelium CLIs, namely octelium and octeliumctl, are designed to work with multiple Octelium Clusters simultaneously. That's why you need to add the --domain <DOMAIN> flag to each command. Obviously, this becomes tiresome if you're running many commands against a single Cluster in the same shell. That's why it's recommended to set the OCTELIUM_DOMAIN environment variable to your domain so that you can run your commands without having to use the --domain flag each time. Here is an example:
export OCTELIUM_DOMAIN=<DOMAIN>
# List the Cluster Services
octeliumctl get service
# OR
octeliumctl get svc
# List the Cluster Users
octeliumctl get user
# List the Cluster Sessions
octeliumctl get session
# OR
octeliumctl get sess
# Create an authentication token Credential for "root" User
octeliumctl create credential --user root --policy allow-all first-cred
  • After you login for the first time via the octelium login command, you might want to connect to the Cluster and access its available Services (read more about connecting via the octelium CLI here). For example, the Cluster creates a demo-nginx Service during the installation. You can access it as follows:
export OCTELIUM_DOMAIN=<DOMAIN>
# Connect via the detached mode
octelium connect -d
# OR in foreground
sudo -E octelium connect
# Then access the "demo-nginx" Service
# This Service is installed during the Cluster installation
curl demo-nginx
# OR via the rootless mode and mapping the Service to a localhost port
octelium connect -p demo-nginx:9000
# And then access the Service
curl localhost:9000
  • TLS certificates usually expire within a few months. For example, certificates issued by Let's Encrypt expire after 90 days by default. It would be better for you to automate the process of rotating TLS certificates and providing them to your Octelium Cluster, either via octops cert or by re-creating the Kubernetes TLS secret with kubectl, using some simple bash scripts or, even better, a FOSS solution like cert-manager (read more here). A minimal sketch of such a renewal hook is shown after this list.

  • Since Octelium is a zero trust architecture, all access to protected resources, represented by Services, must be explicitly allowed by Policies (read more about Policies and access control here). As shown in the initial configuration example here, the User alice has the allow-all Policy attached, which grants her access to all Services unless there are further Policies that override it by explicitly denying her access. Octelium installs the allow-all and deny-all Policies during the installation process to make it easier for you to directly use them and attach them to your different resources. Attaching Policies is not restricted to Users as in the example above; you can actually attach Policies to Users (read more here), Services (read more here), Groups (read more here), and Namespaces (read more here). You can also attach Policies to your issued Credentials (read more here).
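As mentioned in the certificate rotation tip above, here is a minimal sketch of a certbot deploy hook that re-feeds a renewed Let's Encrypt certificate to the Cluster via octops cert. It assumes your renewals are automated (e.g. via a certbot DNS plugin rather than the manual DNS-01 flow shown earlier), and the hook path is illustrative:

#!/bin/sh
# A sketch of a certbot deploy hook that re-feeds the renewed certificate to the Cluster.
# Save it, for example, as /etc/letsencrypt/renewal-hooks/deploy/octelium-cert.sh
# (illustrative path) and make it executable; certbot runs deploy hooks after each
# successful renewal. Replace <DOMAIN> with your Cluster domain.
octops cert <DOMAIN> --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem --kubeconfig /etc/rancher/k3s/k3s.yaml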

Upgrade the Cluster

You can later upgrade your Octelium Cluster via the octops upgrade command (read more here) from within your VPS/VM as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# First you might want to check for available upgrades via --check flag
octops upgrade <DOMAIN> --check
# Now you can actually upgrade the Cluster
octops upgrade <DOMAIN>

Uninstall the Cluster

If you ever want to uninstall the Cluster later for whatever reason, including to re-install the Cluster, you can simply do that using the same installation script via the --uninstall flag as follows:

./install-demo-cluster.sh --uninstall

This command removes the Octelium Cluster and its underlying k3s Kubernetes cluster.

What Now?

Your Cluster has now been successfully installed and is running. You can learn more about how to manage and use the Cluster in the following guides:

  • First Steps to Managing the Cluster here.
  • Connecting to the Cluster here.
  • Managing Services here.
  • Access control and Policies here.
  • Adding IdentityProviders here.
  • Managing Users here.
  • Issuing Credentials (e.g. authentication tokens) for your Users here.