Quick Installation Guide

This is a quick guide for you to install a full-fledged single-node Octelium Cluster on any cheap Linux machine, VM or cloud VPS (e.g. DigitalOcean droplet, Hetzner, AWS EC2, etc...). This single-node Cluster is good enough for development, personal, or undemanding production use cases.

NOTE

To install a production, scalable multi-node Cluster over a typical cloud-based or on-premise Kubernetes installation, we recommend referring to this guide here.

Requirements

This guide only needs 2 requirements:

  • Having a cheap cloud server/VM instance (e.g. DigitalOcean Droplet, Hetzner server, AWS EC2, etc...) that's running a recent Linux distribution (e.g. Ubuntu 24.04 LTS or later, Debian 12+, etc...), preferably freshly installed, with at least 2 GB of RAM, 2 vCPUs, and 20 GB of disk storage as a sensible minimum.

  • Having a domain or a subdomain of a domain name that you actually own (e.g. example.com, octelium.example.com, sub.sub.example.com, etc...). This domain is the Cluster's domain since an Octelium Cluster is defined and addressed by its domain once installed (e.g. through the octelium and octeliumctl commands).

NOTE

The installer script in this guide takes care of automatically installing the Octelium Cluster and all of its dependencies. In other words, you do not need to have your own Kubernetes cluster or PostgreSQL and Redis databases installed and running before installing the Cluster in this guide, as they are installed automatically by the script.

NOTE

For testing and playground purposes, you can also install the Cluster locally on any Linux machine or a Linux VM inside a macOS or Windows machine (e.g. Podman machines, VirtualBox, etc...) and use localhost as a Cluster domain. In fact, there is already a playground GitHub repository that installs an Octelium Cluster inside a running GitHub Codespace. You can visit the playground repository here.

Installation

Once you SSH into your VPS/VM as the Linux root user, you install the Cluster by running the following commands:

curl -o install-cluster.sh https://octelium.com/install-cluster.sh
chmod +x install-cluster.sh
# IMPORTANT: Replace <DOMAIN> with your actual domain/subdomain to be used as the Cluster domain
./install-cluster.sh --domain <DOMAIN>
NOTE

The above installation command should work out of the box with any VM with a public IP address (e.g. DigitalOcean droplets, Hetzner VMs, etc...). For more advanced installation options (e.g. the Cluster's VM is behind NAT/firewall, such as EC2 machines and Google Cloud Compute instances), please refer to this section below.

The script should take a few minutes to finish, depending on your VM's capabilities. Here is a demo video to show what the installation process looks like:

Upon completion of the Cluster installation, you will see the following message, which includes an octelium login command at the end of it:

The Cluster installation is now complete!
You can start interacting with the Cluster once you set the Cluster TLS certificate and the public DNS. For more information, you might want to visit the docs at https://octelium.com/docs
# ...
Once you set up your public DNS and Cluster TLS certificate,
use the following command to login and start interacting with the Cluster.
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

You can now copy that octelium login command in order to use it later from your own machine to log in to the Cluster and begin using it.

Post-Installation

To complete the installation and start interacting with the Cluster, two final steps are required: setting the public DNS for the Cluster domain and setting the Cluster domain's TLS certificate.

Public DNS

You need to set two DNS entries in your DNS provider (e.g. Cloudflare, Namecheap, GoDaddy, etc...) in order for the Cluster to be publicly addressable via its domain name:

  1. An A entry to resolve <DOMAIN> to the VM/VPS's public IP address as follows:

     Type: A
     Name / Host: <DOMAIN>
     Value: <PUBLIC_IP_ADDRESS>

  2. A CNAME entry resolving the wildcard domain *.<DOMAIN> to <DOMAIN>. This entry effectively resolves all of the <DOMAIN> sub-domains to the VM/VPS's public IP address. You simply need to set your CNAME DNS entry as follows:

     Type: CNAME
     Name / Host: *.<DOMAIN>
     Value: <DOMAIN>
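Once both entries are set, you can verify them from any machine before proceeding. Here is a minimal sketch, assuming `dig` is installed and using an arbitrary subdomain to exercise the wildcard entry:

```shell
# Sketch: verify the A and wildcard CNAME entries for the Cluster domain.
# check_cluster_dns prints the resolved address for the domain itself (the
# A entry) and for an arbitrary subdomain (covered by the wildcard CNAME).
check_cluster_dns() {
  domain="$1"
  # A entry: <DOMAIN> must resolve to the VM/VPS's public IP address
  a_ip=$(dig +short "$domain" A | tail -n1)
  # Wildcard CNAME: any subdomain should resolve to the same address
  sub_ip=$(dig +short "octelium-api.$domain" A | tail -n1)
  echo "A entry: ${a_ip:-MISSING}"
  echo "wildcard CNAME: ${sub_ip:-MISSING}"
}
# Usage: check_cluster_dns <DOMAIN>
```

Both lines should print the same public IP address; a MISSING value points at the corresponding DNS entry not being set yet.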

TLS Certificate

You need to set the Cluster domain TLS certificate in order for the Cluster, its API Server, and its public Services to communicate over HTTPS. For example, you can use Let's Encrypt via Certbot to issue a certificate for your Cluster domain (you can read more here) and then provide the issued certificate to the Cluster. This x509 certificate needs to be issued for the following domains (i.e. the domains that need to be included in the certificate's SAN list):

  • <DOMAIN>.
  • *.<DOMAIN> wildcard.
  • *.local.<DOMAIN> wildcard. This is not required but recommended if you want to have TLS-based Services (read more here).

Here is an example of certbot issuing a certificate via the DNS-01 challenge (read more in the Let's Encrypt docs):

# Run as root from within your Cluster VM/VPS
apt-get update
apt install certbot
# Replace <DOMAIN> with your own domain
certbot certonly --email <YOUR_EMAIL> --agree-tos --cert-name <DOMAIN> -d "<DOMAIN>,*.<DOMAIN>,*.local.<DOMAIN>" --manual --preferred-challenges dns
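To confirm that the issued certificate actually covers all three names, you can inspect its SAN list. A quick sketch, assuming `openssl` 1.1.1 or later is installed:

```shell
# Sketch: print the SAN list of an issued certificate to confirm that it
# covers <DOMAIN>, *.<DOMAIN> and (optionally) *.local.<DOMAIN>.
print_san() {
  openssl x509 -in "$1" -noout -ext subjectAltName
}
# Usage: print_san /etc/letsencrypt/live/<DOMAIN>/fullchain.pem
```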

The newly issued certificate is now stored in the /etc/letsencrypt/live/<DOMAIN> directory. Now, from your VPS/VM, you can provide the certificate to the Cluster simply via the octops cert command as follows:

# Replace <DOMAIN> with your own domain
octops cert <DOMAIN> --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem --kubeconfig /etc/rancher/k3s/k3s.yaml
NOTE

Alternatively to using the octops cert command, you can also use kubectl create secret tls to provide the issued certificate to the Cluster as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# Replace <DOMAIN> with your own domain
kubectl create secret tls cert-cluster -n octelium --key /etc/letsencrypt/live/<DOMAIN>/privkey.pem --cert /etc/letsencrypt/live/<DOMAIN>/fullchain.pem

Login to the Cluster

Now that we have set the Cluster's public DNS and TLS certificate, we can use the octelium login command that we copied earlier to log in to the Cluster from our local machine (e.g. your own laptop) as follows:

octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

If you try to invoke the command above before setting your actual domain's certificate, you will be met with an authentication handshake failed error, since the current Cluster TLS certificate is an initial self-signed certificate created by the Cluster during installation. You can bypass that error for now, until you set your own real TLS certificate, by setting the OCTELIUM_INSECURE_TLS environment variable to true. Here is an example:

export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>
NOTE

In addition to using the OCTELIUM_INSECURE_TLS environment variable to skip verifying the server TLS certificate, you can also use the OCTELIUM_DEV environment variable, which is generally used for development and debugging and also prints the debug logs of the Octelium clients, as follows:

export OCTELIUM_DEV=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

Initial Configuration

This step is not required; however, it might be useful if this is your first Octelium Cluster. In Octelium, Cluster resources are mainly managed via the octeliumctl apply command, which is very similar to how kubectl apply works (read more here). For now, we are going to use an initial configuration for your Cluster that includes only two resources:

  • A User for yourself that includes your primary email. We use the name alice and the email alice@example.com in the example configuration below.
  • An IdentityProvider in order for you to log in to the Cluster using your User's email via the web Portal. Octelium supports three types of such IdentityProviders:
    • GitHub OAuth2. You can see a detailed example here.
    • OpenID Connect IdentityProviders (read more here). You can see a detailed example if you have a Gitlab cloud account here.
    • SAML 2.0 IdentityProviders. You can read more here.

In this example, we're going to use a Gitlab OpenID Connect IdentityProvider. Once you create the Gitlab OIDC application and obtain your Gitlab client ID and client secret, store the client secret as an Octelium Secret as follows:

# Set the "OCTELIUM_DOMAIN" environment variable to your Cluster domain in order to skip using "--domain" flag for every command
export OCTELIUM_DOMAIN=<DOMAIN>
octelium create secret idp-client-secret

Now, you create a YAML file on the machine from which you just logged in via octelium login and add the initial configuration as follows:

kind: User
metadata:
  name: alice
spec:
  type: HUMAN
  email: alice@example.com
  authorization:
    policies: ["allow-all"]
---
kind: IdentityProvider
metadata:
  name: gitlab
spec:
  displayName: Login with Gitlab
  oidc:
    issuerURL: https://gitlab.com
    clientID: abcd...
    clientSecret:
      fromSecret: idp-client-secret

And now, we apply these two resources to the Cluster via octeliumctl apply as follows:

octeliumctl apply /path/to/config_file.yaml

Troubleshooting

Here are some of the common problems you might experience when trying to access the Cluster for the first time, typically via the initial octelium login command, from your own local machine:

Produced Zero Addresses

If your octelium login command produces the following error:

Error: rpc error: code = Unavailable desc = name resolver error: produced zero addresses

This means that you did not set your CNAME DNS entry as shown above. You can verify that by using ping as follows:

ping octelium-api.<DOMAIN>

If it does not resolve, it means that your CNAME entry is simply not set.

Also make sure that you did set your A DNS entry. You can verify by using ping as follows:

ping <DOMAIN>

And if this ping also does not resolve, it means that your A entry is simply not set either.

You might also need to flush your local machine DNS cache to make sure that your local machine is not resolving to any old, now invalid, DNS entries as follows:

On Linux:

sudo resolvectl flush-caches

On Windows, as an Administrator:

ipconfig /flushdns

On macOS:

sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Certificate Signed by Unknown Authority

If your octelium login command produces the following error:

gRPC error Unavailable: connection error: desc = "transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by unknown authority"

Then you still have not set your TLS certificate via octops cert or kubectl create secret tls as shown above. You can proceed with the command just by setting the OCTELIUM_INSECURE_TLS or OCTELIUM_DEV environment variables to true as follows:

export OCTELIUM_INSECURE_TLS=true
octelium login --domain <DOMAIN> --auth-token <AUTHENTICATION_TOKEN>

However, it's recommended to set your own TLS certificate that's signed by a real CA such as Let's Encrypt as soon as possible.

Connection Refused

If your octelium login command shows an error like the following:

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp ... connect: connection refused"

From your Cluster VM itself, verify that the Octelium ingress is actually running and bound to an external IP address as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
kubectl get svc -n octelium octelium-ingress-dataplane

The EXTERNAL-IP field in the output of that command should be set to your machine's actual main IP address, not <pending> or an empty value. If your Cluster VM is behind NAT, you might need to uninstall the Cluster via ./install-cluster.sh --uninstall and re-install with the --nat flag as described in the advanced installation options section below.

You can also run curl -k https://<DOMAIN> to verify that the Octelium ingress is running. The -k flag is used here in order to ignore TLS certificate errors, since the initial TLS certificate is a self-signed certificate that should be replaced via octops cert or kubectl create secret tls as shown above. That curl command should output the HTML of the login web page.

Unable to Access Services

NOTE

This section assumes that you can successfully access the Cluster via the octelium and octeliumctl commands (e.g. octeliumctl get service, octelium status, etc...) and you are also able to access the Cluster's web login page located at the URL https://<DOMAIN> via your browser.

If you can successfully connect to the Cluster via octelium connect but cannot access any Service at all while connected, even though you can access public Services through your web browser, this might be a NAT-related issue in your Cluster VM/machine. This problem shows up, once connected via sudo -E octelium connect, when you try to access the demo-nginx Service that is created by default during the Cluster installation via curl demo-nginx but end up with an error such as Could not resolve host: demo-nginx. You can verify that this is a NAT-related issue in your VM as follows:

  • First verify that you can actually successfully access that demo-nginx Service via your web browser. You can go to the Portal at https://portal.<DOMAIN> and visit that Service from the list of Services shown. If you cannot access https://portal.<DOMAIN> or https://demo-nginx.<DOMAIN>, then this is not a NAT-related issue.

  • Now open a terminal and connect to the Cluster as a non-root unprivileged OS user as follows:

# Enable debugging in the octelium CLI
export OCTELIUM_DEV=true
# Now connect as a normal/non-root OS user
# And map the demo-nginx Service to localhost:9090
octelium connect -p demo-nginx:9090
# Now access it from another terminal
curl localhost:9090

You should be getting the same timeout error in this unprivileged mode (read more here). If you can successfully curl the Service in the unprivileged mode, then this is not a NAT-related issue, but most probably a DNS-related issue on your own client-side machine.

  • Now, from your Cluster VM/machine, obtain your main IP address as follows:
ip addr show "$(ip route show default | awk '/default/ {print $5}')" | grep "inet " | awk '{print $2}' | cut -d'/' -f1

This should be a public IP address (read more in Wikipedia here) if you installed the Cluster via ./install-cluster.sh --domain <DOMAIN> as shown in the beginning of this guide. If you obtained a private IP address as the output of the command above, then the solution to this problem is to add the --nat flag (read more in the advanced installation options section here) as follows:

# First uninstall the currently installed Cluster
./install-cluster.sh --uninstall
# Then re-install the Cluster
./install-cluster.sh --domain <DOMAIN> --nat
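If you want to script the private-versus-public check above, the RFC 1918 private ranges can be matched directly. Here is a minimal sketch for illustration only (the installer performs its own detection):

```shell
# Sketch: return success (0) if the given IPv4 address is in a private
# RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
is_private_ip() {
  case "$1" in
    10.*|192.168.*) return 0 ;;
    172.*)
      # 172.16.0.0/12 spans 172.16.x.x through 172.31.x.x
      second=${1#172.}
      second=${second%%.*}
      [ "$second" -ge 16 ] && [ "$second" -le 31 ]
      ;;
    *) return 1 ;;
  esac
}
# Example usage with the main-address one-liner shown earlier:
# is_private_ip "$(ip addr show "$(ip route show default | awk '/default/ {print $5}')" | grep "inet " | awk '{print $2}' | cut -d'/' -f1)" && echo "consider installing with --nat"
is_private_ip "10.0.0.5" && echo "10.0.0.5 is private"
```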

If you want to install the Cluster on some internal virtual machine inside an internal/private network (e.g. a microVM inside your host) for internal testing purposes, then also add the --force-machine-ip flag as follows:

./install-cluster.sh --domain <DOMAIN> --nat --force-machine-ip
NOTE

Still having problems trying to install the Cluster? Join our Discord, Slack or Reddit channels or open a GitHub issue for support.

Tips

If this is your first Octelium Cluster, the following tips might be useful for you:

  • Once you log in for the first time via the octelium login command as shown above, you should create at least one User with enough access permissions for yourself (see the initial configuration example here). The authentication token Credential (read more about Credentials here) used in this initial octelium login command is issued for the root User, which is created automatically by the Cluster during the installation process. It should be noted that there is nothing special about the root User in Octelium; you can actually delete it once you add your own Users and other resources. Once you add your own Users with enough permissions to act as Cluster administrators, IdentityProviders, and other resources, you can safely remove the initial authentication token Credential as follows:
octeliumctl delete cred root-init
  • The Octelium CLIs, namely octelium and octeliumctl, are designed to work with multiple Octelium Clusters simultaneously. That's why you need to add the --domain <DOMAIN> flag to each command. Obviously, this becomes tiresome if you're running many commands against a single Cluster in the same shell. It's therefore recommended to set the OCTELIUM_DOMAIN environment variable to your domain so that you can run all your commands without the --domain flag. Here is an example:
export OCTELIUM_DOMAIN=<DOMAIN>
# List the Cluster Services
octeliumctl get service
# OR
octeliumctl get svc
# List the Cluster Users
octeliumctl get user
# List the Cluster Sessions
octeliumctl get session
# OR
octeliumctl get sess
# Create an authentication token Credential for "root" User
octeliumctl create credential --user root --policy allow-all first-cred
  • After you login for the first time via the octelium login command, you might want to connect to the Cluster and access its available Services (read more about connecting via the octelium CLI here). For example, the Cluster creates a demo-nginx Service during the installation. You can access it as follows:
export OCTELIUM_DOMAIN=<DOMAIN>
# Connect via the detached mode
octelium connect -d
# OR in foreground
sudo -E octelium connect
# Then access the "demo-nginx" Service
# This Service is installed during the Cluster installation
curl demo-nginx
# OR via the rootless mode and mapping the Service to a localhost port
octelium connect -p demo-nginx:9000
# And then access the Service
curl localhost:9000
  • TLS certificates usually expire within a few months. For example, certificates issued by Let's Encrypt expire after 90 days by default. It's better to automate the process of rotating TLS certificates and providing them to your Octelium Cluster, either via octops cert or kubectl create secret tls, using some simple bash scripts or, even better, a FOSS solution like cert-manager (read more here).

  • Since Octelium is a zero trust architecture, all access to protected resources, represented by Services, must be explicitly allowed by Policies (read more about Policies and access control here). As shown in the initial configuration example here, the User alice has the allow-all Policy attached, which grants her access to all Services unless further Policies override it by explicitly denying her access. Octelium installs the allow-all and deny-all Policies during the installation process to make it easier for you to directly attach them to your different resources. Attaching Policies is not restricted to just Users as in the example above; you can attach Policies to Users (read more here), Services (read more here), Groups (read more here), and Namespaces (read more here). You can also attach Policies to your issued Credentials (read more here).
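As a concrete illustration of the certificate-rotation tip above, here is a minimal sketch of a helper function that re-provides a renewed certificate to the Cluster via octops cert. The function name and hook path are illustrative; certbot does export RENEWED_LINEAGE when invoking deploy hooks:

```shell
# Sketch: re-provide a renewed certificate to the Cluster.
# deploy_renewed_cert is a hypothetical helper; a call to it would go in a
# certbot deploy hook (e.g. /etc/letsencrypt/renewal-hooks/deploy/octelium.sh).
deploy_renewed_cert() {
  domain="$1"    # the Cluster domain
  lineage="$2"   # certbot exports this as RENEWED_LINEAGE on renewal
  octops cert "$domain" \
    --key "$lineage/privkey.pem" \
    --cert "$lineage/fullchain.pem" \
    --kubeconfig /etc/rancher/k3s/k3s.yaml
}
# In the hook script itself:
# deploy_renewed_cert <DOMAIN> "$RENEWED_LINEAGE"
```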

Kubernetes

The installer in this guide automatically installs k3s as the underlying Kubernetes cluster, and then installs the Octelium Cluster on top of it. You can access that Kubernetes cluster, typically via kubectl, by setting the kubeconfig file to /etc/rancher/k3s/k3s.yaml. Here is an example:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
kubectl get pods -A
kubectl get pods -n octelium

Advanced Installation Options

The above ./install-cluster.sh --domain <DOMAIN> command should work automatically with any VM/VPS with a public IP address (e.g. DigitalOcean, Hetzner, Vultr, etc...). Therefore, you generally do not need to read this section unless you want to install the Cluster in a special environment (e.g. the VM is behind NAT and/or a firewall, you want to use QUIC tunneling, the Cluster is meant for internal testing/playground use cases, etc...).

  • If your VM/VPS is behind NAT where the IP address of the default/main network device of the machine is a private/internal IP address instead of its public IP address (e.g. EC2 and GCP instances under default configurations), then you will need to use the --nat flag as follows:
./install-cluster.sh --domain <DOMAIN> --nat
NOTE

The --nat flag instructs the Cluster's Ingress to use and listen on the main internal/private VM address instead of using the machine's public address.

If you want to install the Cluster in an internal network behind NAT and access it internally from within that private network (this is only recommended for internal testing use cases), you can add the --force-machine-ip flag in addition to the --nat flag. The --force-machine-ip flag instructs the WireGuard listener in the Cluster's GatewayAgent to use and listen on the main internal/private IP address instead of the machine's public address.

NOTE

If the VM/VPS is behind a firewall (e.g. EC2, Google GCP VMs, etc...), you will need to open the TCP port 443 for Octelium ingress and the UDP port 53820 for WireGuard.

  • By default, the installation script attempts to automatically obtain the public IP address of the VM/VPS. You can explicitly set the public IP address via the --public-ip flag. Here is an example:
./install-cluster.sh --domain <DOMAIN> --public-ip 1.2.3.4
  • By default the Octelium Cluster uses only WireGuard for tunneling. You can additionally enable the experimental QUIC-based tunneling mode (read more here) via the --quicv0 flag as follows:
./install-cluster.sh --domain <DOMAIN> --quicv0
NOTE

If your VPS is behind a firewall, you will also have to open the UDP port 8443 for the QUIC listener. You also need to create an additional public DNS entry as shown in detail here.

  • By default the latest Cluster version is installed. You can install a specific Cluster version via the --version flag as follows:
./install-cluster.sh --domain <DOMAIN> --version 0.1.2

Upgrade the Cluster

You can later upgrade your Octelium Cluster via the octops upgrade command (read more here) from within your VPS/VM as follows:

export KUBECONFIG="/etc/rancher/k3s/k3s.yaml"
# First you might want to check for available upgrades via --check flag
octops upgrade <DOMAIN> --check
# Now you can actually upgrade the Cluster
octops upgrade <DOMAIN>

Uninstall the Cluster

If you ever want to uninstall the Cluster later for whatever reason, including to re-install the Cluster, you can simply do that using the same installation script via the --uninstall flag as follows:

./install-cluster.sh --uninstall

This command removes the Octelium Cluster and its underlying k3s Kubernetes cluster.

What Now?

Your Cluster has now been successfully installed and is running. You can learn more about how to manage and use the Cluster in the following guides:

  • First Steps to Managing the Cluster here.
  • Connecting the Cluster via the octelium client here.
  • Managing Services here.
  • Access control and Policies here.
  • Adding IdentityProviders here.
  • Managing Users here.
  • Issuing Credentials (e.g. authentication tokens) for your Users here.
  • Using Authenticators (e.g. enabling FIDO Passkey login and TOTP MFA) here.

You can also visit some detailed examples in the following guides:

  • Deploy and secure access to Next.js/Vite web apps here.
  • Octelium as an ngrok alternative here.
  • Octelium as an API gateway here.
  • Octelium as an AI gateway here.
  • Octelium as an MCP gateway here.
  • Passwordless access to NeonDB here.
  • Deploying Pi-Hole as a managed container here.
  • Deploying and securing access to VSCode here.
  • Hosting a website from behind NAT here.
  • Secretless secure access to AWS S3 here and Lambda functions here.
© 2026 octelium.com, Octelium Labs, LLC. All rights reserved.
Octelium and Octelium logo are trademarks of Octelium Labs, LLC.
WireGuard is a registered trademark of Jason A. Donenfeld