How Octelium Works

Introduction

A single Octelium system is called a Cluster and is defined and addressed by its domain (e.g. example.com, octelium.example.com, etc...). The Cluster provides a self-hosted, scalable and unified zero trust architecture that gives Users, both humans and workloads, identity-based, application-layer aware, secret-less secure access to any private/internal resource behind NAT in any environment (e.g. on-prem, one or more private clouds, your own laptop behind a NAT, IoT, etc...) as well as to publicly protected resources such as SaaS APIs and databases, enforcing identity-based, L-7 aware, context-aware access control through policy-as-code on a per-request basis. Octelium, however, is built with features that far exceed what a typical remote access solution can do, including operating as a PaaS-like deployment platform and providing public anonymous access in addition to secure zero trust access.

The Cluster's architecture is designed so that Users can access protected resources, where each protected resource is represented in the Cluster by a Service implemented by an identity-aware proxy (IAP), through two zero trust network access (ZTNA) modes:

  • Client-based (also called agent-based or client-initiated ZTNA), where Users use a lightweight client (namely the octelium CLI tool) to connect to the Cluster through WireGuard tunnels (as well as, experimentally, QUIC-based tunnels) and access Services. From the User's perspective, this mode simply acts as a zero-config VPN where Services are addressed via stable private FQDNs and hostnames assigned by the Cluster.

  • Client-less (also called BeyondCorp or service-initiated ZTNA), where Users access Services through an internet-facing reverse proxy without needing to install any client on their side. Not only does this mode enable human Users to access web-based Services via their browsers like typical protected internal or public SaaS web apps, it also enables workload Users to directly access any HTTP-based Service such as APIs, Kubernetes clusters, gRPC services, etc... via the standard OAuth2 client credentials flow as well as standard bearer access tokens, so that your applications, written in any programming language, can securely access such Services without any additional client or special SDK (see the sketch after this list). You can read more about the BeyondCorp mode here. Moreover, Octelium can also provide public anonymous access (read more here).
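
For illustration, a workload could obtain a short-lived access token via the standard OAuth2 client credentials grant and then call an HTTP-based Service directly over its public FQDN. This is only a hedged sketch: the token endpoint path, the Service hostname and the credential variables below are assumptions for illustration, not documented values.

# 1) Obtain a short-lived access token via the standard OAuth2 client credentials grant.
#    The token endpoint path below is an illustrative assumption.
curl -s -X POST https://octelium.example.com/oauth2/token \
  -d grant_type=client_credentials \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET"
# The response is a standard OAuth2 JSON object containing "access_token",
# "token_type" and "expires_in".

# 2) Call an HTTP-based Service directly over its public FQDN using the bearer token.
#    The Service hostname below is an illustrative placeholder.
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://myservice.octelium.example.com/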

An Octelium Cluster runs on top of Kubernetes. The Cluster can fit on a single-node Kubernetes cluster installed on a single cheap EC2 or DigitalOcean VM instance. In a typical production environment, however, the Cluster should run on top of a scalable on-prem or managed Kubernetes cluster. It's noteworthy that even though Octelium operates on top of Kubernetes, you don't really need any Kubernetes experience to manage, operate or use Octelium.

The Cluster is designed to be managed in a centralized, declarative and programmatic way that is very similar to how Kubernetes itself is managed. Cluster administrators can define the Cluster's resources in YAML files, store them in git repositories, and (re)produce the entire state of the Cluster with a single command (namely octeliumctl apply), so the entire state can be updated or rolled back effortlessly, in a similar way to other GitOps-friendly systems such as Kubernetes (read more here).
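
A minimal sketch of this workflow follows, assuming octeliumctl apply accepts a file path; the Service fields shown are illustrative assumptions rather than the authoritative schema (see the linked documentation for the exact format).

# Declare a resource in a YAML file. The field names below are illustrative
# assumptions, not the authoritative Service schema.
cat > service.yaml <<'EOF'
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      url: http://10.0.0.10:8080
EOF

# Apply the declared state; re-running the same command after an edit (or a git
# revert) updates or rolls back the Cluster state, GitOps-style.
octeliumctl apply service.yaml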

The Cluster consists of various components which can be classified into a data plane and a control plane. In a typical production environment, the Cluster should have at least one Kubernetes node dedicated solely to the control plane. The Cluster uses one or more Kubernetes nodes for its data plane; each data plane node is used by the Cluster as a Gateway, acting as a host for the Services running on it.

Data Plane Overview

Human Users can authenticate themselves using OpenID Connect or SAML-based IdentityProviders (read more about IdentityProviders here), while workload Users can authenticate themselves via authentication tokens (read more here) or federated OpenID Connect-based assertions (read more here). Once a User is authenticated, a Session is created for that User and tied to a short-lived access token credential (4 hours by default, configurable via the ClusterConfig) representing that Session, enabling it to access protected resources through Services whenever the request is authorized (read more here).

Now let's trace how a request flows from a User to a protected resource through its Service.

From the User to the Service

When a User starts accessing a Service, the data plane flow takes different but equivalent paths to reach the Service depending on the mode (i.e. whether privately using the client-based mode via the octelium CLI tool or publicly via the client-less mode) as follows:

  • In the client-based mode (i.e. via the octelium connect command), a User is connected to the Cluster through WireGuard/QUIC tunnels to all of its Gateways. Whenever the User accesses a Service, the request traffic is carried from the User's side over the internet through the tunnel to the Gateway hosting that Service, according to the Service's private address. Once it reaches the Cluster's side of the tunnel (i.e. the Gateway), the inner traffic is de-encapsulated and forwarded to the corresponding Service based on its private IP address.
  • In the client-less BeyondCorp mode, the User addresses the Service by its public FQDN like any public resource with a public DNS entry. The request reaches the Cluster through its frontend reverse-proxy component, called Ingress, which forwards the request to the corresponding Service based on its FQDN (see the sketch after this list).
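
To make the two paths concrete, here is a rough sketch from a User's shell. The private hostname and the public FQDN are illustrative placeholders, and the User is assumed to have already logged in to the Cluster.

# Client-based mode: establish the WireGuard/QUIC tunnels to the Gateways (typically
# kept running in its own terminal), then address Services by their stable
# Cluster-assigned private hostnames.
octelium connect
curl https://myservice.myns.local/   # the private hostname is an illustrative placeholder

# Client-less (BeyondCorp) mode: no client at all; address the Service by its public
# FQDN through the Cluster's Ingress and authenticate with a bearer access token
# (e.g. one obtained via the client credentials flow sketched earlier).
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://myservice.example.com/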

At the Service

A Service is implemented by an identity-aware proxy (IAP) via a component called Vigil that runs as a Kubernetes pod inside a Gateway (i.e. a data plane Kubernetes node). Each Service belongs to a Namespace, which groups its child Services by a common functionality (e.g. project names, environments such as production or staging, etc...) and also acts as the parent domain name of such Services. You can read in detail about Namespaces here.

For every new request coming from the User, Vigil builds up the request context: simply the information required to authenticate and identify the User's Session, as well as other information tied to the request such as application-layer specific information (e.g. HTTP request headers, path, method, etc...). Vigil then forwards this information to another component called Octovigil, which mainly acts as the policy decision point (PDP). Octovigil is the component that actually authenticates and then authorizes the request by evaluating all the Policies that control access for that specific request. Once the decision to allow or deny the request is taken, Octovigil forwards the decision back to Vigil. If the request is allowed, Vigil proceeds with the request to the actual protected upstream resource. You can read in detail about access control and Policy management here.

Unlike VPNs, which operate and enforce access control at layer 3 using network segmentation, Octelium's application-layer awareness enables it to understand various layer-7 protocols, including HTTP-based Services such as web apps, APIs, gRPC services and Kubernetes clusters, as well as SSH, DNS and PostgreSQL-based and MySQL-based databases. Application-layer awareness unlocks new capabilities in 4 main directions:

  • Access control: Vigil is capable of extracting layer-7 information that can be taken into account inside your policies. For example, in HTTP-based Services this can include access control by request headers, method, path, body content, etc... (read more here). For Kubernetes, this can include access control by resource, namespace, verb, etc... (read more here). For PostgreSQL-based and MySQL-based databases, this can include access control by users, databases and queries (read more here and here). Octelium provides you, via Octovigil, the policy decision point (PDP), with a modern, centralized, scalable, fine-grained, dynamic, context-aware, layer-7 aware, attribute-based access control (ABAC) system on a per-request basis, using modular and composable Policies that enable you to write your policy-as-code in CEL as well as OPA (Open Policy Agent); a sketch of such a Policy follows this list. You can read in detail about Policy management here. It's also noteworthy that Octelium intentionally has no notion whatsoever of an "admin" or "superuser" User. In other words, zero standing privileges are the default state, and all permissions, including those to the API Server, can be restricted via Policies and tied to time and context on a per-request basis.

  • Secret-less access to upstreams: Vigil is capable of injecting the application-layer specific credentials required by the upstream protected resource on the fly, totally eliminating the need to share and manage such typically long-lived and over-privileged credentials, as well as the need to distribute them to Users who, in turn, would have to carry the burden of managing and storing them securely. For example, in HTTP-based Services this can be API keys and access tokens obtained from OAuth2 client credentials flows (read more here); in Kubernetes this can be kubeconfig files (read more here); in SSH it can be passwords or private keys (read more here); in PostgreSQL-based and MySQL-based databases this can be passwords (read more here and here). Vigil can also inject the mTLS keys used by any generic upstream that requires mTLS. This mechanism also allows you to grant access to Users on a per-request basis solely via your Policies, regardless of the permissions granted to the injected application-layer specific credentials used to access the upstream. You can read in detail about secret-less access here.

  • Dynamic configuration and routing: Octelium's application-layer awareness enables you to dynamically route to different upstreams (e.g. an API with multiple versions where each version is served by a different upstream), set different upstream L-7 credentials to provide dynamic secret-less access where each configuration corresponds to a different upstream context or account, and set other L-7 configurations depending on the mode (you can read more here).

  • Visibility and auditing: Vigil is built to be OpenTelemetry-ready and emits access logs in real time which not only clearly identify the subject (i.e. the User, their Session and Device if available) and the resource represented by the Service, but can also provide you with application-layer specific details of the request (e.g. HTTP request details such as paths and methods, PostgreSQL and MySQL database queries, etc...). You can read in detail about visibility and access logs here.
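
As a hedged illustration of such layer-7 aware, policy-as-code access control, here is a sketch of a Policy using a CEL condition. The resource structure and the ctx.* attribute names referenced from CEL are assumptions for illustration, not the authoritative Policy schema.

# policy.yaml -- illustrative Policy; field names and CEL attributes are assumptions.
cat > policy.yaml <<'EOF'
kind: Policy
metadata:
  name: allow-readonly-api
spec:
  rules:
    # Allow only GET requests under /v1/ for Users that belong to the "dev" Group.
    - effect: ALLOW
      condition:
        match: >
          ctx.user.groups.exists(g, g == "dev") &&
          ctx.request.http.method == "GET" &&
          ctx.request.http.path.startsWith("/v1/")
EOF
octeliumctl apply policy.yaml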

It's noteworthy to point out the mindful decision of separating Vigil, the policy enforcement point (PEP), from Octovigil, the policy decision point (PDP). This architecture enables both components to scale horizontally independently of one another. Since the PDP needs to locally store a possibly immense amount of information about the subjects (i.e. Sessions, Users, Groups and Devices) and resources (i.e. Services and Namespaces), as well as to evaluate complex logic set in Policies, it could be very impractical in terms of resource usage to embed all such information inside every Vigil instance at scale.

From the Service to the Upstream

Once the request is authorized by the Service, it proceeds to the protected resource, whose address can be one of the following:

  • A static private/public IP or FQDN address that is directly reachable from within the Cluster (e.g. a Kubernetes service/pod running on the same cluster, a private resource running in the same private network as the Cluster (e.g. the same AWS VPC), a public SaaS API protected by an access token, etc...).

  • A static private/public IP or FQDN address that is reachable indirectly through a connected User assigned to serve the Service. This opens the door to serving Services from anywhere outside the direct reach of the Cluster (e.g. private resources hosted on your laptop behind a NAT, Docker containers from anywhere, Kubernetes pods from other clusters, resources in multiple private clouds, etc...). In that case, the request proceeds from the Service over the WireGuard/QUIC tunnel corresponding to the client whose Session is serving the resource.

  • An upstream deployed and managed by the Cluster itself: the Octelium Cluster is also capable of acting as a PaaS-like deployment platform by reusing the underlying Kubernetes infrastructure to automatically deploy and scale containerized applications and serve them as Service upstreams (see the sketch after this list). You can read in detail about managed containers here.
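
As a rough sketch of that last flavor, here is an illustrative Service whose upstream is a container deployed and scaled by the Cluster itself; the container-related fields are assumptions for illustration, not the authoritative schema (see the managed containers documentation linked above).

# demo-app.yaml -- illustrative Service with a managed-container upstream; field
# names are assumptions.
cat > demo-app.yaml <<'EOF'
kind: Service
metadata:
  name: demo-app
spec:
  mode: HTTP
  config:
    upstream:
      container:
        image: ghcr.io/example/demo-app:latest
        port: 8080
EOF
octeliumctl apply demo-app.yaml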

It is noteworthy to point out that upstreams need to know absolutely nothing about the existence of the Cluster. From the upstream's point of view, it's just another request coming from its own network. That means that the entire resource-intensive process of access control, as well as visibility, is done completely within the Cluster.

A Platform on top of Kubernetes

So far, we have explained the data plane flow from the User to the Service's upstream. However, to automate, manage and scale this process for a system with an arbitrary number of Services and Users, we need a control plane. This is somewhat similar to what Kubernetes does with containers, but in our case the elementary unit of the platform is the identity-aware proxy, Vigil, instead of the container as in Kubernetes.

Interestingly enough, the Octelium Cluster runs on top of Kubernetes. While the Cluster's Users and administrators need to know absolutely nothing about Kubernetes, the choice of Kubernetes as the infrastructure platform for Octelium Clusters is a crucial one, as it enables the Octelium Cluster to seamlessly operate as a scalable and reliable distributed system without any manual intervention from the Cluster administrators. For example, the Cluster administrators need not bother about how to deploy the identity-aware proxy components whenever they add a new Service; simply creating a Service using the Octelium Cluster APIs or via the octeliumctl CLI automatically deploys all the underlying Kubernetes resources (e.g. the pods that run the Vigil containers) implementing that Service. This automatic orchestration provided by Kubernetes lets the Cluster administrators forget about the operational side, where all components can be managed, run and scaled up or down solely via the abstract Octelium Cluster APIs.

Moreover, using Kubernetes as infrastructure seamlessly enables Octelium Clusters to operate at any scale. For example, horizontally scaling Services translates to scaling the Kubernetes pods/containers running Vigil, and adding a new Gateway is done automatically by simply adding a new Kubernetes node. Finally, using Kubernetes as the infrastructure platform enables Octelium to reuse the same infrastructure to effortlessly deploy your containerized applications and serve them behind typical Octelium Services without any manual Kubernetes management, effectively providing PaaS-like capabilities (read more here).

A Unified Platform for Resource Access

While Octelium is primarily meant to operate as a zero trust network access (ZTNA) platform/BeyondCorp architecture that provides a modern zero trust alternative to commercial remote access/corporate VPNs and remote access tools, Octelium's goals exceed merely achieving secure remote access to internal and private resources: they include securing access to publicly protected resources such as SaaS APIs and databases, deploying containerized applications and serving them as upstreams, and even providing public anonymous access (read more here). Octelium can be used as a zero trust network access (ZTNA) platform/BeyondCorp architecture, a modern L-7 aware zero-config VPN, a self-hosted infrastructure for secure tunnels and reverse proxies, a PaaS-like hosting/deployment platform for both secure access and anonymous public access, a secure API gateway, an AI gateway to any LLM provider, as well as a personal infrastructure for a homelab.

  • Quick Cluster installation over a VPS/cloud VM here.
  • First Steps Managing the Cluster here.
  • Managing Services here.
  • Access control and Policies here.
  • Adding IdentityProviders here.