An Octelium Cluster is designed to be managed similarly to Kubernetes clusters. The main way to manage an Octelium Cluster is via the `octeliumctl` CLI. If you're accustomed to `kubectl`, you will find yourself at home very quickly: `octeliumctl` follows the same declarative philosophy, enabling you to define your resources in one or more YAML files and apply all changes to the Cluster with a single command (i.e. `octeliumctl apply`), synchronizing its state. The declarative approach lets you grow your resources in an organized, trackable way that can be stored in a git repository, where you can effortlessly update or roll back your Cluster state with a single command.
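For example, the workflow used throughout this guide looks roughly like the following sketch (the file names are illustrative; `octeliumctl apply` processes the YAML files in the directory you point it at):

```bash
# All Cluster resources live in a single directory that can be tracked in git
ls /path/to/octelium/<DOMAIN>
# e.g. services.yaml  users.yaml  policies.yaml (illustrative file names)

# Apply every change and synchronize the Cluster state in one command
octeliumctl apply --domain <DOMAIN> /path/to/octelium/<DOMAIN>
```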
Creating our first Service
Each protected resource is represented in the Cluster by a Service. A Service is implemented by an identity-aware proxy (IAP) called Vigil, which abstracts all the dynamic network-layer details of the protected resource behind it. It can provide secure secret-less access that eliminates sharing and managing L-7 credentials such as API keys and database passwords (read more about secret-less access here) for various protocols, including HTTP, SSH, PostgreSQL and MySQL, in addition to protecting plain TCP and UDP (read more about Service modes here). The upstream of the Service (i.e. the actual protected resource) can be:
- An internal resource running in any private network (e.g. on-prem, a private cloud, your own laptop behind a NAT, etc.), addressed either by a static IPv4 or IPv6 address or by an FQDN with dynamic endpoints, as is the case with Kubernetes services, for example. This also includes internal resources that are directly reachable from the private network within which the Octelium Cluster, and its underlying Kubernetes cluster, is running.
- A publicly reachable resource such as SaaS APIs, databases and SSH servers (see some examples here and here).
- A containerized app or microservice that Octelium automatically deploys, scales and secures access to, serving it as a Service (read more about managed containers here).
We are going to define our first Service, named `first-service`, whose upstream is the public URL https://www.google.com:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: HTTP
  config:
    upstream:
      url: https://www.google.com
```
Serving the public website https://www.google.com as a Service, however, isn't very useful, beyond demonstrating that a Service can serve public or internal/private FQDNs pointing to dynamic upstreams with changing IP addresses, which is something remote-access VPNs cannot do since they operate at layer 3. We can do something a little more interesting, like deploying the `nginx` container image and serving it as a Service (read more about managed containers here) as follows:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        replicas: 3
```
A Service can also provide "secret-less" access for authorized Users to an upstream that requires an application-layer credential, such as an HTTP bearer access token (read more about secret-less access here). Octelium's application-layer awareness allows you to eliminate managing and distributing such L-7 credentials, which in many practical cases are long-lived and over-privileged, at any scale. Here is an example:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: HTTP
  config:
    upstream:
      url: https://api.openai.com
    http:
      path:
        addPrefix: /v1
      auth:
        bearer:
          fromSecret: openai-api-key
```
The highlighted `openai-api-key` is the name of a Secret that holds the actual API access token. It is referenced in the Service by name so that sensitive data is not stored alongside the rest of the Cluster configuration, which may be kept in git repositories, for example (read more about creating Secrets here).
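As a sketch of how that Secret might be created beforehand (this assumes the `octeliumctl create secret` command covered in the Secrets documentation; check there for the exact syntax and how the token value is supplied):

```bash
# Hypothetical sketch: create the Secret that holds the actual API token,
# so only its name ever appears in the Service manifest
octeliumctl create secret openai-api-key
```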
Octelium's application-layer awareness is not exclusive to HTTP-based Services. For example, you can define an SSH Service that provides password-less access to the upstream server as follows:
```yaml
kind: Service
metadata:
  name: ssh1
spec:
  mode: SSH
  port: 2022
  config:
    upstream:
      url: ssh://address-to-host
    ssh:
      user: root
      auth:
        password:
          fromSecret: ssh1-password
      upstreamHostKey:
        key: ssh-rsa AAAA...
```
Octelium also supports secret-less access for SSH using private keys (read more here). It additionally supports an "embedded SSH" mode, where SSH is served by an embedded SSH server inside the `octelium` client itself, without relying on an existing SSH server on the host it runs on. This can be especially useful for constrained environments such as containers and IoT fleets (read more about the embedded SSH mode here).
Octelium currently supports several L-7 aware modes: HTTP, SSH, PostgreSQL, MySQL, DNS, gRPC, Web and Kubernetes, in addition to the raw TCP and UDP modes.
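As an illustrative sketch of one of those modes, a POSTGRES Service might look roughly as follows (the Service name and field names here are assumptions modeled on the SSH example above; refer to the PostgreSQL mode documentation for the authoritative schema):

```yaml
kind: Service
metadata:
  name: pg-main              # hypothetical Service name
spec:
  mode: POSTGRES
  config:
    upstream:
      url: postgres://address-to-host:5432
    postgres:
      user: postgres          # assumed field: upstream database user
      auth:
        password:
          fromSecret: pg-password   # Secret holding the database password
      database: production    # assumed field: enforced database
```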
Now, to actually create our Service `first-service`, we use the `octeliumctl apply` command as follows:
```bash
octeliumctl apply --domain <DOMAIN> /path/to/octelium/<DOMAIN>
```
Since the Octelium CLIs (i.e. `octelium` and `octeliumctl`) are designed to work with multiple Clusters, each identified by its own domain, you have to add the `--domain` flag to every command. It's much easier to define the domain once as an environment variable in your shell and then use any command without the `--domain` flag, as follows:
```bash
export OCTELIUM_DOMAIN=<DOMAIN>
```
Now let's connect to our Cluster as follows:
```bash
octelium connect -d
```
Now that we're connected to the Cluster, we can access the Service `first-service` using any tool that speaks HTTP. Let's try `curl`, for example:
```bash
curl first-service
# This is equivalent to
curl first-service.local.<DOMAIN>
```
Why does the hostname `first-service` work? Simply because when you connect to a Cluster via the `octelium connect` command, Octelium automatically configures your machine's DNS and adds the suffix `.local.<DOMAIN>`, the common suffix for all Services in the Cluster, to your machine's DNS search domains, so you don't have to type the entire private FQDN yourself.
You can also run `octelium connect` as a completely unprivileged process and map the Service `first-service` to a localhost port as follows:
```bash
octelium connect -p first-service:8080
```
In the above example, we mapped the Service to the localhost port `8080`. Now we can access it as follows:
```bash
curl localhost:8080
```
You can read more about connecting to Clusters via the `octelium` CLI and its more advanced options here.
So far, in order to access our Service `first-service`, Users have to use the `octelium` CLI and connect to the Cluster first. Octelium, however, also supports a client-less BeyondCorp mode, which enables you to securely expose an HTTP-based Service publicly so that it can be accessed by authorized HUMAN Users via their browsers, and even by WORKLOAD Users through the standard OAuth2 client credentials authentication flow (read more here). You can do so very simply as follows:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: WEB
  isPublic: true
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
```
And now the Service can be accessed publicly by authorized Users via their browsers at the public URL https://first-service.<DOMAIN>.
Octelium can also expose a Service entirely for anonymous access. This allows you to effectively use Octelium as a self-hosted PaaS or hosting platform, where you publicly expose HTTP-based Services to the internet whose upstreams might be running anywhere or be deployed as managed containers, as we have seen earlier. You can read more about the anonymous access mode here. Here is an example:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: WEB
  isPublic: true
  isAnonymous: true
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
```
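Once applied, anyone on the internet can reach the Service without authenticating, for example:

```bash
curl https://first-service.<DOMAIN>
```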
You can also use dynamic configuration in order to, for example, route to different upstreams, set different upstream credentials, set different request/response headers, etc., depending on the request's context (read more about dynamic configuration here). Here is a simple example that can be used for an API gateway use case (read more here):
```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  isPublic: true
  dynamicConfig:
    configs:
      - name: v1
        upstream:
          url: https://apiv1.example.com
        http:
          path:
            removePrefix: /v1
      - name: v2
        upstream:
          url: https://apiv2.example.com
        http:
          path:
            removePrefix: /v2
    rules:
      - condition:
          match: ctx.request.http.path.startsWith("/v1")
        configName: v1
      - condition:
          match: ctx.request.http.path.startsWith("/v2")
        configName: v2
```
Dynamic configuration can also be used with other modes such as SSH, POSTGRES and MYSQL. For example, you can dynamically force certain Users to log in as specific SSH users based on identity or context via policy-as-code. For PostgreSQL or MySQL, you can force Users to use specific database users/passwords, as well as different databases and even upstreams, based on identity and context.
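Here is a rough sketch of that idea for SSH, modeled on the dynamic configuration example above (the Group name, the SSH users and the placement of the `ssh` block inside the named configs are assumptions; see the dynamic configuration and SSH mode docs for the exact schema):

```yaml
kind: Service
metadata:
  name: ssh1
spec:
  mode: SSH
  config:
    upstream:
      url: ssh://address-to-host
    ssh:
      user: ubuntu            # default SSH user for everyone else (hypothetical)
  dynamicConfig:
    configs:
      - name: as-root
        upstream:
          url: ssh://address-to-host
        ssh:
          user: root          # members of the "admins" Group log in as root
    rules:
      - condition:
          match: '"admins" in ctx.user.spec.groups'
        configName: as-root
```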
Creating a User
There are two types of Users (you can read about User management in detail here): HUMAN Users and WORKLOAD Users, the latter being used by non-human entities and service accounts such as servers, containers, applications, etc. Both User types can use the private, client-based ZTNA mode, which acts as a zero-config VPN from the User's perspective, where Users address Services via stable private FQDNs and hostnames assigned by the Cluster. Both User types can also securely access publicly exposed HTTP-based Services via the client-less public BeyondCorp mode (read more here). For example, HUMAN Users can access web-based Services with nothing more than their browsers, while WORKLOAD Users can use the OAuth2 client credentials flow, enabling your applications, written in any programming language, to access any publicly exposed HTTP-based Service (e.g. HTTP and gRPC APIs, Kubernetes API servers, etc.) using standard OAuth2 libraries, without any special SDKs or installed clients (read more here). Moreover, Golang-based applications can use the official Golang SDK (read more here) to control the Cluster and access its Services.
We are now going to create our first User for a friend so that they can connect to our Cluster and access our Service `first-service`. As always, we add a new YAML file dedicated to Users, in this example `users.yaml`, in the same directory (i.e. `/path/to/octelium/<DOMAIN>`).
Now we can define our User `john` as follows (a minimal definition; the email shown below is illustrative):
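```yaml
kind: User
metadata:
  name: john
spec:
  type: HUMAN
  email: john@example.com   # replace with your friend's actual email
```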
And to actually create our User, we use the `octeliumctl apply` command as follows:
```bash
octeliumctl apply /path/to/octelium/<DOMAIN>
```
HUMAN Users can use their emails to authenticate to the Cluster via their web browsers through any OpenID Connect or SAML 2.0 SSO identity provider (IdP), as well as GitHub OAuth2, configured via IdentityProviders. You can read in more detail about creating and managing IdentityProviders here.
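As a rough illustration only (the field names and IdP details below are assumptions and should be checked against the IdentityProviders documentation), an OpenID Connect IdentityProvider might be declared along these lines:

```yaml
kind: IdentityProvider
metadata:
  name: company-sso               # hypothetical name
spec:
  oidc:
    issuerURL: https://accounts.example-idp.com   # your IdP's issuer URL (assumed field name)
    clientID: my-client-id                        # OAuth2 client registered at the IdP
    clientSecret:
      fromSecret: oidc-client-secret              # Secret holding the client secret
```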
WORKLOAD Users, on the other hand, can authenticate themselves via the `octelium login` or `octeliumctl login` commands using authentication token Credentials (read more here), via OAuth2 client credentials as mentioned above, or even via the "secret-less" OpenID Connect identity assertion mode (read more here).
A User can interact with the Cluster and access its Services only through a valid Session, which is automatically created by the Cluster upon successful authentication via an authentication token, OAuth2 client credentials or an IdentityProvider. A User needs to re-authenticate periodically to keep the Session valid until it eventually expires. You can read more about Session management here.
Access Control
Now, while our friend `john` can actually connect to our Cluster, he still cannot access the Service `first-service` unless we explicitly allow him to via a Policy. To do so, we can, for example, allow anybody whose email belongs to our domain example.com to access `first-service` as follows:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: HTTP
  config:
    upstream:
      url: https://example.com
  authorization:
    inlinePolicies:
      - spec:
          rules:
            - effect: ALLOW
              condition:
                match: ctx.user.spec.email.endsWith("@example.com")
```
As you can see, we just defined an inline Policy inside our Service to explicitly allow Users whose email belongs to the example.com domain to access the Service `first-service`. This is a very rudimentary example of a resource-based Policy defined inside our Service. However, we can also define the Policy as a standalone, reusable resource and attach it to any Service, Namespace, User, Group or Session. Here is how to define the above inline Policy as a standalone Policy:
```yaml
kind: Policy
metadata:
  name: first-policy
spec:
  rules:
    - effect: ALLOW
      condition:
        match: ctx.user.spec.email.endsWith("@example.com")
```
Now we attach it to our Service as follows:
```yaml
kind: Service
metadata:
  name: first-service
spec:
  mode: HTTP
  config:
    upstream:
      url: https://example.com
  authorization:
    policies: ["first-policy"]
```
We can also define a Policy for a whole set of Users. This is what a Group is for, among other functionalities (read more about Groups here). Let's define our first Group, `friends`, and attach `john` to it.
```yaml
kind: Group
metadata:
  name: friends
spec: {}
```
Now let's attach `john` to our Group `friends` as follows:
```yaml
kind: User
metadata:
  name: john
spec:
  type: HUMAN
  groups: ["friends"]
```
Now let's redefine our Policy `first-policy` to allow any User belonging to the Group `friends`, as follows:
```yaml
kind: Policy
metadata:
  name: first-policy
spec:
  rules:
    - effect: ALLOW
      condition:
        match: '"friends" in ctx.user.spec.groups'
```
We could make our Policy more fine-grained. Let's say we want to allow our `friends` to access the Service except for any URL whose path starts with `/admin`.
```yaml
kind: Policy
metadata:
  name: first-policy
spec:
  rules:
    - effect: DENY
      condition:
        all:
          of:
            - match: '"friends" in ctx.user.spec.groups'
            - match: ctx.request.http.path.startsWith("/admin")
    - effect: ALLOW
      condition:
        match: '"friends" in ctx.user.spec.groups'
```
Because DENY rules override ALLOW rules, any User belonging to the `friends` Group who tries to access a path that starts with `/admin`, such as `/admin/homepage`, will be denied access. Other paths, such as `/` or `/dashboard`, will still be allowed, as they are explicitly permitted by the ALLOW rule.
Access control is the essence of the zero trust security model, and this example is just the simplest use case of what you can do with Octelium's access control system. Octelium's attribute-based access control (ABAC) system using policy-as-code is designed to be extremely dynamic at any scale. It also supports rules written in Open Policy Agent (OPA), rule priorities and nested conditions, and it can be extended by adding attributes to your different resources (e.g. Users, Groups, Services, etc.) from information fed by external tools such as IAM platforms, SIEM tools, threat intelligence tools, incident alerting and on-call management tools, and so on. You can read about Policies and access control in more detail here.
What Now?
This was just a quick guide to show you the main features of Octelium. Octelium's architecture is designed to be flexible enough to be used as a Zero Trust Network Access (ZTNA) solution, a complete solution for secure tunnels, an API gateway (read more here), an AI gateway (read more here), and even as a more advanced alternative to Kubernetes ingresses/load balancers, as well as a self-hosted PaaS-like deployment and hosting platform to deploy, scale and provide secure or publicly anonymous access to your containerized applications, such as Vite.js/Next.js/Astro web apps (see more here).