Overview

Every protected resource is represented in the Cluster by a Service. A Service is implemented by an identity-aware proxy (IAP) called Vigil, which abstracts all dynamic network-layer details of the protected resource behind it and provides access control, dynamic configuration such as routing, as well as application-layer visibility and access logging for various protocols including HTTP, SSH, PostgreSQL and MySQL, among others (read more about Service modes here).

NOTE

This guide is concerned with the configuration of Services. If you are instead looking for information about Service policies, read more about access control here.

Namespaces

Each Service belongs to a single Namespace. Namespaces are a way of grouping Services according to your needs, and you are free to create as many Namespaces as you want. If you do not explicitly specify a Namespace, the Service belongs to the default Namespace, which is created automatically upon the Cluster's installation. You can read more about how to manage Namespaces and how they affect the DNS and addressing of Services here.

Upstream

Users can only access protected resources through the Cluster via Services. Therefore, a Service must have an upstream whose address belongs to the actual protected resource. An upstream can be either directly accessible by the Service from within the Cluster or indirectly reachable via an actively connected User assigned to serve the Service. Moreover, a Service can deploy, manage and scale containers on top of the underlying Kubernetes infrastructure and serve them as an upstream (read more here).

Directly Accessible by the Cluster

The upstream can be a private or public IP address or FQDN that's reachable directly from within the Cluster. Some examples are:

  • A Kubernetes pod IP address or Kubernetes service FQDN in the same Kubernetes cluster hosting the Octelium Cluster (e.g. http://10.244.0.13, postgres://pg.default.svc, etc...)
  • A private cloud resource reachable from within the Cluster (e.g. postgresql://db1.123456789012.us-east-1.rds.amazonaws.com:5432, tcp://my-custom-app:9090)
  • A public FQDN of a public resource (e.g. https://api.sandbox.paypal.com, https://news.ycombinator.com)

Here is an example:

kind: Service
metadata:
  name: svc1
spec:
  mode: POSTGRES
  config:
    upstream:
      url: postgres://pg.my-private-network.local

Served through a Connected User

The upstream can also be a private/public IP address or FQDN that is reachable from a User's connected Session. This opens the door to serving Services from anywhere outside the Cluster (e.g. your laptop behind a NAT, Docker containers, Kubernetes pods in other clusters, IoT devices, etc...). Here is an example:

kind: Service
metadata:
  name: svc1
spec:
  mode: POSTGRES
  config:
    upstream:
      url: postgres://pg.my-private-network.local
      user: usr1

Now the Service is served by the upstream postgres://pg.my-private-network.local, which is accessible from any connected Session of the User usr1 that is willing to serve that Service.

When serving Services from the User's side, that User's client has to explicitly inform the Cluster (i.e. via the octelium connect command) that they are willing to serve one, multiple or all Services assigned to them. Read more about serving Services here.
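For example, a client could opt in to serving a specific Service or all of its assigned Services as follows, using the octelium connect flags mentioned in the Load Balancing section below (svc1 here is just the Service from the example above):

# Serve only the Service svc1 from this connected Session
octelium connect --serve svc1

# Or serve every Service assigned to this User
octelium connect --serve-all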

Load Balancing

You're not restricted to a single upstream for your Service. Octelium supports having more than one upstream address; currently, random load balancing is automatically enforced among the different upstreams. Here is an example:

kind: Service
metadata:
  name: svc1
spec:
  mode: HTTP
  config:
    upstream:
      loadbalance:
        endpoints:
          - url: https://backend1.example.com
          - url: https://backend2.example.com
          - url: https://backend3.example.com

If an upstream hosted by a User is served by multiple active Sessions (i.e. multiple instances of octelium connect --serve-all or octelium connect --serve <SERVICE_NAME>), then round robin load balancing is automatically enforced among the different hosting Sessions of that upstream.

Managed Containers

In addition to serving upstreams using their addresses as illustrated above, the Cluster is also capable of reusing the underlying Kubernetes cluster to automatically deploy your containerized applications and seamlessly serve them as the upstream for the Service. You can read more about the managed containers mode here.

Embedded SSH

In addition to serving SSH via the application-layer aware SSH mode (see Service modes below), Services can also provide secret-less SSH access served by connected octelium clients, which can run an embedded SSH server from within. You can read more about the embedded SSH mode here.

DNS

Each Service has a private FQDN <SERVICE>.<NAMESPACE>.local.<DOMAIN>. Moreover, if a Service is exposed publicly (read more here), then it additionally assumes the public FQDN <SERVICE>.<NAMESPACE>.<DOMAIN>. For example, if the Cluster domain is example.com, a Service webapp in the Namespace production has the private FQDN webapp.production.local.example.com and, if the BeyondCorp mode is enabled, the public FQDN webapp.production.example.com.

The name default is special for both Services and Namespaces, as it shortens the FQDN and makes the Service easier to access for Users. Any Service that belongs to the default Namespace, which is created automatically by the Cluster upon its installation, has the additional private FQDN <SERVICE>.local.<DOMAIN> and the public FQDN <SERVICE>.<DOMAIN>.

Likewise, a Service with the name default in any given Namespace has the additional private FQDN <NAMESPACE>.local.<DOMAIN> and the public FQDN <NAMESPACE>.<DOMAIN>. The default Service in the default Namespace has the additional private FQDN local.<DOMAIN> and the public FQDN <DOMAIN>.

For example, if the Cluster domain is example.com, the Service webapp.default, which belongs to the default Namespace, additionally has the private FQDN webapp.local.example.com and, when the BeyondCorp mode is enabled, the public FQDN webapp.example.com.

To sum it up, here are the FQDN values for different Service and Namespace names, assuming the Cluster domain example.com:

  • webapp.production
    Private FQDNs: webapp.production.local.example.com
    Public FQDNs: webapp.production.example.com
  • webapp.default
    Private FQDNs: webapp.local.example.com, webapp.default.local.example.com
    Public FQDNs: webapp.example.com, webapp.default.example.com
  • default.webapp
    Private FQDNs: webapp.local.example.com, default.webapp.local.example.com
    Public FQDNs: webapp.example.com, default.webapp.example.com
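For instance, assuming webapp.default above is an HTTP Service with TLS enabled (an assumption for illustration only), a connected User could reach it through either of its private FQDNs:

curl https://webapp.default.local.example.com
curl https://webapp.local.example.com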

Port

The port that the identity-aware proxy implementing the Service listens on can be automatically inferred from the upstream URL or explicitly set. Here are some examples of port numbers inferred from the upstream URL:

Backend URL                                                            Port
http://example.com                                                     80
https://example.com with TLS disabled (read about Service TLS here)    80
https://example.com with TLS enabled                                   443
https://example.com:3000                                               3000
postgres://pg.my-private-network.local                                 5432
ssh://usr-1.users                                                      22

Since there are many L-7 protocols with different default ports, Octelium cannot recognize every possibility on its own, so you might have to provide the port in the URL in the form PROTOCOL://HOSTNAME:PORT (e.g. myprotocol://svc.local:9090) so that the port value can be extracted.

You can also explicitly set the Service's port value to be different from that used by the upstream. Here is an example:

kind: Service
metadata:
  name: ssh1
spec:
  mode: SSH
  config:
    upstream:
      url: ssh://my-ssh.private.local
  port: 2022

Now connected Users can access ssh1 as follows:

ssh <USER>@ssh1 -p 2022

Mode

A Service is implemented by Vigil, an identity-aware proxy (IAP) that is capable of understanding various widely used layer-7 protocols. The application-layer awareness provided by Vigil enables you to control access based on L-7 information (e.g. HTTP headers and paths, Kubernetes verbs and resources, SSH users, etc...), provide secret-less access that eliminates L-7 credentials (read more here), and apply L-7 aware dynamic configuration such as routing to different upstreams and their credentials (read more here); it also provides you with L-7 aware auditing and access logging (read more here). Currently a Service supports the following modes:

  • TCP (Read more here). This should be the fallback mode for a generic TCP-based application-layer protocol that is not supported by default by Vigil, or for when you do not want to benefit from the L-7 aware modes provided by the Service and simply want to treat the traffic as raw generic TCP (see the sketch after this list).
  • UDP (Read more here).
  • HTTP (Read more about the HTTP mode, which can be used for any HTTP-based resources (e.g. APIs), here).
  • SSH (Read more here).
  • POSTGRES (Read more about the PostgreSQL mode here).
  • MYSQL (Read more about the MySQL mode here).
  • KUBERNETES (Read more about the Kubernetes mode here).
  • GRPC (Read more about the gRPC mode here).
  • DNS (Read more about the DNS mode here).
  • WEB (Read more about the Web mode, which can be used for Web-based applications, here).
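As an illustration of the TCP fallback mode mentioned in the first item above, here is a minimal sketch of a Service that simply forwards raw TCP traffic to the generic upstream address used earlier in this guide; the Service name is a hypothetical placeholder:

kind: Service
metadata:
  name: custom1
spec:
  mode: TCP
  config:
    upstream:
      url: tcp://my-custom-app:9090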

TLS

A Service by default listens over plaintext TCP. If you want a Service whose mode supports TLS (e.g. TCP, HTTP, POSTGRES) to be accessed over TLS, then you have to enable TLS as follows:

kind: Service
metadata:
  name: svc1
spec:
  mode: HTTP
  config:
    upstream:
      url: https://nginx.local
  isTLS: true

Now, Users can access the Service svc1 at the private URL https://svc1.local.<DOMAIN>.
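For instance, a connected User could verify access over TLS with a plain HTTPS request (assuming example.com as the Cluster domain):

curl https://svc1.local.example.com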

Deployment

By default, a Service is implemented as a Kubernetes deployment with a single replica. Octelium enables you to effortlessly scale your Services up or down by changing the number of replicas (which translates to deployment replicas), as shown below.

Setting Replicas

You can set replicas for a Service as follows:

kind: Service
metadata:
  name: svc1
spec:
  mode: HTTP
  config:
    upstream:
      url: https://example.com
  deployment:
    replicas: 5

Region

By default, a Service is deployed in the default Region, which represents the initial Kubernetes cluster in the Octelium Cluster. Here is an example of explicitly setting the Region:

kind: Service
metadata:
  name: svc1
spec:
  mode: HTTP
  config:
    upstream:
      url: https://example.com
  region: "aws-eu-1"

Policies

Policies (read more about Policies and access control here) can be created inline and/or attached to Services, where they act as resource-based policies for a certain Service. When a request reaches that Service, all the rules of its inlinePolicies and attached Policies are evaluated.

kind: Service
metadata:
  name: example
spec:
  mode: HTTP
  config:
    upstream:
      url: https://example.com
  authorization:
    policies: ["policy-1", "policy-2"]
    inlinePolicies:
      - spec:
          rules:
            - effect: DENY
              condition:
                match: '"group-1" in ctx.user.spec.groups'