Kubernetes

Octelium supports a Kubernetes mode by setting the Service mode to KUBERNETES. This mode provides:

  • Secretless access for authorized Users to Kubernetes cluster APIs.

  • Application-layer aware access control that lets you, for example, allow certain Users to access certain Kubernetes resources in certain namespaces under certain conditions.

  • Application-layer aware dynamic configuration such as routing to different upstreams and/or credentials (read more here).

  • Application-layer aware visibility and auditing that records the Kubernetes request's resource, verb, apiVersion, namespace, and so on.

note

You can read more about accessing KUBERNETES-based Services via the client-based mode here. You can also read more about the clientless access mode for workloads here.

Secretless Access

To provide secretless access to KUBERNETES Services, you need to set the credentials used to authenticate to the upstream Kubernetes cluster. There are three authentication methods: kubeconfig files, client certificates, and bearer tokens.

This enables authorized Users to access KUBERNETES-based Services without you having to issue and distribute kubeconfigs or access tokens to them.

Kubeconfig

First, you need to create a Secret to store the content of the upstream's kubeconfig (read more here) as follows:

```bash
octeliumctl create secret --file /PATH/TO/KUBECONFIG kubeconfig-k8s1
```

Now, you define your Service as follows:

```yaml
kind: Service
metadata:
  name: k8s1
spec:
  mode: KUBERNETES
  config:
    upstream:
      url: https://k8s-cluster.example.com:6443
    kubernetes:
      kubeconfig:
        fromSecret: kubeconfig-k8s1
```

You can also choose a specific kubeconfig context, if the kubeconfig contains more than one, as follows:

```yaml
kind: Service
metadata:
  name: k8s1
spec:
  mode: KUBERNETES
  config:
    upstream:
      url: https://k8s-cluster.example.com:6443
    kubernetes:
      kubeconfig:
        context: ctx-abcd
        fromSecret: kubeconfig-k8s1
```

Client Certificate

You can also manually set the client certificate credentials without using a kubeconfig. First, you need to create a Secret to store the content of the upstream client's PEM-encoded private key (read more here) as follows:

```bash
octeliumctl create secret --file /PATH/TO/CLIENT_PRIVATE_KEY.PEM k8s-client-cert
```

Now, you define your Service as follows:

```yaml
kind: Service
metadata:
  name: k8s1
spec:
  mode: KUBERNETES
  config:
    upstream:
      url: https://k8s-cluster.example.com:6443
    kubernetes:
      clientCertificate:
        fromSecret: k8s-client-cert
      trustedCAs:
        - |
          -----BEGIN CERTIFICATE-----
          MIIDBTCCAe2gAwIBAgIIf811HQrTMBAwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
          AxMKa3ViZXJuZXRlczAeFw0yNDA4MTkxMTEzNThaFw0zNDA4MTcxMTE4NThaMBUx
          EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
          AoIBAQCg/Bw/rDlvZE6If1aVGfU324owXgLWsaI8MXThedmauhEp1xSPK9gtPaVR
          omG6JwAgyln5YyFWSYDSLn2Pb/jBVPWslkdnTGfd3cDPe22+dpQinF1WsBylNuji
          p00gst4rUl8bsxL/9a+3fcl4FBBUsyc6jQGBzHeH8Ilj+pUizIX1dj3oN97h1qiQ
          VB0OiX/UNn/BePfoIzPBFtAqBQMKosLQ4aoVru0J4xDf4upDAN90bjWYiuxqMhXX
          N49PZVZS8hw0k3zlTjOXxkrcQNCoPOA16a+gHbq8klUVhPoiwEIDpvu9QQ61Ppp/
          gUIG98XviG6uPuRNa11zaRrFGcVJAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
          BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTVR79cj39ezb0jHsS13gPBc1VMzTAV
          BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQA9K9tQlqW4
          vVPdWHb1LngRmvafgtrSN5K8m0yj3cCW3wOAms/zLN7QF6L5x9BwHCIzub3mbJUa
          QG+Hgs071q8UdBjWRtosh30L7LEpV0UZkLNPYjzFtbA+OVZH/htfijfAp68xnQ1E
          E3nZah06C59InOZHC5zbNdybhbiyVIr+0zbGFbHWrA19tIakM5o34uEnC2QEnkQu
          CKu1lpmacjJMiztGgEIq9GSc67hYdrHiy3oThkno/jdeBETCFeWB4TDuUSUIefAp
          4MjEh8mhH3An8HEmXtfpCxXc5HaFf3gavakAzwshe+zBs5L4CS7/IZvoOzW8P3MN
          tGKzn7JqDH2O
          -----END CERTIFICATE-----
```

Bearer Token

You can also manually set a bearer token if the upstream cluster requires one. First, you need to create a Secret to store the content of the upstream's bearer token (read more here) as follows:

```bash
octeliumctl create secret bearer-token-k8s1
```

Now, you define your Service as follows:

```yaml
kind: Service
metadata:
  name: k8s1
spec:
  mode: KUBERNETES
  config:
    upstream:
      url: https://k8s-cluster.example.com:6443
    kubernetes:
      bearerToken:
        fromSecret: bearer-token-k8s1
      trustedCAs:
        - |
          -----BEGIN CERTIFICATE-----
          MIIDBTCCAe2gAwIBAgIIf811HQrTMBAwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
          AxMKa3ViZXJuZXRlczAeFw0yNDA4MTkxMTEzNThaFw0zNDA4MTcxMTE4NThaMBUx
          EzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
          AoIBAQCg/Bw/rDlvZE6If1aVGfU324owXgLWsaI8MXThedmauhEp1xSPK9gtPaVR
          omG6JwAgyln5YyFWSYDSLn2Pb/jBVPWslkdnTGfd3cDPe22+dpQinF1WsBylNuji
          p00gst4rUl8bsxL/9a+3fcl4FBBUsyc6jQGBzHeH8Ilj+pUizIX1dj3oN97h1qiQ
          VB0OiX/UNn/BePfoIzPBFtAqBQMKosLQ4aoVru0J4xDf4upDAN90bjWYiuxqMhXX
          N49PZVZS8hw0k3zlTjOXxkrcQNCoPOA16a+gHbq8klUVhPoiwEIDpvu9QQ61Ppp/
          gUIG98XviG6uPuRNa11zaRrFGcVJAgMBAAGjWTBXMA4GA1UdDwEB/wQEAwICpDAP
          BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTVR79cj39ezb0jHsS13gPBc1VMzTAV
          BgNVHREEDjAMggprdWJlcm5ldGVzMA0GCSqGSIb3DQEBCwUAA4IBAQA9K9tQlqW4
          vVPdWHb1LngRmvafgtrSN5K8m0yj3cCW3wOAms/zLN7QF6L5x9BwHCIzub3mbJUa
          QG+Hgs071q8UdBjWRtosh30L7LEpV0UZkLNPYjzFtbA+OVZH/htfijfAp68xnQ1E
          E3nZah06C59InOZHC5zbNdybhbiyVIr+0zbGFbHWrA19tIakM5o34uEnC2QEnkQu
          CKu1lpmacjJMiztGgEIq9GSc67hYdrHiy3oThkno/jdeBETCFeWB4TDuUSUIefAp
          4MjEh8mhH3An8HEmXtfpCxXc5HaFf3gavakAzwshe+zBs5L4CS7/IZvoOzW8P3MN
          tGKzn7JqDH2O
          -----END CERTIFICATE-----
```

Access Control

You can control access based on the Kubernetes HTTP request information. This information is stored in ctx.request.kubernetes, which contains attributes such as the resource, namespace, verb, API group, API version, and sub-resource, when available. Additionally, all the HTTP-related information exposed in the HTTP mode is also available via ctx.request.kubernetes.http. Here is a detailed example of an inline Policy that controls access based on Kubernetes-specific information:

```yaml
kind: Service
metadata:
  name: svc1
spec:
  mode: KUBERNETES
  config:
    upstream:
      url: https://k8s-cluster.example.com:6443
    kubernetes:
      kubeconfig:
        context: ctx-abcd
        fromSecret: kubeconfig-k8s1
  authorization:
    inlinePolicies:
      - spec:
          rules:
            - effect: ALLOW
              condition:
                any:
                  of:
                    - match: ctx.request.kubernetes.apiGroup in ["k8s.cni.cncf.io", "events.k8s.io"]
                    - match: ctx.request.kubernetes.apiVersion == "v1"
                    - match: ctx.request.kubernetes.verb in ["create", "get", "list", "update", "watch", "patch", "delete"]
                    - match: ctx.request.kubernetes.namespace == "production"
                    - match: ctx.request.kubernetes.resource in ["pods", "services"]
                    - match: ctx.request.kubernetes.name == "pod-abcdef"
                    - match: ctx.request.kubernetes.subresource == "status"
                    - match: ctx.request.kubernetes.http.headers["x-custom-header"] == "this-value"
```
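Rules can also deny. The following is a sketch of a rule that blocks all write verbs against Secret resources; it assumes, beyond what is shown above, that a condition may carry a single match expression directly and that a matching DENY rule takes precedence over ALLOW rules:

```yaml
# Hypothetical sketch: deny write verbs against Secret resources.
# Assumes a bare `match` condition and DENY-over-ALLOW precedence,
# neither of which is demonstrated in this section.
authorization:
  inlinePolicies:
    - spec:
        rules:
          - effect: DENY
            condition:
              match: >-
                ctx.request.kubernetes.resource == "secrets" &&
                ctx.request.kubernetes.verb in ["create", "update", "patch", "delete"]
```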

Visibility

The Service emits access logs in real time to the audit collector. Each log provides Kubernetes application-layer aware information about the request, such as the namespace, resource, sub-resource, API version, and API prefix. All the underlying HTTP-based information is also included. Here is an example:

```json
{
  "apiVersion": "core/v1",
  "kind": "AccessLog",
  "metadata": {
    "id": "d7vr-ndfa-w35tl5g9ft3h3u3dcwtnfpja-wfi9-2d4q",
    "createdAt": "2025-09-10T22:26:17.479478294Z",
    "actorRef": {
      "apiVersion": "core/v1",
      "kind": "Session",
      "uid": "b1bc6aaa-df51-456d-aa37-b77377ea26f0",
      "name": "root-1x09ce",
      "resourceVersion": "019935ba-ee91-766b-a11e-968c67478387"
    },
    "targetRef": {
      "apiVersion": "core/v1",
      "kind": "Service",
      "uid": "75a1a442-2e8b-48ea-9608-fa64cb74b506",
      "name": "k8s1.default",
      "resourceVersion": "019935bc-524e-73dc-8b96-c14b109380fd"
    }
  },
  "entry": {
    "common": {
      "startedAt": "2025-09-10T22:26:17.216993779Z",
      "endedAt": "2025-09-10T22:26:17.479481774Z",
      "status": "ALLOWED",
      "mode": "KUBERNETES",
      "reason": {
        "type": "POLICY_MATCH",
        "details": {
          "policyMatch": {
            "inlinePolicy": {
              "resourceRef": {
                "apiVersion": "core/v1",
                "kind": "Session",
                "uid": "b1bc6aaa-df51-456d-aa37-b77377ea26f0",
                "name": "root-1x09ce",
                "resourceVersion": "019935ba-ee91-766b-a11e-968c67478387"
              },
              "name": "first-session-allow-all"
            }
          }
        }
      },
      "sessionRef": {
        "apiVersion": "core/v1",
        "kind": "Session",
        "uid": "b1bc6aaa-df51-456d-aa37-b77377ea26f0",
        "name": "root-1x09ce",
        "resourceVersion": "019935ba-ee91-766b-a11e-968c67478387"
      },
      "userRef": {
        "apiVersion": "core/v1",
        "kind": "User",
        "uid": "d72a39da-6f1c-43f7-ad75-dbaf76111b10",
        "name": "root",
        "resourceVersion": "019934c7-3d99-7864-8ccf-abe9eadfe023"
      },
      "serviceRef": {
        "apiVersion": "core/v1",
        "kind": "Service",
        "uid": "75a1a442-2e8b-48ea-9608-fa64cb74b506",
        "name": "k8s1.default",
        "resourceVersion": "019935bc-524e-73dc-8b96-c14b109380fd"
      },
      "namespaceRef": {
        "apiVersion": "core/v1",
        "kind": "Namespace",
        "uid": "2073e6f7-24c2-49d2-b0df-e5cd3636d82c",
        "name": "default",
        "resourceVersion": "019934c7-6d7a-73cd-a510-dfadfdfa6682"
      },
      "regionRef": {
        "apiVersion": "core/v1",
        "kind": "Region",
        "uid": "85477de2-67d3-48ed-bda7-6c914489badf",
        "name": "default"
      },
      "isPublic": true
    },
    "info": {
      "kubernetes": {
        "http": {
          "request": {
            "path": "/api/v1/namespaces/octelium/pods",
            "userAgent": "kubectl/v1.33.4 (linux/amd64) kubernetes/74cdb42",
            "method": "GET",
            "uri": "/api/v1/namespaces/octelium/pods?limit=500"
          },
          "response": {
            "code": 200,
            "bodyBytes": "25575",
            "contentType": "application/json"
          },
          "httpVersion": "HTTP11"
        },
        "verb": "list",
        "apiPrefix": "api",
        "apiVersion": "v1",
        "namespace": "octelium",
        "resource": "pods"
      }
    }
  }
}
```
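Because access logs are structured JSON, they are straightforward to post-process with standard tooling. As an illustration (the use of jq here is an assumption; nothing in this section mandates a particular tool), the following extracts a one-line summary of each AccessLog:

```shell
# Summarize an AccessLog JSON document from stdin as
# "<user> <verb> <resource> <namespace>" using jq.
jq -r '[.entry.common.userRef.name,
        .entry.info.kubernetes.verb,
        .entry.info.kubernetes.resource,
        .entry.info.kubernetes.namespace] | join(" ")'
```

For the example log above, this would print `root list pods octelium`.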