Managed Containers

In addition to serving internal and private resources behind NAT from anywhere, as well as protected public resources such as SaaS APIs and databases, Octelium Clusters enable you to seamlessly deploy, manage, and scale your containerized applications and serve them as Services. The Cluster reuses the underlying Kubernetes infrastructure that runs it and deploys your container images on top of it. Here is a simple example of running an nginx image as an Octelium Service:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
```
Note: Managed containers are automatically scheduled over data-plane nodes (read more here).
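To try the example above, one way is to save the manifest to a file and apply it with the `octeliumctl` CLI. This is a sketch, assuming you have `octeliumctl` installed and an active session against your Cluster; the file name `nginx.yaml` is arbitrary:

```shell
# Write the Service manifest to a file (the name is arbitrary).
cat > nginx.yaml <<'EOF'
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
EOF

# Apply it against the Cluster (requires octeliumctl and a logged-in session):
# octeliumctl apply nginx.yaml
```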

Environment Variables

You can also add your environment variables as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        env:
          - name: KEY1
            value: VALUE1
          - name: KEY2
            value: VALUE2
```

Using a Secret as an Environment Variable

You can also use an Octelium Secret (read more here) as a value for your environment variable. The Cluster automatically creates a Kubernetes secret for every Octelium Secret upon the deployment of the managed container. Here is an example:

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: ghcr.io/org/my-api:latest
        env:
          - name: KEY1
            value: VALUE1
          - name: VAULT_ACCESS_TOKEN
            fromSecret: vault-access-token
```

Using a Kubernetes Secret as an Environment Variable

You can also use an existing Kubernetes secret in the `octelium` Kubernetes namespace as an environment variable value. Here is an example:

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: ghcr.io/org/my-api:latest
        env:
          - name: VAULT_ACCESS_TOKEN
            kubernetesSecretRef:
              name: my-k8s-secret
              key: data
```

Command and Arguments

You can override the container's entrypoint command via the `command` array as follows:

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: my-api:latest
        command:
          - /bin/another-binary
```

You can also add command-line arguments via the `args` array as follows:

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: my-api:latest
        args:
          - --arg1=value1
          - --arg2=value2
          - --arg3
          - val3
```
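The two fields compose the same way they do for Kubernetes containers: `command` replaces the image's entrypoint, while `args` supplies its arguments. A sketch combining both (the binary name and flags here are hypothetical):

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: my-api:latest
        command:
          - /bin/another-binary
        args:
          - --arg1=value1
          - --arg2=value2
```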

Scaling

You can easily scale up/down your upstream containerized application by controlling the number of replicas as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        replicas: 4
```

Private Registry

You can deploy container images from private registries. In that case, all you have to do is add the registry credentials as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: ghcr.io/org/image:tag
        credentials:
          usernamePassword:
            username: username
            password:
              fromSecret: secret-1
```
Note: You can read more about Secret management here.

Resource Limits

You can restrict how much memory and CPU your application can consume by enforcing resource limits. Here is an example:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        resourceLimit:
          cpu:
            millicores: 1500
          memory:
            megabytes: 512
```

Extended Resources

You can also set an extended resource limit to request a custom resource recognized by your Kubernetes cluster. For example, this can be useful to request and schedule a GPU for your managed container (read more here). Here is an example:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        resourceLimit:
          cpu:
            millicores: 1500
          memory:
            megabytes: 512
          ext:
            nvidia.com/gpu: "1"
```

Security Context

You can also set a few security-related configurations for your managed container, such as enforcing a read-only root filesystem and setting the user ID. Here is an example:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1000
```

You can also add and drop Linux capabilities as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "SYS_TIME"]
            drop: ["ALL"]
```

Volumes

You can also define and mount Kubernetes volumes (read more here) inside your managed containers.

Persistent Volume Claim

You can use persistent volume claims (read more here) as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        volumes:
          - name: volume-1
            persistentVolumeClaim:
              name: pvc-1
        volumeMounts:
          - name: volume-1
            mountPath: /mnt/volume-1
            readOnly: true
```

Empty Dir

You can use emptyDir volumes (read more here) as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        volumes:
          - name: volume-1
            emptyDir:
              sizeLimitMegabytes: 1000
        volumeMounts:
          - name: volume-1
            mountPath: /mnt/volume-1
```

Readiness and Liveness Probes

Octelium supports Kubernetes readiness and liveness probes (read more here) via `httpGet`, `tcpSocket`, and `grpc` probes. Here is an example using an `httpGet` probe:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        livenessProbe:
          httpGet:
            port: 8080
            path: /healthz
          initialDelaySeconds: 20
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            port: 8080
            path: /healthz
          initialDelaySeconds: 20
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
```

You can also use a `tcpSocket` probe as follows:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
```

For gRPC-based Services, you might want to use `grpc` probes as follows:

```yaml
kind: Service
metadata:
  name: my-grpc-api
spec:
  mode: GRPC
  config:
    upstream:
      container:
        port: 8080
        image: ghcr.io/org/grpc-api:latest
        livenessProbe:
          grpc:
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
```

Dynamic Configuration

You can also deploy more than one container simultaneously and dynamically route among them based on identity as well as context (read more in detail about dynamic configuration here). Here is an example:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  dynamicConfig:
    configs:
      - name: c1
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v1
      - name: c2
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v2
    rules:
      - condition:
          match: ctx.request.http.path.startsWith("/v1")
        configName: c1
      - condition:
          match: ctx.request.http.path.startsWith("/v2")
        configName: c2
```
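Rules can also key off identity rather than request context. The following sketch assumes a CEL input such as `ctx.user.spec.groups` and a catch-all `matchAny` rule; the group name `beta-testers` and both config names are hypothetical, so check the dynamic configuration reference for the condition inputs available in your Cluster:

```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  dynamicConfig:
    configs:
      - name: stable
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v1
      - name: beta
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v2
    rules:
      # Users in the beta-testers group are routed to the beta container.
      - condition:
          match: '"beta-testers" in ctx.user.spec.groups'
        configName: beta
      # Everyone else falls through to the stable container.
      - condition:
          matchAny: true
        configName: stable
```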