In addition to serving internal and private resources behind NAT from anywhere, as well as protected public resources such as SaaS APIs and databases, Octelium Clusters enable you to seamlessly deploy, manage, and scale your containerized applications and serve them as Services. The Cluster reuses the underlying Kubernetes infrastructure that runs Octelium itself to deploy your container images on top of it. Here is a simple example of running an nginx image as an Octelium Service:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
```
Managed containers are automatically scheduled onto data-plane nodes (read more here).
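Deploying a managed container is then just a matter of applying the manifest. Here is a minimal sketch of typical usage, assuming the manifest above is saved as `nginx.yaml` (the filename is hypothetical, and the exact flags may vary across `octeliumctl` versions):

```shell
# Apply the Service manifest against the Cluster
octeliumctl apply -f nginx.yaml
```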
Environment Variables
You can also add your environment variables as follows:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        env:
          - name: KEY1
            value: VALUE1
          - name: KEY2
            value: VALUE2
```
Command and Arguments
You can override the container's entrypoint command via the command array as follows:
```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: my-api:latest
        command:
          - /bin/another-binary
```
You can also pass any command-line arguments via the args array as follows:
```yaml
kind: Service
metadata:
  name: my-api
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: my-api:latest
        args:
          - --arg1=value1
          - --arg2=value2
          - --arg3
          - val3
```
Scaling
You can easily scale your upstream containerized application up or down by controlling the number of replicas as follows:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        replicas: 4
```
Private Registry
You can deploy container images from private registries. In that case, all you have to do is add the registry credentials as follows:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: ghcr.io/org/image:tag
        credentials:
          usernamePassword:
            username: username
            password:
              fromSecret: secret-1
```
You can read more about Secret management here.
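Note that the `secret-1` Secret referenced above must already exist in the Cluster. As a hedged sketch, Secrets are typically created via `octeliumctl`; the exact subcommand shape is an assumption here and may differ by version, so verify it against the Secret management docs:

```shell
# Create the Secret; you are then prompted to enter its value
octeliumctl create secret secret-1
```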
Resource Limits
You can restrict how much memory and CPU your application can consume by enforcing resource limits. Here is an example:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        resourceLimit:
          cpu:
            millicores: 1500
          memory:
            megabytes: 512
```
Extended Resources
You can also set an extended resource limit to request custom resources recognized by your Kubernetes cluster. For example, this can be useful to request and schedule a GPU for your managed container (read more here). Here is an example:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        resourceLimit:
          cpu:
            millicores: 1500
          memory:
            megabytes: 512
          ext:
            nvidia.com/gpu: "1"
```
Security Context
You can also set a few security-related configurations for your managed container, such as enforcing a read-only root filesystem and setting the user ID. Here is an example:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  config:
    upstream:
      container:
        port: 80
        image: nginx:latest
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1000
```
Dynamic Configuration
You can also deploy more than one container simultaneously and dynamically route among them depending on identity as well as context (read more in detail about dynamic configuration here). Here is an example:
```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  dynamicConfig:
    configs:
      - name: c1
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v1
      - name: c2
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v2
    rules:
      - condition:
          match: ctx.request.http.path.startsWith("/v1")
        configName: c1
      - condition:
          match: ctx.request.http.path.startsWith("/v2")
        configName: c2
```
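Beyond request context, rules can also match on identity, for instance to route a subset of Users to a canary build. The sketch below assumes that condition expressions can reference the connecting User's Groups via `ctx.user.spec.groups` and that a catch-all `matchAny` condition is available; both are assumptions about the condition API, so verify them against the dynamic configuration docs:

```yaml
kind: Service
metadata:
  name: nginx
spec:
  mode: HTTP
  dynamicConfig:
    configs:
      - name: stable
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v1
      - name: canary
        upstream:
          container:
            port: 80
            image: ghcr.io/org/image:v2
    rules:
      # Route members of the "dev" Group to the canary container
      - condition:
          match: '"dev" in ctx.user.spec.groups'
        configName: canary
      # Everyone else gets the stable container
      - condition:
          matchAny: true
        configName: stable
```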