Self-Hosted, Open Source MCP Infrastructure

Octelium provides a complete, free and open source, self-hosted infrastructure for building your MCP-based architectures and gateways. It provides secure access and deployment, authentication and unified identity management, L7-aware per-request authorization, and visibility.

You can develop your MCP servers over the streamable HTTP/SSE transport and run them anywhere, even behind NAT. Here is a simple example of a my-mcp Service whose upstream is running at http://localhost:8787 and is served remotely by a connected octelium client of the User mcp-01 (read more about serving Services via connected Users here):

kind: Service
metadata:
  name: my-mcp
spec:
  port: 8080
  mode: HTTP
  isPublic: true
  config:
    upstream:
      url: http://localhost:8787
      user: mcp-01

To serve the MCP server, the User mcp-01 needs to explicitly serve that Service via the --serve flag (read more here) as follows:

octelium connect --serve my-mcp
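For illustration, a toy upstream for the Service above could look like the following. This is a hand-rolled sketch of a tiny JSON-RPC endpoint listening on port 8787, covering only a minimal subset of MCP's streamable HTTP transport; a real MCP server would typically be built with an MCP SDK, and the `echo` tool here is purely hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_rpc(payload: dict) -> dict:
    """Dispatch a minimal subset of MCP JSON-RPC methods (illustrative only)."""
    method = payload.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": "echo", "description": "Echo back text"}]}
    elif method == "tools/call":
        args = payload.get("params", {}).get("arguments", {})
        result = {"content": [{"type": "text", "text": args.get("text", "")}]}
    else:
        return {"jsonrpc": "2.0", "id": payload.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": payload.get("id"), "result": result}

class MCPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        resp = json.dumps(handle_rpc(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

def serve(port: int = 8787) -> None:
    # Run the upstream on the local address referenced by the Service above;
    # the connected octelium client then exposes it to the Cluster.
    HTTPServer(("127.0.0.1", port), MCPHandler).serve_forever()
```

Once `serve()` is running locally, the `octelium connect --serve my-mcp` command above is what makes it reachable as the Service's upstream.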

You can also deploy and scale your containerized MCP server and serve it as a Service by reusing the underlying Kubernetes infrastructure that runs the Octelium Cluster (read more about managed containers here), including images pulled from private container registries. Here is an example:

kind: Service
metadata:
  name: my-mcp
spec:
  port: 8787
  mode: HTTP
  isPublic: true
  config:
    upstream:
      container:
        port: 11434
        image: <MY_REGISTRY>/<MY_ORG>/<MY_MCP_IMAGE:version>
        credentials:
          usernamePassword:
            username: <USER>
            password:
              fromSecret: registry-token
        resourceLimit:
          cpu:
            millicores: 1000
          memory:
            megabytes: 2000

You can now create the Service by applying its configuration as follows (read more here):

octeliumctl apply /PATH/TO/SERVICE.YAML
NOTE

You might also want to take a look at Namespaces (read more here), which let you organize your MCP server Services, affect their hostnames, and apply access control to a whole set of Services that share a certain purpose or functionality according to your needs.
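For instance, a Namespace for MCP Services might be declared as follows; this is a sketch, and the `mcp` name is just an illustration (Services are then attached to it through their names, per the linked Namespaces documentation):

```yaml
kind: Namespace
metadata:
  name: mcp
```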

Octelium also supports secretless access for Users to public MCP servers that are protected by standard bearer access tokens, basic authentication, API keys set in custom headers, or the OAuth2 client credentials flow. You can read more here.
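As a rough sketch, a Service fronting such a protected upstream could inject a bearer token stored as a Secret so that Users never handle the credential themselves. The field names below follow the general shape of Octelium's HTTP upstream configuration but are an assumption on our part, as are the `remote-mcp` Service name and `mcp-upstream-token` Secret name; consult the linked secretless-access documentation for the exact schema:

```yaml
kind: Service
metadata:
  name: remote-mcp
spec:
  port: 8080
  mode: HTTP
  config:
    upstream:
      url: https://mcp.example.com
    http:
      auth:
        bearer:
          fromSecret: mcp-upstream-token
```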

Now we move on to the client side of MCP. You can develop your MCP clients in any language and connect to the MCP server Service privately over the octelium client (read more here). Alternatively, MCP clients can use the standard OAuth2 client credentials flow to obtain a bearer access token and access the MCP server Service publicly (read more about the public client-less access mode here).
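Sketched in Python, the public client-side flow amounts to two plain HTTP exchanges: a standard OAuth2 client credentials grant to obtain a token, then a JSON-RPC POST carrying it as a bearer header. The URLs and credentials below are placeholders, and this is a minimal stdlib-only illustration rather than a full MCP client:

```python
import json
import urllib.parse
import urllib.request

def build_tool_call(req_id: int, tool: str, arguments: dict) -> dict:
    """Build an MCP tools/call JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": tool, "arguments": arguments}}

def fetch_token(token_url: str, client_id: str, client_secret: str) -> str:
    """OAuth2 client credentials grant: exchange credentials for a bearer token."""
    data = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    with urllib.request.urlopen(urllib.request.Request(token_url, data=data)) as r:
        return json.load(r)["access_token"]

def call_tool(mcp_url: str, token: str, req: dict) -> dict:
    """POST the JSON-RPC request to the public MCP Service with the bearer token."""
    http_req = urllib.request.Request(
        mcp_url, data=json.dumps(req).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(http_req) as r:
        return json.load(r)
```

With the private mode over the octelium client, `call_tool` would instead target the Service's private address and the token exchange is unnecessary.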

Octelium's application-layer (L7) awareness seamlessly enables you to control access based on HTTP request paths, methods and, most importantly for our MCP use case, the JSON body of requests. Here is an example:

kind: Service
metadata:
  name: my-mcp
spec:
  port: 8080
  mode: HTTP
  isPublic: true
  config:
    upstream:
      url: https://mcp.example.com
    http:
      enableRequestBuffering: true
      body:
        mode: JSON
        maxRequestSize: 100000
  authorization:
    inlinePolicies:
      - spec:
          rules:
            - effect: ALLOW
              condition:
                all:
                  of:
                    - match: ctx.request.http.bodyMap.jsonrpc == "2.0"
                    - match: ctx.request.http.bodyMap.method == "tools/call"
                    - match: ctx.request.http.bodyMap.params.name in ["openFile", "readFile"]
                    - match: ctx.request.http.bodyMap.params.arguments.filePath.startsWith("/bin")
                    - match: ctx.request.http.bodyMap.params.arguments.count < 1000
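To make the rule concrete, the five `match` conditions above translate roughly into the following predicate over the parsed request body. This is a plain-Python sketch of the CEL semantics for illustration, not how Octelium evaluates policies internally:

```python
def is_allowed(body: dict) -> bool:
    """Mirror the ALLOW rule: every condition under `all.of` must hold."""
    try:
        params = body["params"]
        args = params["arguments"]
        return (
            body["jsonrpc"] == "2.0"
            and body["method"] == "tools/call"
            and params["name"] in ("openFile", "readFile")
            and args["filePath"].startswith("/bin")
            and args["count"] < 1000
        )
    except (KeyError, TypeError):
        # A missing field means the conditions cannot all match.
        return False
```

So a `tools/call` of `readFile` on a path under `/bin` with a small `count` is allowed, while any other tool name, path prefix, or an oversized `count` is denied.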

Here are a few more features that you might be interested in:

  • Request/response header manipulation (read more here).
  • Application layer-aware ABAC access control via policy-as-code using CEL and Open Policy Agent (read more here).
  • Exposing the API publicly for anonymous access (read more here).