Description
When running Obot in Docker mode with OBOT_CONTAINER_ENV=true, MCP shim containers receive OAuth, audit log, and API key auth URLs pointing to the public OBOT_SERVER_HOSTNAME instead of Docker network-internal IPs. This causes all MCP container-to-Obot communication to hairpin through external infrastructure (reverse proxies, WAFs, etc.) instead of staying on the Docker network.
Expected behavior
Shim container env vars like NANOBOT_RUN_OAUTH_TOKEN_URL, NANOBOT_RUN_AUDIT_LOG_SEND_URL, etc. should contain Docker network-internal URLs (e.g., http://172.21.0.5:8080/oauth/token).
Actual behavior
All shim container callback URLs use the public hostname:
NANOBOT_RUN_OAUTH_JWKSURL=https://obot.example.com/oauth/jwks.json
NANOBOT_RUN_OAUTH_TOKEN_URL=https://obot.example.com/oauth/token
NANOBOT_RUN_OAUTH_AUTHORIZE_URL=https://obot.example.com/oauth/authorize
NANOBOT_RUN_APIKEY_AUTH_WEBHOOK_URL=https://obot.example.com/api/api-keys/auth
NANOBOT_RUN_AUDIT_LOG_SEND_URL=https://obot.example.com/api/mcp-audit-logs
NANOBOT_RUN_TRUSTED_ISSUER=https://obot.example.com
Root cause
transformObotHostname() in pkg/mcp/docker.go matches ^http://localhost(:\d+)? and rewrites to the detected Docker host IP. However, the server endpoint fields (TokenExchangeEndpoint, AuditLogEndpoint, JWKSEndpoint, AuthorizeEndpoint, Issuer) are populated from OBOT_SERVER_HOSTNAME (e.g., https://obot.example.com) rather than from InternalServerURL (http://localhost:8080).
Since the URLs are already public HTTPS URLs when transformObotHostname() runs, the ^http://localhost regex never matches, and the function is a no-op.
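The no-op can be reproduced with a minimal sketch of the matching logic (a hypothetical reimplementation for illustration only; the real transformObotHostname() in pkg/mcp/docker.go may differ in detail):

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative reimplementation of the logic described above; names and
// behavior are assumptions based on this report, not the actual source.
var localhostRe = regexp.MustCompile(`^http://localhost(:\d+)?`)

// transformObotHostname rewrites http://localhost[:port] URLs to the
// detected Docker host IP, preserving the port and path.
func transformObotHostname(u, dockerHostIP string) string {
	if !localhostRe.MatchString(u) {
		return u // public HTTPS URLs never match, so this is a no-op
	}
	return localhostRe.ReplaceAllString(u, "http://"+dockerHostIP+"$1")
}

func main() {
	ip := "172.21.0.5"
	// Internal URL: rewritten to the Docker network IP as intended.
	fmt.Println(transformObotHostname("http://localhost:8080/oauth/token", ip))
	// Public URL (the buggy case): passes through unchanged.
	fmt.Println(transformObotHostname("https://obot.example.com/oauth/token", ip))
}
```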
The call chain:
1. ensureDeployment() (~line 294) calls transformObotHostname() on server.TokenExchangeEndpoint, server.AuditLogEndpoint, and server.JWKSEndpoint.
2. These fields already contain https://obot.example.com/... (from OBOT_SERVER_HOSTNAME).
3. transformObotHostname() checks for ^http://localhost — no match, so each URL is returned unchanged.
4. createAndStartContainer() (~line 828) sets the shim env vars from these unchanged public URLs.
Additionally, server.AuthorizeEndpoint and server.Issuer (standalone) are never passed through transformObotHostname() at all.
Impact
In environments where Obot is behind a reverse proxy with security middleware (CrowdSec, fail2ban, etc.), hairpinning traffic from MCP containers can:
- Overwhelm security middleware: Under memory pressure, failing MCP sessions retry rapidly, flooding the reverse proxy with WAF checks
- Trigger cascade failures: If the security middleware operates fail-closed, timeouts cause 403 for ALL traffic — including legitimate external requests
- Add unnecessary latency: Every MCP auth/audit call takes a round trip through DNS, TLS, reverse proxy, and security middleware instead of a direct Docker network hop
In our environment, this caused a complete outage of all reverse-proxied services when the host running Obot became resource-constrained and the MCP retry storm overwhelmed CrowdSec.
Environment
- Obot: ghcr.io/obot-platform/obot:latest
- Docker: 28.5.1, Compose v2.40.0
- OBOT_CONTAINER_ENV=true
- OBOT_SERVER_HOSTNAME set to public domain (required for Google OAuth + SSL via Cloudflare)
Suggested fix
Populate endpoint fields using InternalServerURL (http://localhost:8080) so transformObotHostname() can match and rewrite to the Docker network IP. Use the public OBOT_SERVER_HOSTNAME only for browser-facing URLs (NANOBOT_RUN_TRUSTED_ISSUER, NANOBOT_RUN_TRUSTED_AUDIENCES, OAuth redirects) that must match the SSL certificate and OAuth provider config.
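The split might look roughly like the sketch below. Struct and field names follow this report, and the decision to keep AuthorizeEndpoint on the public hostname (since the user's browser follows it) is an assumption; the actual types and wiring in pkg/mcp will differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// Same illustrative rewrite helper as described in the root cause above.
var localhostRe = regexp.MustCompile(`^http://localhost(:\d+)?`)

func transformObotHostname(u, dockerHostIP string) string {
	return localhostRe.ReplaceAllString(u, "http://"+dockerHostIP+"$1")
}

// Hypothetical container for the endpoint fields named in this report.
type serverEndpoints struct {
	TokenExchangeEndpoint string
	AuditLogEndpoint      string
	JWKSEndpoint          string
	AuthorizeEndpoint     string
	Issuer                string
}

func main() {
	internal := "http://localhost:8080"  // InternalServerURL
	public := "https://obot.example.com" // OBOT_SERVER_HOSTNAME
	dockerIP := "172.21.0.5"

	s := serverEndpoints{
		// Container-to-Obot traffic: build from the internal URL so the
		// localhost rewrite applies and requests stay on the Docker network.
		TokenExchangeEndpoint: transformObotHostname(internal+"/oauth/token", dockerIP),
		AuditLogEndpoint:      transformObotHostname(internal+"/api/mcp-audit-logs", dockerIP),
		JWKSEndpoint:          transformObotHostname(internal+"/oauth/jwks.json", dockerIP),
		// Browser-facing values: keep the public hostname so they match the
		// SSL certificate and the OAuth provider configuration.
		AuthorizeEndpoint: public + "/oauth/authorize",
		Issuer:            public,
	}

	fmt.Println(s.TokenExchangeEndpoint) // http://172.21.0.5:8080/oauth/token
	fmt.Println(s.Issuer)                // https://obot.example.com
}
```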
Workaround
Remove security middleware from the reverse proxy route serving OBOT_SERVER_HOSTNAME and restrict to internal IPs via allowlist.
Related