Feature Description
AuthBridge's outbound token exchange routes are matched against the HTTP Host header using static glob patterns in authproxy-routes. An AI agent with kubectl access can trivially bypass this by resolving a Service's ClusterIP and curling it directly.
This is demonstrated in https://github.com/usize/kagenti-ctf/blob/main/demos/leaked-access-token/reports/2026-04-05-run1/REPORT.md
Claude resolved the document-service ClusterIP via kubectl get svc -o json | jq '.spec.clusterIP' and exfiltrated HR documents in 6 steps. The ext-proc logged:
No route for host "10.96.218.95:8081", default policy is passthrough — skipping token exchange.
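The failure mode can be sketched with a minimal glob-based host matcher (the pattern and route shape here are illustrative, not AuthBridge's actual route format):

```python
from fnmatch import fnmatch

# Hypothetical static route table keyed by glob patterns, mirroring the
# authproxy-routes approach described above.
ROUTES = {
    "document-service.*": {"audience": "document-service"},
}

def match_route(host: str):
    """Return the first route whose glob matches the Host header, else None."""
    for pattern, route in ROUTES.items():
        if fnmatch(host, pattern):
            return route
    return None  # no match: default policy passthrough, token exchange skipped

# The Service FQDN matches, but the raw ClusterIP sails past every pattern:
assert match_route("document-service.ctf-demo.svc.cluster.local") is not None
assert match_route("10.96.218.95:8081") is None
```

Any addressable form the patterns don't anticipate (ClusterIP, Endpoint IP, short name) falls through to the default policy, which is exactly the bypass the report demonstrates.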
The current workaround is a "**" catch-all route, which exchanges tokens for all outbound traffic. This works but is blunt: it hardcodes a single target audience for every destination and can't distinguish between multiple target services with different audiences.
Report with fix is here
Proposed Solution
Add a Kubernetes-native control plane that resolves routes from cluster state instead of static hostname patterns.
Proposed CRD:
apiVersion: kagenti.io/v1alpha1
kind: AuthBridgePolicy
metadata:
  name: document-service-policy
  namespace: ctf-claude
spec:
  agentSelector:
    matchLabels:
      kagenti.io/type: agent
  targets:
  - service:
      name: document-service
      namespace: ctf-demo
    audience: "document-service"
    scopes: "openid document-service-aud"
  defaultPolicy: passthrough  # or "deny"
A controller watches AuthBridgePolicy resources and the referenced Services, resolves all addressable forms (FQDN, short name, ClusterIP, Endpoint IPs), and writes the computed routes to the authproxy-routes ConfigMap. When a Service is recreated with a new ClusterIP, routes update automatically.
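The resolution step could look roughly like this (a sketch with a simplified Service representation; field names and the exact set of host forms are assumptions, not the controller's actual API):

```python
def resolve_routes(target: dict, service: dict) -> dict:
    """Expand one policy target into a route for every addressable form
    of the referenced Service, so no form bypasses token exchange."""
    name, ns = service["name"], service["namespace"]
    hosts = [
        name,                              # same-namespace short name
        f"{name}.{ns}",                    # cross-namespace short form
        f"{name}.{ns}.svc",
        f"{name}.{ns}.svc.cluster.local",  # FQDN
        service["clusterIP"],              # the bypass vector from the report
    ]
    route = {"audience": target["audience"], "scopes": target["scopes"]}
    return {host: route for host in hosts}

svc = {"name": "document-service", "namespace": "ctf-demo",
       "clusterIP": "10.96.218.95"}
target = {"audience": "document-service",
          "scopes": "openid document-service-aud"}
routes = resolve_routes(target, svc)
assert "10.96.218.95" in routes  # ClusterIP now carries the same policy
```

A real controller would also enumerate Endpoint/EndpointSlice IPs and re-run this on every Service update event, which is what makes the routes track a recreated Service's new ClusterIP. Matching would presumably strip the port before lookup, since the observed Host header was "10.96.218.95:8081".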
Alternative, annotation-based (no CRD):
apiVersion: v1
kind: Service
metadata:
  name: document-service
  annotations:
    authbridge.kagenti.io/audience: "document-service"
    authbridge.kagenti.io/scopes: "openid document-service-aud"
The controller discovers annotated Services and generates routes. Lighter weight, but the policy is scattered across Service annotations rather than centralized.
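The annotation path reduces to a filter over the Service list (annotation keys from the example above; the Service dicts and filtering logic are illustrative):

```python
PREFIX = "authbridge.kagenti.io/"

def routes_from_services(services: list) -> dict:
    """Build routes only for Services that opted in via annotations."""
    routes = {}
    for svc in services:
        anns = svc.get("annotations", {})
        if PREFIX + "audience" not in anns:
            continue  # not annotated: no route generated
        fqdn = f'{svc["name"]}.{svc["namespace"]}.svc.cluster.local'
        routes[fqdn] = {
            "audience": anns[PREFIX + "audience"],
            "scopes": anns.get(PREFIX + "scopes", ""),
        }
    return routes

svcs = [
    {"name": "document-service", "namespace": "ctf-demo",
     "annotations": {PREFIX + "audience": "document-service",
                     PREFIX + "scopes": "openid document-service-aud"}},
    {"name": "plain-svc", "namespace": "ctf-demo"},  # unannotated: skipped
]
assert list(routes_from_services(svcs)) == [
    "document-service.ctf-demo.svc.cluster.local"]
```

(The same ClusterIP/short-name expansion as in the CRD approach would still be needed; only the FQDN is shown here for brevity.)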
Both approaches require the ext-proc to support route reload (file watch or config push).
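The file-watch variant can be approximated with an mtime check on the mounted ConfigMap file (a sketch under assumed file layout; a production ext-proc would more likely use inotify or an xDS-style config push):

```python
import json
import os

class RouteTable:
    """Serve routes from a file, reloading whenever its mtime changes.

    Kubernetes updates a mounted ConfigMap by swapping a symlink, which
    changes the observed mtime, so a cheap stat per lookup is enough.
    """
    def __init__(self, path: str):
        self.path = path
        self.mtime = None
        self.routes = {}

    def get(self) -> dict:
        mtime = os.stat(self.path).st_mtime
        if mtime != self.mtime:
            with open(self.path) as f:
                self.routes = json.load(f)  # assumed JSON route format
            self.mtime = mtime
        return self.routes
```

The ext-proc would consult `RouteTable.get()` on each outbound request, picking up controller-written route updates without a restart.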
Want to contribute?
Additional Context
No response