Run Claude Code against DigitalOcean Gradient AI instead of Anthropic's API.
claudo spins up a local LiteLLM proxy that bridges Claude Code's native Anthropic API format to DO's OpenAI-compatible endpoint — so you get the full Claude Code experience billed through your DigitalOcean account.
```
┌────────────┐             ┌─────────────┐           ┌────────────┐
│ Claude Code│ ──────────▶ │   LiteLLM   │ ────────▶ │  DO Grad.  │
│   (CLI)    │  Anthropic  │    Proxy    │  OpenAI   │   AI API   │
│            │   format    │ (localhost) │  format   │            │
└────────────┘             └─────────────┘           └────────────┘
```
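The diagram above comes down to one translation step: an Anthropic-style Messages request is reshaped into an OpenAI-style chat request. A minimal Python sketch of that mapping (illustrative only; the real translation is done by LiteLLM and also covers tool use, streaming, stop sequences, and content blocks):

```python
def anthropic_to_openai(req: dict) -> dict:
    """Reshape a minimal Anthropic Messages request into OpenAI chat format.

    Sketch only: LiteLLM performs the full translation, including tool
    use, streaming, and structured content blocks.
    """
    messages = []
    # Anthropic carries the system prompt in a top-level field;
    # OpenAI expects it as the first message in the list.
    if "system" in req:
        messages.append({"role": "system", "content": req["system"]})
    messages.extend({"role": m["role"], "content": m["content"]}
                    for m in req["messages"])
    return {
        "model": req["model"],
        "messages": messages,
        "max_tokens": req.get("max_tokens", 1024),
    }
```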
- Node.js ≥ 18
- Python 3 (for LiteLLM)
- Claude Code — `npm install -g @anthropic-ai/claude-code`
- A DigitalOcean Gradient AI API key — get one from the DO control panel

Note: LiteLLM is installed automatically into a local virtualenv (`~/.config/claudo/venv`) on first run. You do not need to install it manually.
```bash
npm install -g claudo

# First-time setup — enter your DO Gradient AI API key
claudo setup

# Start an interactive Claude session via DO
claudo
```

| Command | Description |
|---|---|
| `claudo` | Start interactive Claude session via DO |
| `claudo <claude args>` | Pass arguments to `claude` (e.g. `claudo -p "hello"`) |
| `claudo setup` | Configure API key and discover models |
| `claudo status` | Show running proxy instances |
| `claudo stop-all` | Kill all proxy instances |
| `claudo models` | Show discovered model mappings |
| `claudo version` | Show version |
| `claudo help` | Show help |
```bash
# One-shot prompt
claudo -p "explain this codebase"

# Use a specific model
claudo --model claude-sonnet-4-5 -p "hello"

# Check what models are available on DO
claudo models

# See running proxy instances
claudo status
```

Under the hood, claudo:

- Loads your DO API key from `~/.config/claudo/config.env`
- Fetches available Claude models from DO's `/v1/models` endpoint (cached 24h)
- Generates a LiteLLM config that maps Claude Code model names to DO model IDs
- Starts a LiteLLM proxy on a free port in the 4100–4200 range
- Sets `ANTHROPIC_BASE_URL` to point at the local proxy
- Launches `claude` with your arguments
- Shuts down the proxy cleanly on exit
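The generated config follows LiteLLM's standard `model_list` schema. A hypothetical example of what such a mapping could look like (the DO model ID, endpoint URL, and environment variable name below are assumptions for illustration, not the tool's actual output):

```yaml
# Hypothetical ~/.config/claudo/litellm_config.yaml (illustrative values)
model_list:
  - model_name: claude-sonnet-4-5            # name Claude Code requests
    litellm_params:
      model: openai/claude-3.5-sonnet        # DO model ID (assumption)
      api_base: https://example-do-endpoint/v1  # DO Gradient AI endpoint (assumption)
      api_key: os.environ/DO_API_KEY         # read from the environment (assumption)
```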
Multiple claudo sessions can run concurrently — each gets its own port.
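Per-session port selection can be sketched as a simple scan for a free TCP port in the 4100–4200 range (a hypothetical illustration; `find_free_port` is not part of claudo's public interface):

```python
import socket

def find_free_port(start: int = 4100, end: int = 4200) -> int:
    """Return the first TCP port in [start, end) that can be bound locally."""
    for port in range(start, end):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                # Binding succeeds only if no other proxy holds the port.
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue  # port in use, try the next one
    raise RuntimeError("port range exhausted; run `claudo stop-all`")
```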
| File | Purpose |
|---|---|
| `~/.config/claudo/config.env` | API key (chmod 600) |
| `~/.config/claudo/models_cache.json` | Cached model list (24h TTL) |
| `~/.config/claudo/litellm_config.yaml` | Auto-generated LiteLLM config |
| `~/.config/claudo/venv/` | LiteLLM virtualenv |
| `~/.config/claudo/logs/` | Per-instance proxy logs |
- **Proxy fails to start** — check `~/.config/claudo/logs/proxy-<port>.log` for LiteLLM errors.
- **Models not found** — run `claudo setup` to refresh the model cache.
- **Port range exhausted** — run `claudo stop-all` to clean up stale instances.
- **Python errors on 3.14+** — the script patches uvicorn's uvloop dependency automatically; if you hit issues, ensure your venv is up to date by deleting `~/.config/claudo/venv` and re-running `claudo setup`.
MIT