get $0 sota llm model APIs via an OpenAI/Ollama-compatible server.
> **Important**: at the moment only the app.outlier.ai/playground is supported, so you need to go through the annoying process of making an outlier/scaleai account to use the playground.
prerequisites:

you have two options to get started: the second option (cloning the repo) is easier, but the first is more lightweight.
**option 1 (lightweight):**

- make a `docker-compose.yml` file (copy the `docker-compose.public.yml` content) and modify the volume paths to point to your local chrome executable, chrome profile, and data directories
- make a `.env` file:

  ```env
  # security
  OAI_API_KEY=

  # ports
  wormhole_port=8765
  chrome_debug_port=9222
  proxy_port=8766
  oai_port=11434
  ```
- run your local chrome, log into outlier, and exit; then run chrome in headless mode to convert your profile, and exit:

  ```shell
  /path/to/chrome --user-data-dir=/path/to/chrome-profile --headless --remote-debugging-port=9222
  ```
- point the docker-compose file to the chrome executable and (recommended) copy the profile folder to the location you specified in the `docker-compose.yml`
- start the services:

  ```shell
  docker compose up --build -d
  ```
**option 2 (clone the repo):**

- clone this repository, rename `.env.example` to `.env`, and replace the content of `.env` with your data
- run `get_session` to log into Outlier and store your session:
  - with `python` (activate the venv first or install the dependencies):
    - run `python3 scripts/get_session.py --login`, log into Outlier.ai when the browser opens, then close it
    - run `python3 scripts/get_session.py --headless` to convert your session
- start the services:

  ```shell
  just build
  just start
  ```
- point any OpenAI-compatible client to `http://localhost:11434`
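for a quick sanity check without installing any SDK, here is a minimal sketch using only Python's standard library. the model id `gpt-5-chat` is taken from the editor config example later in this README, and `outlier` is an arbitrary placeholder API key (any non-empty string):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request against the local endpoint.
# "gpt-5-chat" is one of the model ids used elsewhere in this README;
# "outlier" is a placeholder key (any non-empty string works).
payload = {
    "model": "gpt-5-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer outlier",
    },
)

# With the services running, send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```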
- download the `johnny-zhao.oai-compatible-copilot` extension and add these to `.vscode/settings.json`:

  ```jsonc
  "oaicopilot.baseUrl": "http://localhost:11434",
  "oaicopilot.retry": {
    "enabled": true,
    "max_attempts": 3,
    "interval_ms": 1000
  },
  "oaicopilot.models": [
    {
      // all the models from outlier
      "id": "gpt-5-chat",
      "name": "GPT-5 Chat",
      "owned_by": "openai"
    }
    // ... rest of the models
  ]
  ```
- in Cursor settings, configure the API endpoint:
  - open Settings (`Cmd/Ctrl + ,`)
  - go to `Cursor Settings` > `Models`
  - in the OpenAI API Key section, click "Override OpenAI Base URL (when using key)"
  - toggle it ON and enter `http://localhost:11434/v1`
  - add your OpenAI API key (can be any non-empty string, e.g., `outlier`)
  - the model picker will automatically show the available models from your endpoint
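to see the same model list the picker reads, you can query the endpoint directly; a small standard-library sketch (`outlier` is the placeholder key from the step above):

```python
import json
import urllib.request

# Prepare a request for the model list exposed by the local endpoint.
# "outlier" is a placeholder API key (any non-empty string works).
req = urllib.request.Request(
    "http://localhost:11434/v1/models",
    headers={"Authorization": "Bearer outlier"},
)

# With the services running, print the available model ids:
# with urllib.request.urlopen(req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```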
```shell
docker compose up -d                # start services
docker compose start                # start services
docker compose stop                 # stop services
docker compose down                 # stop services and delete containers
docker compose restart              # restart services
docker compose ps                   # check status
docker compose logs -f [service]    # view logs
```

- WebSocket relay server that connects the OAI and bridge services; manages request/response routing and maintains persistent client connections
  - port: 8765
- automated Chrome browser that logs into the outlier webpage and injects control scripts; acts as the interface between the wormhole system and Outlier's web app
  - ports: 8766 (proxy), 9222 (chrome debug)
- FastAPI server providing OpenAI-compatible endpoints; transforms requests into outlier API calls, manages conversation state, and logs all interactions
  - port: 11434
- a service that uses Cloudflare to expose the OAI endpoint publicly
- a clean-up service that keeps the log size in check, fully configurable
the end goal here is to create a plug-n-play solution to hijack as many llm providers as possible and get as many free powerful llm agents as we can.

right now only outlier is supported and no account rotation is even feasible, but in the future i will keep adding more providers and proxy/account rotation features to maximize free agent use.

stop surfing the bigweb and enjoy the wildwest of the post-truth, AI-induced collective coma we're in.
- [x] OpenAI-compatible API
- [x] Ollama-compatible API
- [x] template-based prompt injection
- [x] support for file uploads
- [x] support for streaming responses
- [ ] support for more platforms
- [ ] proxy rotation
- [ ] account rotation