Support for alternative LLM backends (OpenAI-compatible / local vLLM) #505
Emilien-Etadam started this conversation in Ideas · Replies: 1 comment
Small correction to my message above: having looked at the code, the right names are … So my real questions, more precisely: …

Thanks.
Hi,
Thanks for open-sourcing Breeze.
Would you consider decoupling the AI Brain from Anthropic-only BYOK? Use cases include pointing the Brain at any OpenAI-compatible endpoint, such as a local vLLM instance.
Config surface could stay minimal:

`provider: anthropic (default) | openai-compatible`

I realize this isn't a trivial swap given how the Agent SDK drives tool-use and the risk engine. A LiteLLM-style translation layer or a provider abstraction inside the Brain Connector both seem viable; you'll know better which fits.
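To make the idea concrete, here is a minimal sketch of what that config surface could resolve to. Everything here is hypothetical: `ProviderConfig`, `resolve_provider`, and the default Anthropic values are illustrative names, not Breeze's actual API; the `/v1` base URL follows vLLM's OpenAI-compatible server convention.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    name: str       # "anthropic" (default) or "openai-compatible"
    base_url: str   # endpoint the Brain Connector would talk to
    model: str

# Hypothetical built-in default; the model name is a placeholder.
_DEFAULTS = {
    "anthropic": ProviderConfig(
        "anthropic", "https://api.anthropic.com", "claude-model-placeholder"
    ),
}

def resolve_provider(cfg: dict) -> ProviderConfig:
    """Resolve the `provider` block of a config dict.

    Falls back to the Anthropic default when no provider is given;
    an openai-compatible provider must supply its own base_url
    (e.g. a local vLLM server) and model name.
    """
    name = cfg.get("provider", "anthropic")
    if name == "anthropic":
        return _DEFAULTS["anthropic"]
    if name == "openai-compatible":
        return ProviderConfig(name, cfg["base_url"], cfg["model"])
    raise ValueError(f"unknown provider: {name!r}")
```

Usage under the same assumptions: `resolve_provider({"provider": "openai-compatible", "base_url": "http://localhost:8000/v1", "model": "my-local-model"})` would target a local vLLM instance, while an empty config keeps today's Anthropic behavior.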
Transparency: I'd be happy to test against a local vLLM instance, and potentially contribute a PoC behind a feature flag. Any code I'd submit would be AI-assisted (Claude Code) with me driving design and testing — flagging upfront so you know what to expect on review.
Two questions:
Thanks!