Releases: doofinder/llm_composer
0.14.0
0.13.1
0.13.0
Changed
- Replaced the previous auto function execution workflow with a manual process powered by FunctionExecutor and FunctionCallHelpers, and added README guidance for executing OpenAI/OpenRouter/Google function calls explicitly (see the sketch below).
Breaking Changes
- Removed the auto-execution helpers/tests and related documentation that assumed functions ran automatically.
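A minimal, hypothetical sketch of the manual flow this release describes. Only the names LlmComposer.Settings, run_completion/3, FunctionExecutor and FunctionCallHelpers come from the notes above; the provider module, the :providers and :functions keys, the helper functions extract_calls/1, execute/2 and to_messages/1, the message shape, and the argument order are assumptions for illustration, so check the README for the real API.

```elixir
# Hypothetical sketch of manual function-call execution (0.13.0+).
# Helper names, option keys, message shape and argument order are assumptions.
function_definitions = []  # your tool/function specs go here
messages = [%{role: "user", content: "What is the weather in Madrid?"}]

settings = %LlmComposer.Settings{
  providers: [{LlmComposer.Providers.OpenAI, model: "gpt-4o-mini"}],  # entry shape assumed
  functions: function_definitions                                     # key name assumed
}

{:ok, response} = LlmComposer.run_completion(settings, messages, [])

# Functions no longer run automatically: inspect the response for calls,
# execute them explicitly, then send the results back to the model.
case FunctionCallHelpers.extract_calls(response) do                   # helper name assumed
  [] ->
    response

  calls ->
    results = Enum.map(calls, &FunctionExecutor.execute(&1, function_definitions))
    follow_up = messages ++ FunctionCallHelpers.to_messages(results)  # helper name assumed
    LlmComposer.run_completion(settings, follow_up, [])
end
```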
0.12.2
What's Changed
- change license to MIT by @addUsername in #57
- a bit docs for custom params by @sonic182 in #56
- release version 0.12.2 by @sonic182 in #58
New Contributors
- @addUsername made their first contribution in #57
Full Changelog: 0.12.0...0.12.2
0.12.0
0.11.1
0.11.0
- Implement multi-provider support with provider routing and failover (see the configuration sketch after this list):
  - Introduced a new :providers list in LlmComposer.Settings to replace the deprecated :provider and :provider_opts keys.
  - Added validation in LlmComposer to enforce/suggest exclusive use of :providers and warn about deprecated keys.
  - Implemented LlmComposer.ProvidersRunner to handle provider execution, supporting multiple providers with fallback logic.
  - Added the LlmComposer.ProviderRouter behaviour for routing strategies covering provider selection, failure handling, and blocking.
  - Provided a simple default provider router, LlmComposer.ProviderRouter.Simple, with exponential-backoff blocking on provider failures.
  - Refactored LlmComposer.run_completion/3 to delegate to ProvidersRunner for provider selection and execution.
- Optimized LlmComposer.Cache.Ets by switching put and delete calls to asynchronous casts, improving performance.
- Maintained backward compatibility with the deprecated settings keys, issuing warnings and supporting legacy calls until version 0.12.0.
- Changed the response_format key to response_schema for a structured output definition that works across multiple providers.
- Structured outputs are now available for the OpenAI provider as well.
- Default JSON module is JSON; falls back to Jason if JSON is not loaded.
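A hedged configuration sketch of the new multi-provider settings. Only the :providers list, LlmComposer.Settings, LlmComposer.ProviderRouter.Simple, response_schema and run_completion/3 appear in the notes above; the provider module names, the {module, opts} entry shape, the :provider_router key, the message shape and the argument order are assumptions.

```elixir
# Illustrative only: entry shape, provider module names and the
# :provider_router / :response_schema field placement are assumptions.
settings = %LlmComposer.Settings{
  # 0.11.0: a :providers list replaces the deprecated :provider / :provider_opts keys.
  # Providers are tried in order, failing over to the next one on error.
  providers: [
    {LlmComposer.Providers.OpenAI, model: "gpt-4o-mini"},
    {LlmComposer.Providers.Google, model: "gemini-1.5-flash"}
  ],
  # The default router blocks a failing provider with exponential backoff.
  provider_router: LlmComposer.ProviderRouter.Simple,
  # Renamed from response_format: a structured-output schema usable across providers.
  response_schema: %{
    "type" => "object",
    "properties" => %{"answer" => %{"type" => "string"}}
  }
}

messages = [%{role: "user", content: "Hello!"}]  # message shape assumed

# run_completion/3 delegates provider selection and failover to LlmComposer.ProvidersRunner.
{:ok, response} = LlmComposer.run_completion(settings, messages, [])
```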
0.10.0
- Add Google (Gemini) provider: Full feature support including chat, functions, streaming, and structured outputs.
- Add Vertex AI integration: The same Google provider can also be used through its Vertex AI API, with enterprise support via OAuth 2.0 authentication using the Goth library (see the sketch below).
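A sketch of wiring the Google provider to Vertex AI. The Goth supervision setup follows Goth's documented pattern; the LlmComposer provider module name and the :vertex and :goth option names are assumptions, and since this release predates the 0.11.0 :providers list, the older :provider / :provider_opts keys are shown.

```elixir
# Start Goth (OAuth 2.0 token fetching) in your supervision tree with
# service-account credentials. This part follows Goth's own documentation.
credentials =
  "GOOGLE_APPLICATION_CREDENTIALS_JSON"
  |> System.fetch_env!()
  |> Jason.decode!()

children = [
  {Goth, name: MyApp.Goth, source: {:service_account, credentials}}
]

Supervisor.start_link(children, strategy: :one_for_one)

# Hypothetical LlmComposer settings pointing the Google provider at Vertex AI;
# the :vertex and :goth option names are assumptions, not the documented API.
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.Google,
  provider_opts: [model: "gemini-1.5-pro", vertex: true, goth: MyApp.Goth]
}
```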
0.8.0
0.7.0
- Add HttpClient module: Introduced a new HttpClient module for improved HTTP handling
- Add streaming read capability: Added streaming reads of LLM provider completions (see the sketch below).
- Documentation and config updates: Updates to README.md and configuration files.
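A purely illustrative sketch of consuming a streamed completion. The release note only states that streaming reads were added; the :stream option, the other option keys, the message shape and the shape of the returned stream are all assumptions.

```elixir
# Hypothetical streaming usage: option names and return shape are assumptions.
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,               # module name assumed
  provider_opts: [model: "gpt-4o-mini", stream: true]   # :stream option assumed
}

messages = [%{role: "user", content: "Tell me a story."}]  # message shape assumed

{:ok, stream} = LlmComposer.run_completion(settings, messages, [])

# Print chunks as they arrive instead of waiting for the full completion.
stream
|> Stream.each(&IO.write/1)
|> Stream.run()
```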