Releases: doofinder/llm_composer

0.14.0

04 Feb 10:22
60a19a0

What's Changed

Full Changelog: 0.13.1...0.14.0

0.13.1

04 Feb 10:17
d1e0fb1

What's Changed

  • feat: ability to set http request headers (openrouter) by @NexPB in #64

New Contributors

  • @NexPB made their first contribution in #64

Full Changelog: 0.13.0...0.13.1

0.13.0

01 Dec 09:10
14e76b2

Changed

  • Replaced the automatic function-execution workflow with a manual process powered by FunctionExecutor and FunctionCallHelpers, and added README guidance on executing OpenAI/OpenRouter/Google function calls explicitly.

Breaking Changes

  • Removed the auto-execution helpers/tests and related documentation that assumed functions ran automatically.
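The manual flow introduced here might look roughly like the sketch below. This is a hypothetical illustration: only the module names (FunctionExecutor, FunctionCallHelpers) and run_completion come from this release note; the function names, arities, and response shape are assumptions — consult the README for the real API.

```elixir
# Hypothetical sketch of manual function execution as of 0.13.0.
# Function names and the response shape below are assumptions, not
# the library's documented API.
{:ok, response} = LlmComposer.run_completion(settings, messages)

case response.actions do
  nil ->
    # No tool calls requested: use the model's answer directly.
    response

  function_calls ->
    # Since 0.13.0 these are NOT executed automatically. Run them
    # explicitly, append the results, and re-run the completion.
    results = LlmComposer.FunctionExecutor.execute(function_calls, my_functions)
    messages = messages ++ LlmComposer.FunctionCallHelpers.to_messages(results)
    LlmComposer.run_completion(settings, messages)
end
```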

0.12.2

06 Nov 13:40
139d10e

What's Changed

Full Changelog: 0.12.0...0.12.2

0.12.0

17 Oct 08:32
c6310d0

What's Changed

Full Changelog: 0.11.2...0.12.0

0.11.1

23 Sep 14:30
b979782

  • Added OpenRouter function-call support and provider message-mapping fixes; preserved assistant tool_calls during auto-executed functions.

0.11.0

23 Sep 13:34
ee4f7d5

  • Implement multi-provider support with provider routing and failover:
    • Introduced a new :providers list in LlmComposer.Settings to replace deprecated :provider and :provider_opts keys.
    • Added validation in LlmComposer to enforce/suggest exclusive use of :providers and warn about deprecated keys.
    • Implemented LlmComposer.ProvidersRunner to handle provider execution, supporting multiple providers with fallback logic.
    • Added LlmComposer.ProviderRouter behaviour for routing strategies on provider selection, failure handling, and blocking.
    • Provided a simple default provider router LlmComposer.ProviderRouter.Simple with exponential backoff blocking on provider failures.
    • Refactored LlmComposer.run_completion/3 to delegate to ProvidersRunner for provider selection and execution.
  • Optimized LlmComposer.Cache.Ets by switching put and delete calls to asynchronous casts, improving performance.
  • Maintained backward compatibility with deprecated settings keys, issuing warnings and supporting legacy calls until version 0.12.0.
  • Renamed the response_format key to response_schema for a structured-output definition that works across multiple providers.
    • Structured outputs now available for OpenAI provider as well.
    • Default JSON module is JSON; falls back to Jason if JSON is not loaded.
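Taken together, the 0.11.0 settings might look roughly like the sketch below. The :providers list, ProviderRouter.Simple, response_schema, and run_completion/3 are named in this release; the provider module names, option keys, and schema shape are illustrative assumptions — check the README for the exact spellings.

```elixir
# Illustrative sketch of the multi-provider settings introduced in 0.11.0.
# :providers, ProviderRouter.Simple, :response_schema, and run_completion/3
# come from the release notes; provider module names and option keys are
# assumptions.
settings = %LlmComposer.Settings{
  # Ordered list: later entries act as failover for earlier ones,
  # replacing the deprecated :provider / :provider_opts keys.
  providers: [
    {LlmComposer.Providers.OpenAI, model: "gpt-4o-mini"},
    {LlmComposer.Providers.Google, model: "gemini-1.5-flash"}
  ],
  # Default router: blocks a failing provider with exponential backoff.
  provider_router: LlmComposer.ProviderRouter.Simple,
  # Renamed from :response_format in this release.
  response_schema: %{
    "type" => "object",
    "properties" => %{"answer" => %{"type" => "string"}}
  }
}

{:ok, response} = LlmComposer.run_completion(settings, messages)
```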

0.10.0

03 Sep 13:58
284d791

  • Add Google (Gemini) provider: Full feature support including chat, functions, streaming, and structured outputs.
  • Add Vertex AI integration: the same Google provider can also be used through the Vertex AI API, with enterprise OAuth 2.0 authentication via the Goth library.

0.8.0

12 Aug 13:43
2bb7a01

  • Add cost tracking for the OpenRouter provider: introduced cost-tracking functionality to monitor API usage expenses.

0.7.0

31 Jul 07:57
665e27b

  • Add HttpClient module: introduced a new HttpClient module for improved HTTP handling.
  • Add streaming read capability: added streaming reads for LLM provider completions.
  • Documentation and config updates: updates to README.md and configuration files.