# Changelog

All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [1.0.0]

Major refactor with breaking changes: a complete provider rewrite and a new modular architecture.

**Requirements:** Ruby 3.1+, Rails 7.0+/8.0+/8.1+
### What's Changed

- Major Framework Refactor: ActiveAgent v1.0.0 by @sirwolfgang in #259
- Add API gem version testing and fix Anthropic 1.14.0 compatibility by @sirwolfgang in #265
- Fix version compatibility issue for VitePress by @sirwolfgang in #266
- Add missing API Keys by @sirwolfgang in #267
- Fix website links by @sirwolfgang in #268
- chore: remove `standard` from dev dependencies by @okuramasafumi in #272
- Add thread safety tests by @sirwolfgang in #275
- Refactor: Leverage Native Gem Types Across All Providers by @sirwolfgang in #271
- Improved Usage Tracking by @sirwolfgang in #274
### New Contributors

- @okuramasafumi made their first contribution in #272
**Full Changelog**: https://github.com/activeagents/activeagent/compare/v0.6.3...v1.0.0
### Universal Tools Format

```ruby
# Single format works across all providers (Anthropic, OpenAI, OpenRouter, Ollama, Mock)
tools: [{
  name: "get_weather",
  description: "Get current weather",
  parameters: {
    type: "object",
    properties: {
      location: { type: "string", description: "City and state" }
    },
    required: ["location"]
  }
}]

# Tool choice normalization
tool_choice: "auto"                  # Let model decide
tool_choice: "required"              # Force tool use
tool_choice: { name: "get_weather" } # Force a specific tool
```

Tools are automatically converted to provider-specific formats. Old formats still work (backward compatible).
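As a rough illustration of what that automatic conversion involves, the sketch below maps the universal tool format onto OpenAI's documented function-calling shape. The helper name `to_openai_tool` is hypothetical, not part of the gem's public API.

```ruby
# Hypothetical helper showing the shape of the universal-to-OpenAI
# conversion; the gem performs this internally per provider.
def to_openai_tool(tool)
  {
    type: "function",
    function: {
      name: tool[:name],
      description: tool[:description],
      parameters: tool[:parameters]
    }
  }
end

universal = {
  name: "get_weather",
  description: "Get current weather",
  parameters: { type: "object", properties: {}, required: [] }
}

openai_tool = to_openai_tool(universal)
```

Anthropic, Ollama, and the mock provider each get an analogous mapping, so agent code only ever deals with the single universal shape.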
### Model Context Protocol (MCP) Support

```ruby
# Universal MCP format works across providers (Anthropic, OpenAI)
class MyAgent < ActiveAgent::Base
  generate_with :anthropic, model: "claude-haiku-4-5"

  def research
    prompt(
      message: "Research AI developments",
      mcps: [{
        name: "github",
        url: "https://api.githubcopilot.com/mcp/",
        authorization: ENV["GITHUB_MCP_TOKEN"]
      }]
    )
  end
end
```

- Common format `{name: "server", url: "https://...", authorization: "token"}` auto-converts to provider-native formats
- Anthropic: Beta API support, up to 20 servers per request
- OpenAI: Responses API with pre-built connectors (Dropbox, Google Drive, etc.)
- Backwards compatible: accepts both `mcps` and `mcp_servers` parameters
- Comprehensive documentation with tested examples
- Full VCR test coverage with real MCP endpoints
- Shared `ToolChoiceClearing` concern eliminates duplication across providers
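To show why a shared tool-choice-clearing concern is useful, here is a plain-Ruby sketch of the idea: once a forced tool choice has been satisfied, it is reset so the model can answer normally on the next turn. Names and logic are illustrative, not the gem's actual source.

```ruby
# Illustrative concern: a forced tool_choice (a specific tool, or
# "required") is cleared back to "auto" after the forced call, so the
# model is not forced into the same tool on every subsequent turn.
module ToolChoiceClearing
  def clear_forced_tool_choice(options)
    forced = options[:tool_choice].is_a?(Hash) || options[:tool_choice] == "required"
    forced ? options.merge(tool_choice: "auto") : options
  end
end

class DemoProvider
  include ToolChoiceClearing
end

cleared = DemoProvider.new.clear_forced_tool_choice({ tool_choice: { name: "get_weather" } })
# => { tool_choice: "auto" }
```

Housing this in one concern means each provider adapter gets identical behavior instead of re-implementing the reset.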
### Migration Guide

```ruby
# Gemfile - Remove unofficial gems
gem "ruby-openai"
gem "ruby-anthropic"

# Add official provider SDKs
gem "openai"    # Official OpenAI SDK
gem "anthropic" # Official Anthropic SDK
```

Run `bundle install` after updating.
```ruby
# Before
class MyAgent < ActiveAgent::ActionPrompt::Base
end

# After
class MyAgent < ActiveAgent::Base
end
```

```ruby
# Before - options wrapped in an options key
class MyAgent < ActiveAgent::Base
  def chat
    prompt(message: "Hello", options: { temperature: 0.7 })
  end
end

# After - options passed directly (at class or call level)
class MyAgent < ActiveAgent::Base
  generate_with :openai, model: "gpt-4o-mini", temperature: 0.7

  def chat
    prompt("Hello") # Uses class-level config
  end

  def chat_creative
    prompt("Hello", temperature: 1.0) # Override per-call
  end
end
```

```ruby
# Before
module ActiveAgent::GenerationProvider
  class CustomProvider < Base
  end
end

# After
module ActiveAgent::Providers
  class CustomProvider < BaseProvider
  end
end
```

```shell
# Before
rails g active_agent MyAgent action

# After
rails g active_agent:agent MyAgent action
```

```ruby
# Remove from config/initializers/activeagent.rb
ActiveAgent.configure do |config|
  config.retries = true
  config.retries_count = 5
end
```

```yaml
# Use provider-specific settings in config/active_agent.yml
openai:
  service: "OpenAI"
  max_retries: 5
  timeout: 600.0
```

Template paths:

- `app/views/agents/{agent}/instructions.md` (no `.erb` extension by default for instructions)
- `app/views/agents/{agent}/{action}.md.erb`
### Mock Provider for Testing

```ruby
class MyAgent < ActiveAgent::Base
  generate_with :mock
end

response = MyAgent.prompt("Test").generate_now
# Returns predictable responses without API calls
```

### Mixed Provider Support

```ruby
class MyAgent < ActiveAgent::Base
  generate_with :openai, model: "gpt-4o-mini"
  embed_with :anthropic, model: "claude-3-5-sonnet-20241022"
end
```

### Prompt Previews

```ruby
preview = MyAgent.prompt("Hello").prompt_preview
# Shows instructions, messages, tools before execution
```

### Callback Lifecycle

- `before_generation`, `after_generation`, `around_generation`
- `before_prompt`, `after_prompt`, `around_prompt`
- `before_embed`, `after_embed`, `around_embed`
- `on_stream_open`, `on_stream`, `on_stream_close`
- Rails-style callback control: `prepend_*`, `skip_*`, `append_*`
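The ordering these hooks imply can be sketched in plain Ruby. This is a toy callback chain to illustrate before/after sequencing around a generation, not ActiveAgent's implementation.

```ruby
# Toy callback chain: before hooks, then the wrapped work, then after
# hooks - the same ordering the lifecycle above provides for generation,
# prompt, and embed.
class TinyCallbacks
  def initialize
    @hooks = Hash.new { |hash, key| hash[key] = [] }
  end

  def on(kind, &block)
    @hooks[kind] << block
  end

  def run
    @hooks[:before].each(&:call)
    result = yield
    @hooks[:after].each(&:call)
    result
  end
end

log = []
callbacks = TinyCallbacks.new
callbacks.on(:before) { log << :before_generation }
callbacks.on(:after)  { log << :after_generation }
callbacks.run { log << :generate }
log # => [:before_generation, :generate, :after_generation]
```

The `around_*` variants wrap the whole `run`, and `prepend_*`/`append_*` control where in each list a new hook lands.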
### Multi-Input Embeddings

```ruby
response = MyAgent.embed(inputs: ["Text 1", "Text 2"]).embed_now
vectors = response.data.map { |d| d[:embedding] }
```

### Normalized Usage Statistics

```ruby
response = MyAgent.prompt("Hello").generate_now

# Works across all providers
response.usage.input_tokens
response.usage.output_tokens
response.usage.total_tokens

# Provider-specific fields when available
response.usage.cached_tokens    # OpenAI, Anthropic
response.usage.reasoning_tokens # OpenAI o1 models
response.usage.service_tier     # Anthropic
```

### Enhanced Instrumentation for APM Integration

- Unified event structure: `prompt.active_agent` and `embed.active_agent` (top-level) plus `prompt.provider.active_agent` and `embed.provider.active_agent` (per-API-call)
- Event payloads include comprehensive data for monitoring tools (New Relic, DataDog, etc.):
  - Request parameters: `model`, `temperature`, `max_tokens`, `top_p`, `stream`, `message_count`, `has_tools`
  - Usage data: `input_tokens`, `output_tokens`, `total_tokens`, `cached_tokens`, `reasoning_tokens`, `audio_tokens`, `cache_creation_tokens` (critical for cost tracking)
  - Response metadata: `finish_reason`, `response_model`, `response_id`, `embedding_count`
- Top-level events report cumulative usage across all API calls in multi-turn conversations
- Provider-level events report per-call usage for granular tracking

### Multi-Turn Usage Tracking

- `response.usage` now returns cumulative token counts across all API calls during tool calling
- New `response.usages` array contains individual usage objects from each API call
- `Usage` objects support addition: `usage1 + usage2` for combining statistics
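A minimal sketch of what usage addition looks like, assuming field names from the normalized statistics above; this `Struct` stands in for the gem's actual `Usage` class.

```ruby
# Stand-in for the gem's Usage value object: per-call token counts that
# can be summed with + to get cumulative multi-turn usage.
Usage = Struct.new(:input_tokens, :output_tokens, keyword_init: true) do
  def total_tokens
    input_tokens + output_tokens
  end

  def +(other)
    Usage.new(
      input_tokens:  input_tokens + other.input_tokens,
      output_tokens: output_tokens + other.output_tokens
    )
  end
end

first_call = Usage.new(input_tokens: 120, output_tokens: 40) # initial prompt
tool_turn  = Usage.new(input_tokens: 310, output_tokens: 85) # tool-call follow-up
cumulative = first_call + tool_turn

cumulative.total_tokens # => 555
```

In the gem, `response.usage` is the summed value and `response.usages` holds the individual per-call objects.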
### Provider Enhancements

- OpenAI Responses API: `api: :responses` or `api: :chat`
- Anthropic JSON object mode with automatic extraction
- OpenRouter: quantization, provider preferences, web search
- Flexible naming: `:openai` or `:open_ai`, `:openrouter` or `:open_router`
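The flexible-naming behavior amounts to normalizing snake_case aliases onto a canonical symbol. A one-line sketch of the idea (the gem's real lookup may differ):

```ruby
# Normalize provider-name aliases: :open_ai -> :openai, :open_router -> :openrouter.
def normalize_provider(name)
  name.to_s.delete("_").to_sym
end

normalize_provider(:open_ai)     # => :openai
normalize_provider(:open_router) # => :openrouter
```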
### Rails 8.1 Support

### Comprehensive Documentation
- VitePress site at docs.activeagents.ai
- All examples tested and validated
### Provider Architecture

- Unified `BaseProvider` interface across all providers
- Retry logic moved to provider SDKs (automatic exponential backoff)
- Migrated to official SDKs: `openai` gem and `anthropic` gem
- Type-safe options with per-provider definitions
### Configuration
- Options configurable at class level, instance level, or per-call
- Simplified parameter handling pattern
### Requirements
- Ruby 3.1+ (previously 3.0+)
### Testing
- Reorganized by feature and provider integration
- All documentation examples validated
### Providers

- OpenAI streaming with functions/tools
- Ollama streaming support
- Anthropic tool choice modes (`any` and `tool`)
- OpenRouter model fallback and parameter naming
- Provider gem loading errors
### Framework
- Streaming lifecycle with function/tool calls
- Multi-tool and multi-turn conversation handling
- Options mutation during generation
- Template rendering without blocks
- Schema generator key symbolization
- Rails 8.0 and 8.1 compatibility
- Usage extraction across OpenAI/Anthropic response formats
### Namespaces

- `ActiveAgent::ActionPrompt` → use `ActiveAgent::Base`
- `ActiveAgent::GenerationProvider` → use `ActiveAgent::Providers`
### Configuration

- `ActiveAgent.configuration.retries` → use provider `max_retries`
- `ActiveAgent.configuration.retries_count` → use provider `max_retries`
- `ActiveAgent.configuration.retries_on` → handled by provider SDKs
### Modules

- `ActiveAgent::QueuedGeneration` → `Queueing` concern
- `ActiveAgent::Rescuable` → `Rescue` concern
- `ActiveAgent::Sanitizers` → moved to concerns
- `ActiveAgent::PromptHelper` → moved to concerns
- CI configuration for stable GitHub releases moving forward.
- Test coverage for core features: ActionPrompt rendering, tool calls, and embeddings.
- Enhanced streaming to support tool calls during a stream; previously, streaming mode blocked tool call execution.
- Fixed a layout rendering bug so that views now render correctly without requiring a block.
- The Generation Provider module and Action Prompt READMEs have been removed; they will be updated along with the main README in the next release.