Conversation
## Reviewer's Guide

Adds configurable LLM provider support to `AIAssistant`, deferring provider-specific imports to runtime and making the OpenAI/Anthropic/Google dependencies optional, while updating structured output behavior, tests, and packaging to reflect provider-based selection instead of a hard `ChatOpenAI` dependency.

### Sequence diagram for provider-based LLM resolution in `get_llm`

```mermaid
sequenceDiagram
    actor Caller
    participant AIAssistant
    participant ProviderLookup as PROVIDER_LLM_LOOKUP
    participant ImportSystem as importlib
    participant LangchainModule
    participant LLMClass
    Caller->>AIAssistant: __init__(provider)
    AIAssistant-->>AIAssistant: store _provider
    Caller->>AIAssistant: get_llm()
    activate AIAssistant
    AIAssistant->>AIAssistant: get_model(), get_temperature(), get_model_kwargs()
    AIAssistant->>ProviderLookup: get(_provider)
    alt provider not found
        ProviderLookup-->>AIAssistant: None
        AIAssistant-->>Caller: raise AIAssistantMisconfiguredError
    else provider found
        ProviderLookup-->>AIAssistant: ProviderConfig
        AIAssistant->>ImportSystem: import_module(langchain_module)
        alt langchain module missing
            ImportSystem-->>AIAssistant: ImportError
            AIAssistant-->>Caller: raise ImportError (extra install message)
        else module imported
            ImportSystem-->>LangchainModule: module
            AIAssistant->>LangchainModule: getattr(llm_class)
            alt llm_class attribute missing
                LangchainModule-->>AIAssistant: AttributeError
                AIAssistant-->>Caller: raise ImportError (invalid LLM class)
            else llm_class found
                LangchainModule-->>LLMClass: class
                AIAssistant-->>AIAssistant: construct llm with model, temperature, model_kwargs
                AIAssistant-->>Caller: llm instance
            end
        end
    end
    deactivate AIAssistant
```
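The resolution flow in the diagram can be sketched in plain Python. This is a hypothetical sketch mirroring the diagram's names (`PROVIDER_LLM_LOOKUP`, `ProviderConfig`, the `import_module`/`getattr` steps); the actual implementation lives in the PR and raises `AIAssistantMisconfiguredError`, which a plain `ValueError` stands in for here:

```python
import importlib
from typing import Literal, TypedDict

ProviderName = Literal["openai", "anthropic", "google"]


class ProviderConfig(TypedDict):
    langchain_module: str
    llm_class: str


# Module path + class name per provider, per the diagram's PROVIDER_LLM_LOOKUP.
PROVIDER_LLM_LOOKUP: dict[str, ProviderConfig] = {
    "openai": {"langchain_module": "langchain_openai", "llm_class": "ChatOpenAI"},
    "anthropic": {"langchain_module": "langchain_anthropic", "llm_class": "ChatAnthropic"},
    "google": {"langchain_module": "langchain_google_genai", "llm_class": "ChatGoogleGenerativeAI"},
}


def import_llm_class(provider: str):
    config = PROVIDER_LLM_LOOKUP.get(provider)
    if config is None:
        # The PR raises AIAssistantMisconfiguredError here.
        raise ValueError(f"Invalid provider={provider!r}")
    try:
        module = importlib.import_module(config["langchain_module"])
    except ImportError as err:
        # Provider is valid but its optional extra isn't installed.
        raise ImportError(
            f"Install it with: pip install django-ai-assistant[{provider}]"
        ) from err
    try:
        return getattr(module, config["llm_class"])
    except AttributeError as err:
        # Lookup table is malformed: module exists but the class name doesn't.
        raise ImportError(f"Invalid LLM class: {config['llm_class']}") from err
```

Each failure branch from the diagram maps to one `raise` above; the happy path returns the chat model class, which `get_llm` then instantiates with `model`, `temperature`, and `model_kwargs`.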
### Class diagram for updated `AIAssistant` provider support

```mermaid
classDiagram
    class AIAssistant {
        <<abstract>>
        -user
        -request
        -view
        -_method_tools : Sequence~BaseTool~
        -_provider : ProviderName
        -_init_kwargs : dict~str, Any~
        +__init__(user, request, view, provider, **kwargs)
        +get_llm() BaseChatModel
        +get_structured_output_llm() Runnable
        +get_model() str
        +get_temperature() float
        +get_model_kwargs() dict~str, Any~
        -_import_llm_class()
    }
    class ProviderConfig {
        <<TypedDict>>
        +langchain_module : str
        +llm_class : str
    }
    class ProviderName {
        <<Literal>>
        +openai
        +anthropic
        +google
    }
    class PROVIDER_LLM_LOOKUP {
        <<dict of ProviderName to ProviderConfig>>
    }
    AIAssistant --> ProviderName : uses
    AIAssistant --> ProviderConfig : resolves
    PROVIDER_LLM_LOOKUP --> ProviderName : keys
    PROVIDER_LLM_LOOKUP --> ProviderConfig : values
```
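Put together, the diagram implies an `__init__`/`get_llm` shape roughly like the following. This is a minimal, self-contained stand-in (the `FakeChatModel` class and `Assistant` attribute defaults are invented for illustration; the real code resolves a langchain chat class via `PROVIDER_LLM_LOOKUP` instead):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class FakeChatModel:
    # Stand-in for a langchain BaseChatModel subclass (ChatOpenAI, etc.).
    model: str
    temperature: float
    model_kwargs: dict[str, Any] = field(default_factory=dict)


class Assistant:
    # Mirrors the AIAssistant surface from the class diagram.
    model = "gpt-test"
    temperature = 1.0

    def __init__(self, provider: str = "openai", **kwargs: Any):
        self._provider = provider      # stored at __init__, per the sequence diagram
        self._init_kwargs = kwargs

    def get_model(self) -> str:
        return self.model

    def get_temperature(self) -> float:
        return self.temperature

    def get_model_kwargs(self) -> dict[str, Any]:
        return {}

    def get_llm(self) -> FakeChatModel:
        # The real code resolves the class via PROVIDER_LLM_LOOKUP + importlib,
        # then constructs it with model, temperature, and model_kwargs.
        return FakeChatModel(
            model=self.get_model(),
            temperature=self.get_temperature(),
            model_kwargs=self.get_model_kwargs(),
        )


llm = Assistant(provider="anthropic").get_llm()
```

The point of the design is that switching providers only changes the `provider` argument; the `get_model`/`get_temperature`/`get_model_kwargs` hooks stay identical across providers.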
## Codecov Report

❌ Patch coverage is …

```
@@           Coverage Diff            @@
##             main     #216      +/-  ##
==========================================
+ Coverage   93.90%   94.10%   +0.20%
==========================================
  Files          19       19
  Lines         706      730      +24
  Branches       51       54       +3
==========================================
+ Hits          663      687      +24
  Misses         33       33
  Partials       10       10
```
Flags with carried forward coverage won't be shown.
…example group to simplify poetry install
The inline comments below refer to these lines from `_import_llm_class`:

```python
        f"Install it with: pip install django-ai-assistant[{self._provider}]"
    ) from err
```

```python
    return getattr(
```
I thought about adding a test for this `getattr` call, since in theory it could fail if `PROVIDER_LLM_LOOKUP` is malformed (i.e. the langchain lib name is correct but the chat class name isn't), but opted to leave it out since this is fully set up on our side (and the test itself wouldn't actually verify that the real-life classes exist).
to avoid malformed names, could we rely on some typing maybe?
I'll give a look at that
Updated. Before we merge this, I'll get back to investigating the issue with the npm publishing flow.
Hey - I've found 4 issues, and left some high-level feedback:

- In `_import_llm_class`, the error message for an invalid provider currently interpolates `PROVIDER_LLM_LOOKUP.keys()`, which will render as a `dict_keys` object; consider converting to a sorted, comma-separated string (e.g., `', '.join(PROVIDER_LLM_LOOKUP.keys())`) to make the message clearer.
- The provider identifiers (`"openai"`, `"anthropic"`, `"google"`) are repeated across `ProviderName`, `PROVIDER_LLM_LOOKUP`, and call sites; consider centralizing them (e.g., via an Enum or module-level constants) to reduce the risk of typos and keep provider names in sync.
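A quick illustration of the first point, using a stand-in dict for `PROVIDER_LLM_LOOKUP`:

```python
lookup = {"openai": {}, "anthropic": {}, "google": {}}

# Interpolating dict_keys directly leaks its repr into the message:
noisy = f"Invalid provider, please use one of: {lookup.keys()}"

# Joining a sorted list reads cleanly:
clean = f"Invalid provider, please use one of: {', '.join(sorted(lookup.keys()))}"

print(noisy)  # ...: dict_keys(['openai', 'anthropic', 'google'])
print(clean)  # ...: anthropic, google, openai
```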
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_import_llm_class`, the error message for an invalid provider currently interpolates `PROVIDER_LLM_LOOKUP.keys()`, which will render as a `dict_keys` object; consider converting to a sorted, comma-separated string (e.g., `', '.join(PROVIDER_LLM_LOOKUP.keys())`) to make the message clearer.
- The provider identifiers (`"openai"`, `"anthropic"`, `"google"`) are repeated across `ProviderName`, `PROVIDER_LLM_LOOKUP`, and call sites; consider centralizing them (e.g., via an Enum or module-level constants) to reduce the risk of typos and keep provider names in sync.
## Individual Comments
### Comment 1
<location path="tests/test_helpers/test_assistants.py" line_range="478-479" />
<code_context>
+ AIAssistant.clear_cls_registry()
+
+
+def test_AIAssistant_get_llm_invalid_provider():
+ class InvalidAIAssistant(AIAssistant):
+ id = "override_invalid_assistant" # noqa: A003
+ name = "Override Invalid Assistant"
+ instructions = "Instructions"
+ model = "gpt-test"
+
+ assistant = InvalidAIAssistant(provider="invalid")
+ with pytest.raises(AIAssistantMisconfiguredError):
+ assistant.get_llm()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Assert the error message for invalid provider to guarantee helpful feedback remains intact
Right now the test only checks that `AIAssistantMisconfiguredError` is raised. To also verify the error remains user-friendly and still lists valid providers, consider capturing the exception and asserting on part of its message, e.g. `with pytest.raises(AIAssistantMisconfiguredError) as exc:` followed by `assert "supported providers" in str(exc.value)`.
```suggestion
with pytest.raises(AIAssistantMisconfiguredError) as exc:
assistant.get_llm()
assert "supported providers" in str(exc.value)
```
</issue_to_address>
### Comment 2
<location path="tests/test_helpers/test_assistants.py" line_range="507-508" />
<code_context>
+ assistant.get_llm()
+
+
+def test_AIAssistant_get_llm_uninstalled_provider(monkeypatch):
+ class UninstalledAIAssistant(AIAssistant):
+ id = "override_uninstalled_assistant" # noqa: A003
+ name = "Override Uninstalled Assistant"
+ instructions = "Instructions"
+ model = "gpt-test"
+
+ assistant = UninstalledAIAssistant(provider="uninstalled")
+
+ # Simulates a scenario where the user tries to use a valid provider
+ # that isn't installed with lib (i.e.: user tries to access the
+ # openai provider, but langchain_openai isn't installed)
+ from django_ai_assistant.helpers import assistants
+
+ monkeypatch.setattr(
+ assistants,
+ "PROVIDER_LLM_LOOKUP",
+ {
+ "uninstalled": {
+ "langchain_module": "test_module",
+ "llm_class": "UninstalledChat",
+ },
+ },
+ )
+
+ with pytest.raises(ImportError):
+ assistant.get_llm()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Tighten assertions for uninstalled provider to cover the guidance in the ImportError message
Right now the test only asserts that an `ImportError` is raised. Since the runtime error includes a specific installation hint (`pip install django-ai-assistant[provider]`), please also assert that the message contains the expected guidance (e.g. references `pip install django-ai-assistant[uninstalled]` or the configured module name) to better lock in the expected developer experience and avoid regressions in the error text.
```suggestion
with pytest.raises(ImportError) as excinfo:
assistant.get_llm()
assert "pip install django-ai-assistant[uninstalled]" in str(excinfo.value)
```
</issue_to_address>
### Comment 3
<location path="tests/test_helpers/test_assistants.py" line_range="537-538" />
<code_context>
+ assistant.get_llm()
+
+
@pytest.mark.vcr
def test_AIAssistant_pydantic_structured_output():
from pydantic import BaseModel
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests to ensure `get_structured_output_llm` behavior varies correctly by provider
This change now branches on `self._provider == "openai"` instead of `isinstance(llm, ChatOpenAI)`, but current tests only cover `get_llm`, not this provider-specific behavior. Please add tests that assert: (1) for `openai`, `with_structured_output(..., method="json_schema")` is used, and (2) for non‑OpenAI providers, `method="json_mode"` is used. You can likely do this with small unit tests by mocking `get_llm` / the LLM object rather than using VCR.
</issue_to_address>
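A sketch of such unit tests, using a minimal stand-in class that mirrors the PR's provider branching so no VCR/network is needed. The `Assistant` class and `structured_output_method` helper here are invented for illustration; real tests would target the project's registered assistant classes and stub their `get_llm`:

```python
from unittest.mock import MagicMock


class Assistant:
    """Minimal stand-in mirroring the PR's provider-based branching."""

    structured_output = object()  # placeholder schema

    def __init__(self, provider: str = "openai"):
        self._provider = provider

    def get_llm(self):
        raise NotImplementedError  # stubbed per-test

    def get_structured_output_llm(self):
        llm = self.get_llm()
        # openai supports json_schema; other providers fall back to json_mode.
        method = "json_schema" if self._provider == "openai" else "json_mode"
        return llm.with_structured_output(self.structured_output, method=method)


def structured_output_method(provider: str) -> str:
    """Stub the LLM and capture the `method` kwarg passed to with_structured_output."""
    assistant = Assistant(provider=provider)
    fake_llm = MagicMock()
    assistant.get_llm = lambda: fake_llm
    assistant.get_structured_output_llm()
    return fake_llm.with_structured_output.call_args.kwargs["method"]


def test_structured_output_uses_json_schema_for_openai():
    assert structured_output_method("openai") == "json_schema"


def test_structured_output_uses_json_mode_for_other_providers():
    assert structured_output_method("anthropic") == "json_mode"
    assert structured_output_method("google") == "json_mode"
```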
### Comment 4
<location path="django_ai_assistant/helpers/assistants.py" line_range="56" />
<code_context>
from django_ai_assistant.langchain.tools import tool as tool_decorator
+ProviderName = Literal["openai", "anthropic", "google"]
+
+
</code_context>
<issue_to_address>
**issue (complexity):** Consider replacing the string-based provider config and inline conditionals with small factory and method maps that resolve provider-specific classes and behavior once, making the multi-provider design clearer and more maintainable.
You can keep the multi‑provider feature but simplify the indirection and make behavior more explicit with a few small refactors.
### 1. Replace `ProviderConfig` + stringly imports with small factory map
You don’t really need `TypedDict`, string class names, and `getattr`. A small factory map gives you:
- deferred imports per provider
- type safety (no string class names)
- no repeated `importlib` + `getattr` on every call
```python
from importlib import import_module

# keep ProviderName
ProviderName = Literal["openai", "anthropic", "google"]


def _resolve_llm_class(provider: ProviderName) -> type[BaseChatModel]:
    try:
        return PROVIDER_LLM_FACTORIES[provider]()
    except KeyError:
        valid = ", ".join(PROVIDER_LLM_FACTORIES.keys())
        raise AIAssistantMisconfiguredError(
            f"Invalid provider={provider}, please use one of: {valid}"
        )


# small, explicit factories with deferred imports
PROVIDER_LLM_FACTORIES: dict[ProviderName, Callable[[], type[BaseChatModel]]] = {
    "openai": lambda: import_module("langchain_openai").ChatOpenAI,
    "anthropic": lambda: import_module("langchain_anthropic").ChatAnthropic,
    "google": lambda: import_module("langchain_google_genai").ChatGoogleGenerativeAI,
}
```
Then cache the resolved class on the instance to avoid repeated lookup and keep the public surface “provider”-based:
```python
class AIAssistant(abc.ABC):
    _provider: ProviderName
    _llm_class: type[BaseChatModel]

    def __init__(..., provider: ProviderName = "openai", **kwargs: Any):
        ...
        self._provider = provider
        self._llm_class = _resolve_llm_class(provider)
        ...

    def get_llm(self) -> BaseChatModel:
        model = self.get_model()
        temperature = self.get_temperature()
        model_kwargs = self.get_model_kwargs()
        kwargs: dict[str, Any] = {"model": model, "model_kwargs": model_kwargs}
        if temperature is not None:
            kwargs["temperature"] = temperature
        return self._llm_class(**kwargs)
```
This keeps:
- the `provider` API
- deferred, per‑provider imports
- no stringly‑typed config or `TypedDict`
- no dynamic imports inside `get_llm` (resolution happens once in `__init__`)
### 2. Remove provider string check for structured output
Instead of a hard‑coded `if self._provider == "openai"`, centralize provider‑specific behavior in a small map. This scales better and avoids scattering provider checks:
```python
PROVIDER_STRUCTURED_METHOD: dict[ProviderName, str] = {
    "openai": "json_schema",
    # other providers can override in the future
}


class AIAssistant(abc.ABC):
    ...

    def get_structured_output_llm(self) -> Runnable:
        if not self.structured_output:
            raise ValueError("structured_output is not defined")
        llm = self.get_llm()
        method = PROVIDER_STRUCTURED_METHOD.get(self._provider, "json_mode")
        return llm.with_structured_output(self.structured_output, method=method)
```
This keeps all current behavior (OpenAI uses `json_schema`, others use `json_mode`), but removes a growing `if self._provider == ...` chain and keeps provider‑specific behavior explicit and local.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Summary by Sourcery
Introduce provider selection to AIAssistant and wire it into LLM and structured output creation.
Closes #207
Summary by Sourcery
Introduce configurable LLM providers for AIAssistant and remove the hard runtime and install-time dependency on OpenAI-specific classes.