2 changes: 1 addition & 1 deletion src/praisonai-agents/praisonaiagents/llm/llm.py
```diff
@@ -107,7 +107,7 @@ class LLM:
     MODEL_WINDOWS = {
         # OpenAI
         "gpt-4": 6144,        # 8,192 actual
-        "gpt-4o-mini": 96000, # 128,000 actual
+        "gpt-4o": 96000,      # 128,000 actual
         "gpt-4o-mini": 96000, # 128,000 actual
```
Comment on lines +110 to 111

Action required
1. Provider-prefixed model mismatch 🐞 Bug ✓ Correctness

When using the LiteLLM path (provider/model strings such as `openai/gpt-4o`), `MODEL_WINDOWS` keys (e.g., `gpt-4o`) will not match because `get_context_size()` uses `self.model.startswith(model_prefix)` without normalizing the provider prefix. As a result, the newly added `gpt-4o` entry can still fall through to the 4000-token default, so the PR may not fix the reported behavior for LiteLLM usage.
Agent Prompt
### Issue description
`LLM.get_context_size()` uses `self.model.startswith(model_prefix)` against `MODEL_WINDOWS` keys that are *not* provider-prefixed. For LiteLLM usage where `self.model` commonly looks like `openai/gpt-4o`, no entries match and the method returns the 4000-token fallback.

### Issue Context
- Agent passes provider/model strings directly into `LLM(model=...)` when the model contains `/`.
- `MODEL_WINDOWS` contains keys like `gpt-4o`, `gpt-4o-mini`, etc.

### Fix Focus Areas
- src/praisonai-agents/praisonaiagents/llm/llm.py[3793-3798]
- src/praisonai-agents/praisonaiagents/llm/llm.py[106-114]

### Suggested implementation notes
- In `get_context_size()`, compute a normalized model id:
  - `raw = self.model.lower().strip()`
  - `name = raw.split('/', 1)[-1]` (strip provider prefix)
- Do exact match first: `if name in self.MODEL_WINDOWS: return ...`
- Then do prefix match (prefer longest prefix; see next finding) and finally fallback.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
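The normalization steps suggested above can be sketched as follows. This is an illustrative, standalone sketch: the real `MODEL_WINDOWS` lives on the `LLM` class and is much larger, and the real method takes `self` rather than a `model` argument.

```python
# Illustrative excerpt; the real table is a class attribute on LLM.
MODEL_WINDOWS = {
    "gpt-4": 6144,        # 8,192 actual
    "gpt-4o": 96000,      # 128,000 actual
    "gpt-4o-mini": 96000, # 128,000 actual
}

def get_context_size(model: str) -> int:
    """Safe input-size limit for a possibly provider-prefixed model id."""
    raw = (model or "").lower().strip()
    name = raw.split("/", 1)[-1]   # "openai/gpt-4o" -> "gpt-4o"
    if name in MODEL_WINDOWS:      # exact match first
        return MODEL_WINDOWS[name]
    # Prefix match, longest key first, so "gpt-4o-mini" is not shadowed by "gpt-4"
    for prefix in sorted(MODEL_WINDOWS, key=len, reverse=True):
        if name.startswith(prefix):
            return MODEL_WINDOWS[prefix]
    return 4000  # safe default
```

With this shape, `"openai/gpt-4o"` and bare `"gpt-4o"` resolve to the same window, and unknown models still hit the 4000-token fallback.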

Comment on lines 107 to 111
Contributor
⚠️ Potential issue | 🟠 Major

Normalize provider-qualified model names before relying on this entry.

This fixes bare `"gpt-4o"`, but line 3795 still matches with `self.model.startswith(model_prefix)`. If callers pass LiteLLM-style ids like `openai/gpt-4o` or `openai/gpt-4o-mini` (which this class already acknowledges elsewhere), this table is skipped and `get_context_size()` falls back to 4000. That leaves the context-window bug in place for provider-prefixed models.
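A minimal reproduction of the mismatch, using an illustrative one-entry table:

```python
# Illustrative repro: a provider-prefixed id never startswith a bare key.
MODEL_WINDOWS = {"gpt-4o": 96000}

model = "openai/gpt-4o"  # LiteLLM-style provider/model id
matched = any(model.startswith(prefix) for prefix in MODEL_WINDOWS)
print(matched)  # False -> lookup falls through to the 4000-token default
```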

💡 Proposed fix
```diff
     def get_context_size(self) -> int:
         """Get safe input size limit for this model"""
-        for model_prefix, size in self.MODEL_WINDOWS.items():
-            if self.model.startswith(model_prefix):
+        normalized_model = self.model.split("/", 1)[-1] if self.model else ""
+        for model_prefix, size in self.MODEL_WINDOWS.items():
+            if normalized_model.startswith(model_prefix):
                 return size
         return 4000  # Safe default
```
🧰 Tools
🪛 Ruff (0.15.4)

[warning] 107-147: Mutable default value for class attribute (RUF012)
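The RUF012 warning flags the mutable class-level dict itself, not this PR's change. One common way to satisfy it (an assumption about a possible follow-up, not something this PR does) is to annotate the attribute as a `ClassVar`:

```python
from typing import ClassVar

class LLM:
    # ClassVar tells type checkers (and Ruff's RUF012) that this dict is an
    # intentionally shared class attribute, not a per-instance mutable default.
    MODEL_WINDOWS: ClassVar[dict[str, int]] = {
        "gpt-4o": 96000,      # 128,000 actual
        "gpt-4o-mini": 96000, # 128,000 actual
    }
```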

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/llm/llm.py` around lines 107-111, the `MODEL_WINDOWS` lookup and the `startswith` check in `get_context_size()` assume bare model ids, but callers may pass provider-qualified ids like `"openai/gpt-4o"`. Update `get_context_size()` to normalize `self.model` (e.g., strip the provider prefix) before consulting `MODEL_WINDOWS` and before the `self.model.startswith(model_prefix)` check, so that `MODEL_WINDOWS` entries also match provider-prefixed ids. Reference `MODEL_WINDOWS`, `get_context_size`, and the `self.model.startswith(model_prefix)` usage when making the change.

```diff
         "gpt-4-turbo": 96000, # 128,000 actual
         "o1-preview": 96000,  # 128,000 actual
```