
fix: correct duplicate gpt-4o-mini key in MODEL_WINDOWS to gpt-4o #1115

Open
jnMetaCode wants to merge 1 commit into MervinPraison:main from jnMetaCode:fix-model-windows-duplicate-key

Conversation


@jnMetaCode jnMetaCode commented Mar 8, 2026

Problem

The MODEL_WINDOWS dictionary in llm.py has a duplicate key bug:

MODEL_WINDOWS = {
    "gpt-4o-mini": 96000,    # line 110 — should be "gpt-4o"
    "gpt-4o-mini": 96000,    # line 111 — actual gpt-4o-mini
    ...
}

Python dict literals accept repeated keys without error: later entries silently overwrite earlier ones, so only the last value survives. The first entry was clearly intended to be "gpt-4o" (the full model, not the mini variant), but it was typed as "gpt-4o-mini" by mistake.

As a result, there is no context window entry for gpt-4o, so any agent using that model falls through to the default/fallback window size instead of getting the correct 96,000 token window.
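The overwrite is easy to confirm in isolation; the snippet below is a minimal standalone sketch, not the actual llm.py code:

```python
# A dict literal with a repeated key raises no error; the later
# entry simply overwrites the earlier one.
windows = {
    "gpt-4o-mini": 96000,  # intended to be "gpt-4o"
    "gpt-4o-mini": 96000,
}

print(len(windows))         # 1: the first entry was silently discarded
print("gpt-4o" in windows)  # False: no window registered for gpt-4o
```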

Fix

Changed the first entry from "gpt-4o-mini" to "gpt-4o" so both models are properly represented in the lookup table.

How I found it

Noticed it while reviewing the model configuration — the comment structure and surrounding entries make it clear the intent was to list gpt-4o followed by gpt-4o-mini, matching the pattern used for other model families.

Summary by CodeRabbit

New Features

  • Added support for GPT-4o model with a 96,000 token context window.

MODEL_WINDOWS had two entries for gpt-4o-mini and none for gpt-4o.
The first entry was clearly intended to be gpt-4o based on context.
This meant gpt-4o users got no context window match, potentially
causing suboptimal chunking or fallback behavior.

Signed-off-by: JiangNan <1394485448@qq.com>

Review Summary by Qodo

Fix duplicate gpt-4o-mini key in MODEL_WINDOWS

🐞 Bug fix


Walkthroughs

Description
• Fixed duplicate key in MODEL_WINDOWS dictionary
• Changed first gpt-4o-mini entry to gpt-4o
• Ensures gpt-4o model gets correct 96000 token window
• Prevents fallback behavior for gpt-4o users
Diagram
flowchart LR
  A["MODEL_WINDOWS<br/>duplicate keys"] -- "remove duplicate<br/>gpt-4o-mini" --> B["MODEL_WINDOWS<br/>corrected entries"]
  B -- "gpt-4o: 96000" --> C["gpt-4o gets<br/>correct window"]
  B -- "gpt-4o-mini: 96000" --> D["gpt-4o-mini<br/>preserved"]


File Changes

1. src/praisonai-agents/praisonaiagents/llm/llm.py 🐞 Bug fix +1/-1

Correct duplicate gpt-4o-mini key to gpt-4o

• Corrected duplicate dictionary key from gpt-4o-mini to gpt-4o
• Line 110: Changed first entry to properly represent the full gpt-4o model
• Maintains 96000 token window size for gpt-4o (128,000 actual)
• Preserves existing gpt-4o-mini entry on line 111



qodo-code-review bot commented Mar 8, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. Provider-prefixed model mismatch 🐞 Bug ✓ Correctness
Description
When using the LiteLLM path (provider/model strings like openai/gpt-4o), MODEL_WINDOWS keys
(e.g., gpt-4o) will not match because get_context_size() uses
self.model.startswith(model_prefix) without normalizing the provider prefix. This means the
newly-added gpt-4o entry can still fall through to the 4000-token default, so the PR may not fix
the reported behavior for LiteLLM usage.
Code

src/praisonai-agents/praisonaiagents/llm/llm.py[R110-111]

+        "gpt-4o": 96000,                       # 128,000 actual
        "gpt-4o-mini": 96000,            # 128,000 actual
Evidence
Agent explicitly passes provider-prefixed model strings into the LiteLLM-backed LLM(model=...)
constructor, but LLM.get_context_size() only checks startswith() against unprefixed keys like
gpt-4o, and otherwise returns a 4000-token fallback. Therefore openai/gpt-4o will not match
gpt-4o and will fall back.

src/praisonai-agents/praisonaiagents/agent/agent.py[1313-1328]
src/praisonai-agents/praisonaiagents/llm/llm.py[3793-3798]
src/praisonai-agents/praisonaiagents/llm/llm.py[106-113]
src/praisonai-agents/praisonaiagents/__init__.py[573-584]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`LLM.get_context_size()` uses `self.model.startswith(model_prefix)` against `MODEL_WINDOWS` keys that are *not* provider-prefixed. For LiteLLM usage where `self.model` commonly looks like `openai/gpt-4o`, no entries match and the method returns the 4000-token fallback.

### Issue Context
- Agent passes provider/model strings directly into `LLM(model=...)` when the model contains `/`.
- `MODEL_WINDOWS` contains keys like `gpt-4o`, `gpt-4o-mini`, etc.

### Fix Focus Areas
- src/praisonai-agents/praisonaiagents/llm/llm.py[3793-3798]
- src/praisonai-agents/praisonaiagents/llm/llm.py[106-114]

### Suggested implementation notes
- In `get_context_size()`, compute a normalized model id:
 - `raw = self.model.lower().strip()`
 - `name = raw.split('/', 1)[-1]` (strip provider prefix)
- Do exact match first: `if name in self.MODEL_WINDOWS: return ...`
- Then do prefix match (prefer longest prefix; see next finding) and finally fallback.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
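A sketch of those implementation notes, using an illustrative MODEL_WINDOWS subset and the 4000-token fallback described above (assumptions drawn from this review, not the merged implementation):

```python
MODEL_WINDOWS = {
    "gpt-4": 6144,
    "gpt-4o": 96000,
    "gpt-4o-mini": 96000,
}

def get_context_size(model: str) -> int:
    """Resolve a context window for bare or provider-prefixed model ids."""
    raw = (model or "").lower().strip()
    # Strip a LiteLLM-style provider prefix such as "openai/".
    name = raw.split("/", 1)[-1]
    # Exact match first, so "gpt-4o-mini" is never shadowed by a prefix.
    if name in MODEL_WINDOWS:
        return MODEL_WINDOWS[name]
    # Prefix match, longest prefix first, so "gpt-4o" wins over "gpt-4".
    for prefix in sorted(MODEL_WINDOWS, key=len, reverse=True):
        if name.startswith(prefix):
            return MODEL_WINDOWS[prefix]
    return 4000  # safe default

print(get_context_size("openai/gpt-4o"))  # 96000 rather than the 4000 fallback
```

With this normalization, "openai/gpt-4o", bare "gpt-4o", and dated variants like "gpt-4o-2024-08-06" all resolve to the same 96000-token entry.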



Remediation recommended

2. gpt-4o matches gpt-4 🐞 Bug ✓ Correctness
Description
With unprefixed model names, get_context_size() can return the wrong window for gpt-4o* models
because it checks prefixes in insertion order and "gpt-4" is evaluated before "gpt-4o". This can
cause gpt-4o to be treated as gpt-4 and return 6144 instead of 96000.
Code

src/praisonai-agents/praisonaiagents/llm/llm.py[R109-111]

        "gpt-4": 6144,                    # 8,192 actual
-        "gpt-4o-mini": 96000,                  # 128,000 actual
+        "gpt-4o": 96000,                       # 128,000 actual
        "gpt-4o-mini": 96000,            # 128,000 actual
Evidence
MODEL_WINDOWS lists gpt-4 before gpt-4o, and get_context_size() uses startswith() with the
first match winning. Since gpt-4o.startswith(gpt-4) is true, the gpt-4 entry will be returned
first when self.model is unprefixed (possible when using the LLM wrapper via base_url or dict
config).

src/praisonai-agents/praisonaiagents/llm/llm.py[106-113]
src/praisonai-agents/praisonaiagents/llm/llm.py[3793-3798]
src/praisonai-agents/praisonaiagents/agent/agent.py[1263-1283]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`get_context_size()` uses `startswith()` iteration over `MODEL_WINDOWS` in insertion order. Because `gpt-4` appears before `gpt-4o`, the model name `gpt-4o` will match `gpt-4` first and return the wrong (smaller) window.

### Issue Context
This becomes relevant whenever `LLM` is instantiated with unprefixed model names (e.g., via `base_url` usage or dict-based configuration), and any logic depends on `get_context_size()`.

### Fix Focus Areas
- src/praisonai-agents/praisonaiagents/llm/llm.py[3793-3798]
- src/praisonai-agents/praisonaiagents/llm/llm.py[106-113]

### Suggested implementation notes
- After normalizing the model name, do:
 1) exact lookup
 2) prefix lookup using prefixes sorted by length descending (longest-first)
- Alternatively, reorder `MODEL_WINDOWS` to list longer/more-specific prefixes before shorter ones (less robust than sorting by length).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
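The ordering hazard and the longest-prefix remedy can be shown side by side. This is an illustrative standalone sketch: first_match mirrors the insertion-order startswith loop described above, longest_match the suggested fix.

```python
MODEL_WINDOWS = {
    "gpt-4": 6144,        # listed before its more specific siblings
    "gpt-4o": 96000,
    "gpt-4o-mini": 96000,
}

def first_match(model: str) -> int:
    # Mirrors the current loop: first startswith() match wins.
    for prefix, size in MODEL_WINDOWS.items():
        if model.startswith(prefix):
            return size
    return 4000

def longest_match(model: str) -> int:
    # Order-independent: try the most specific prefix first.
    for prefix in sorted(MODEL_WINDOWS, key=len, reverse=True):
        if model.startswith(prefix):
            return MODEL_WINDOWS[prefix]
    return 4000

print(first_match("gpt-4o"))    # 6144: wrongly captured by the "gpt-4" entry
print(longest_match("gpt-4o"))  # 96000: the intended entry wins
```

Sorting by length makes the lookup robust to insertion order, unlike the reordering alternative mentioned in the notes.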



@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a configuration error within the MODEL_WINDOWS dictionary, ensuring that the gpt-4o model is correctly mapped to its intended context window size. Previously, a duplicate key entry for gpt-4o-mini inadvertently overshadowed the gpt-4o configuration, potentially leading to incorrect fallback behavior for agents utilizing this model. The change guarantees accurate model configuration and consistent behavior.

Highlights

  • Corrected MODEL_WINDOWS entry for gpt-4o: The MODEL_WINDOWS dictionary in llm.py had a duplicate key for gpt-4o-mini. The first instance, which was intended for gpt-4o, was incorrectly set to gpt-4o-mini. This error meant that gpt-4o was not properly recognized with its correct context window size. The fix involves changing the incorrect key to gpt-4o.


Changelog
  • src/praisonai-agents/praisonaiagents/llm/llm.py
    • Corrected the MODEL_WINDOWS dictionary entry by changing a duplicate gpt-4o-mini key to gpt-4o to ensure proper model configuration.
Activity
  • No activity to report yet.


coderabbitai bot commented Mar 8, 2026

📝 Walkthrough

Walkthrough

The MODEL_WINDOWS mapping in the LLM class was updated to include a new entry for the "gpt-4o" model with a context window size of 96000 tokens, with "gpt-4o-mini" reordered to follow immediately after.

Changes

  • Model Configuration — src/praisonai-agents/praisonaiagents/llm/llm.py: Added "gpt-4o" model with 96000 token window size and reordered "gpt-4o-mini" in the MODEL_WINDOWS mapping.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 A model new joins the window queue,
Gpt-4o arrives with tokens so true,
Ninety-six thousand contexts to explore,
Our agents grow wiser, always wanting more! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main change: fixing a duplicate gpt-4o-mini key in MODEL_WINDOWS by renaming it to gpt-4o.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request corrects a typo in the MODEL_WINDOWS dictionary where a duplicate gpt-4o-mini key was used instead of gpt-4o. This fix ensures that the gpt-4o model is assigned the correct context window size, resolving the bug described. The change is accurate and addresses the issue effectively. I have no further feedback on the change itself.

As a side note, while reviewing this dictionary, I noticed that some other model context window sizes might be outdated (e.g., claude-3-5-sonnet). It would be beneficial to review all values in MODEL_WINDOWS in a separate effort to ensure they are up-to-date.

Note: Security Review did not run due to the size of the PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/praisonai-agents/praisonaiagents/llm/llm.py`:
- Around line 107-111: The MODEL_WINDOWS lookup and the startswith check in
get_context_size assume bare model ids but callers may pass provider-qualified
ids like "openai/gpt-4o"; update get_context_size to normalize self.model (e.g.,
strip provider prefixes before using MODEL_WINDOWS and before the
self.model.startswith(model_prefix) checks) so entries in MODEL_WINDOWS are
matched for provider-prefixed ids; reference MODEL_WINDOWS, get_context_size and
the self.model.startswith(model_prefix) usage when making the change.


📥 Commits

Reviewing files that changed from the base of the PR and between 16f9325 and ad8ee02.

📒 Files selected for processing (1)
  • src/praisonai-agents/praisonaiagents/llm/llm.py

Comment on lines 107 to 111

     MODEL_WINDOWS = {
         # OpenAI
         "gpt-4": 6144,                    # 8,192 actual
-        "gpt-4o-mini": 96000,            # 128,000 actual
+        "gpt-4o": 96000,                 # 128,000 actual
         "gpt-4o-mini": 96000,            # 128,000 actual

⚠️ Potential issue | 🟠 Major

Normalize provider-qualified model names before relying on this entry.

This fixes bare "gpt-4o", but Line 3795 still matches with self.model.startswith(model_prefix). If callers pass LiteLLM-style ids like openai/gpt-4o or openai/gpt-4o-mini—which this class already acknowledges elsewhere—this table is skipped and get_context_size() falls back to 4000. That leaves the context-window bug in place for provider-prefixed models.

💡 Proposed fix
     def get_context_size(self) -> int:
         """Get safe input size limit for this model"""
-        for model_prefix, size in self.MODEL_WINDOWS.items():
-            if self.model.startswith(model_prefix):
+        normalized_model = self.model.split("/", 1)[-1] if self.model else ""
+        for model_prefix, size in self.MODEL_WINDOWS.items():
+            if normalized_model.startswith(model_prefix):
                 return size
         return 4000  # Safe default
🧰 Tools
🪛 Ruff (0.15.4)

[warning] 107-147: Mutable default value for class attribute

(RUF012)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/praisonai-agents/praisonaiagents/llm/llm.py` around lines 107 - 111, The
MODEL_WINDOWS lookup and the startswith check in get_context_size assume bare
model ids but callers may pass provider-qualified ids like "openai/gpt-4o";
update get_context_size to normalize self.model (e.g., strip provider prefixes
before using MODEL_WINDOWS and before the self.model.startswith(model_prefix)
checks) so entries in MODEL_WINDOWS are matched for provider-prefixed ids;
reference MODEL_WINDOWS, get_context_size and the
self.model.startswith(model_prefix) usage when making the change.
