
feat: add MiniMax as LLM provider with M2.7 model support #2639

Open

octo-patch wants to merge 2 commits into coze-dev:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 17, 2026

Summary

  • Add MiniMax as a first-class LLM provider via OpenAI-compatible eino-ext integration
  • Support MiniMax-M2.5, M2.5-highspeed, M2.7, and M2.7-highspeed models
  • M2.7 is the latest model with advanced reasoning capabilities and 16K output tokens
  • Temperature clamping (MiniMax requires temperature > 0; see the sketch after this list)
  • Response format forced to text (MiniMax does not support JSON mode)
  • Full model builder with unit and integration tests
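
A minimal sketch of the two request-side adjustments above, assuming a hypothetical helper name (clampMiniMaxParams) rather than the exact code in minimax.go:

    package modelbuilder

    // clampMiniMaxParams illustrates the MiniMax-specific request adjustments.
    // The function name and signature are assumptions for this sketch.
    func clampMiniMaxParams(temperature float32) (float32, string) {
        // MiniMax rejects temperature <= 0, so clamp into (0, 1.0].
        if temperature <= 0 {
            temperature = 0.01
        }
        if temperature > 1.0 {
            temperature = 1.0
        }
        // MiniMax has no JSON mode, so the response format is always plain text.
        return temperature, "text"
    }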

Changed Files

  • backend/bizpkg/llm/modelbuilder/minimax.go - MiniMax model builder
  • backend/bizpkg/llm/modelbuilder/minimax_test.go - Unit + integration tests
  • backend/bizpkg/llm/modelbuilder/model_builder.go - Registration (see the sketch after this list)
  • backend/conf/model/model_meta.json - Model metadata with M2.5/M2.7 entries
  • backend/conf/model/template/model_template_minimax.yaml - Template config (M2.7 default)
  • helm/charts/opencoze/files/conf/model/model_meta.json - Helm chart model metadata
  • Multiple auto-generated IDL files with ModelClass_MiniMax enum
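
The registration is likely a one-line entry in a factory map. The sketch below is self-contained and uses assumed names (Builder, builderFactories, newMiniMaxBuilder); the actual identifiers in model_builder.go may differ:

    package modelbuilder

    // Builder stands in for the repository's model builder abstraction
    // (the interface name and method are assumptions for this sketch).
    type Builder interface {
        Build(modelName string) error
    }

    // builderFactories is a hypothetical registration map keyed by model class.
    var builderFactories = map[string]func() Builder{
        "minimax": newMiniMaxBuilder, // entry added by this PR
    }

    type miniMaxBuilder struct{}

    func newMiniMaxBuilder() Builder { return &miniMaxBuilder{} }

    func (b *miniMaxBuilder) Build(modelName string) error {
        // The real implementation wires up the OpenAI-compatible eino-ext client here.
        return nil
    }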

Test Plan

  • Unit tests for temperature clamping (a table-driven sketch follows this list)
  • Unit tests for model builder config
  • Unit tests for model builder registration
  • Integration tests for M2.5, M2.5-highspeed, M2.7, M2.7-highspeed models
  • JSON and YAML file validation
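
As one illustration, a table-driven unit test over the hypothetical clampMiniMaxParams helper from the summary sketch might look like the following; the real cases in minimax_test.go will differ:

    package modelbuilder

    import "testing"

    func TestClampMiniMaxParams(t *testing.T) {
        cases := []struct {
            name string
            in   float32
            want float32
        }{
            {"zero temperature is raised above 0", 0, 0.01},
            {"in-range temperature is unchanged", 0.7, 0.7},
            {"temperature above 1.0 is capped", 1.5, 1.0},
        }
        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                got, format := clampMiniMaxParams(c.in)
                if got != c.want {
                    t.Fatalf("temperature = %v, want %v", got, c.want)
                }
                if format != "text" {
                    t.Fatalf("response format = %q, want \"text\"", format)
                }
            })
        }
    }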

Commit 1:

Add MiniMax (MiniMax-M2.5 and MiniMax-M2.5-highspeed) as a first-class
LLM provider using the OpenAI-compatible API endpoint.

Changes:
- Add minimax.go model builder with temperature clamping to (0, 1.0]
- Register MiniMax in model builder factory map
- Add ProtocolMiniMax protocol constant and mapping (see the sketch after these commit notes)
- Add MiniMax to provider list with icon and description
- Add model_template_minimax.yaml with 204K context config
- Add MiniMax models to model_meta.json (default, M2.5, M2.5-highspeed)
- Add comprehensive unit tests (7 test cases) and integration tests
- Update README, README.zh_CN, and CLAUDE.md with MiniMax in provider list

Commit 2:

Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model entries to the model
metadata and update the default template model to M2.7 with advanced
reasoning capabilities and 16K output tokens.
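
The new protocol constant and its mapping could be as small as the following sketch; the Protocol type, the existing values, and the display-name map are assumptions for illustration, not code taken from this PR:

    package modelbuilder

    // Protocol identifies which upstream API dialect a builder speaks.
    type Protocol string

    const (
        ProtocolOpenAI  Protocol = "openai"
        ProtocolMiniMax Protocol = "minimax" // constant added by this PR
    )

    // protocolDisplayNames shows one way the new protocol could surface
    // in the provider list (hypothetical mapping).
    var protocolDisplayNames = map[Protocol]string{
        ProtocolOpenAI:  "OpenAI",
        ProtocolMiniMax: "MiniMax",
    }
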
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ octo-patch
❌ PR Bot


PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@octo-patch octo-patch changed the title from "feat: add MiniMax as LLM provider" to "feat: add MiniMax as LLM provider with M2.7 model support" on Mar 18, 2026


2 participants