
feat: add MiniMax as a native LLM and embedding provider (M2.7)#1677

Open
octo-patch wants to merge 2 commits into assafelovic:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch octo-patch commented Mar 14, 2026

Summary

  • Add MiniMax as a native LLM provider using their OpenAI-compatible API (https://api.minimax.io/v1)
  • Add MiniMax embedding support (e.g. embo-01) via the same OpenAI-compatible endpoint
  • Recommend MiniMax-M2.7 and M2.7-highspeed as the default models (upgraded from M2.5)
  • Add documentation with configuration examples for all MiniMax models

Details

MiniMax offers powerful large language models with an OpenAI-compatible API. This PR adds first-class support so users can configure MiniMax models directly:

MINIMAX_API_KEY=[Your Key]
FAST_LLM=minimax:MiniMax-M2.7-highspeed
SMART_LLM=minimax:MiniMax-M2.7
STRATEGIC_LLM=minimax:MiniMax-M2.7
EMBEDDING=minimax:embo-01
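For reference, each of these values follows GPT Researcher's `provider:model` convention. A minimal sketch of how such a value splits into its two parts (the helper name `parse_llm_config` is illustrative, not the project's actual function):

```python
def parse_llm_config(value: str) -> tuple[str, str]:
    """Split a 'provider:model' config string, e.g. 'minimax:MiniMax-M2.7'.

    Splits at the first colon only, since model names themselves
    contain no colon. Helper name is hypothetical.
    """
    provider, _, model = value.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {value!r}")
    return provider, model

print(parse_llm_config("minimax:MiniMax-M2.7-highspeed"))
```

Running this prints `('minimax', 'MiniMax-M2.7-highspeed')`, i.e. the provider key and the model name that get handed to the MiniMax client.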

Available Models

  • MiniMax-M2.7 — latest flagship model with improved reasoning
  • MiniMax-M2.7-highspeed — optimized for speed
  • MiniMax-M2.5 — 204K context, general-purpose model
  • MiniMax-M2.5-highspeed — 204K context, optimized for speed

Implementation

  • LLM: Uses langchain_openai.ChatOpenAI with openai_api_base=https://api.minimax.io/v1
  • Embeddings: Uses langchain_openai.OpenAIEmbeddings with the same base URL
  • No new dependencies required — reuses existing langchain-openai package

Test Plan

  • Verified MiniMax-M2.7 model responds correctly via API
  • Verified MiniMax-M2.7-highspeed model responds correctly via API
  • Verified embedding endpoint works with embo-01 model
  • Documentation updated with configuration examples

PR Bot added 2 commits March 14, 2026 23:00

Add direct support for MiniMax models (MiniMax-M2.5, MiniMax-M2.5-highspeed)
via their OpenAI-compatible API. This includes both LLM chat and embedding
(embo-01) support, configured through the MINIMAX_API_KEY environment variable.

Update documentation to recommend MiniMax-M2.7 and M2.7-highspeed as
the default models, while keeping the M2.5 variants listed as alternatives.
M2.7 offers improved reasoning capabilities over M2.5.
@octo-patch octo-patch changed the title feat: add MiniMax as a native LLM and embedding provider feat: add MiniMax as a native LLM and embedding provider (M2.7) Mar 18, 2026
@octo-patch
Author

Thanks for the interest! Let me know if you have any questions about the MiniMax integration.
