
feat: add MiniMax as LLM provider #1571

Open
octo-patch wants to merge 1 commit into Portkey-AI:main from octo-patch:feature/add-minimax-provider
Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider to the Portkey AI Gateway, enabling seamless routing to MiniMax's advanced language models via their OpenAI-compatible API.

What's included

  • Provider implementation: api.ts, chatComplete.ts, index.ts following the established provider pattern with streaming support and response transforms
  • Models: MiniMax-M2.7 and MiniMax-M2.7-highspeed (204K context window, up to 192K output)
  • Registration: Added to globals, provider registry, providers.json, and models.json
  • Tests: 22 unit tests + 3 integration tests covering config, response transforms, streaming, and real API validation
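Since MiniMax's API is OpenAI-compatible, a client request through the gateway looks like any other OpenAI-style chat completion. The sketch below is illustrative only: the `x-portkey-provider` header is how Portkey selects a provider, the model name comes from this PR, and the API-key placeholder is not a real credential.

```typescript
// Hypothetical request a client might send through the gateway to reach
// MiniMax. Header and body shapes follow the OpenAI-compatible convention;
// exact gateway behavior may differ from this sketch.
const headers = {
  'Content-Type': 'application/json',
  'x-portkey-provider': 'minimax',          // route the request to MiniMax
  Authorization: 'Bearer $MINIMAX_API_KEY', // placeholder, not a real key
};

const body = {
  model: 'MiniMax-M2.7',                    // or 'MiniMax-M2.7-highspeed'
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: false,
};
```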

Implementation details

  • Uses open-ai-base helper for OpenAI-compatible chat completion params
  • Base URL: https://api.minimax.io/v1 with Bearer auth
  • Temperature default: 1.0
  • Excludes unsupported params: logit_bias, logprobs, top_logprobs
  • Full streaming SSE support with proper chunk transforms
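The details above can be sketched as a small provider config. This is a self-contained illustration, not the actual Portkey `ProviderAPIConfig` interface: the names `MiniMaxAPIConfig` and `transformParams` are assumptions, while the base URL, Bearer auth, temperature default, and excluded params come from the PR description.

```typescript
// Hypothetical sketch of the MiniMax provider config described above.
// Interface names are illustrative; only the values come from the PR.
const MINIMAX_UNSUPPORTED_PARAMS = ['logit_bias', 'logprobs', 'top_logprobs'];

const MiniMaxAPIConfig = {
  getBaseURL: () => 'https://api.minimax.io/v1',
  headers: ({ apiKey }: { apiKey: string }) => ({
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  }),
  getEndpoint: ({ fn }: { fn: string }) =>
    fn === 'chatComplete' ? '/chat/completions' : '',
};

// Apply the temperature default and strip params MiniMax does not support
// before forwarding the request upstream.
function transformParams(params: Record<string, unknown>) {
  const out: Record<string, unknown> = { temperature: 1.0, ...params };
  for (const p of MINIMAX_UNSUPPORTED_PARAMS) delete out[p];
  return out;
}
```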

Test plan

  • 22 unit tests pass (provider config, API config, response transforms, stream chunk transforms)
  • 3 integration tests pass (non-streaming completion, streaming completion, error handling)
  • TypeScript compilation clean (no new errors)
  • Prettier formatting check passes
  • Build succeeds (rollup -c)
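The stream chunk transform exercised by these tests can be sketched as follows. This is a minimal illustration of SSE pass-through for an OpenAI-compatible stream; the function name and the `provider` tagging are assumptions, not Portkey's actual API.

```typescript
// Illustrative SSE chunk transform for streaming responses. Because MiniMax
// is OpenAI-compatible, the transform mostly passes chunks through, here
// tagging each one with the provider name before re-emitting it.
function transformStreamChunk(rawChunk: string): string {
  const line = rawChunk.trim();
  if (!line.startsWith('data:')) return '';            // skip comments/keep-alives
  const payload = line.slice('data:'.length).trim();
  if (payload === '[DONE]') return 'data: [DONE]\n\n'; // terminal sentinel
  const parsed = JSON.parse(payload);
  parsed.provider = 'minimax';                         // tag chunk with provider
  return `data: ${JSON.stringify(parsed)}\n\n`;
}
```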

Add MiniMax as a first-class LLM provider to the Portkey AI Gateway,
supporting MiniMax-M2.7 and MiniMax-M2.7-highspeed models via their
OpenAI-compatible API at https://api.minimax.io/v1.

- Add provider files (api.ts, chatComplete.ts, index.ts) with
  streaming support and response transforms
- Register provider in globals, provider index, providers.json,
  and models.json
- Use open-ai-base for chat completion params with temperature
  default of 1.0
- Exclude unsupported params: logit_bias, logprobs, top_logprobs
- Add 22 unit tests covering config, transforms, and streaming
- Add 3 integration tests with real API validation
- Add test variables entry for MINIMAX_API_KEY
@narengogi
Member

@octo-patch please sign your commits

@octo-patch
Author

Hi @narengogi, thanks for the note! I'll rebase and sign the commits. Will push an update shortly.
