# MiniMax-M2.7 Guide
Implementation-accurate integration, pricing, and rollout guide for MiniMax-M2.7 in this deployment.
This page documents how MiniMax-M2.7 is implemented in this deployment. It focuses on API compatibility, model resolution behavior, pricing controls, and safe rollout.
If you need the official model narrative and benchmark report, see:
## What You Get by Enabling M2.7

MiniMax-M2.7 is wired as a first-class registered model in `src/lib/minimax/models.ts`.
- Model ID: `MiniMax-M2.7`
- Upstream model ID: `MiniMax-M2.7`
- Billing model ID: `MiniMax-M2.7` (independent attribution)
- Aliases: `codex-MiniMax-M2.7`, `MiniMax M2.7`
- Channels: supported in web chat, OpenAI-compatible API, and Anthropic-compatible API
This means you can switch existing integrations to M2.7 by changing only the model value.
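The registry entry above can be pictured roughly as follows. This is an illustrative sketch only: the interface name and field names are assumptions, and the real definitions live in `src/lib/minimax/models.ts`.

```typescript
// Hypothetical shape of a registered model entry; the actual types in
// src/lib/minimax/models.ts may differ.
interface RegisteredModel {
  id: string;              // public model ID accepted in requests
  upstreamModelId: string; // ID forwarded to the upstream MiniMax API
  billingModelId: string;  // ID used for usage attribution and charging
  aliases: string[];       // alternate names that resolve to this model
  channels: string[];      // channels where the model is exposed
}

const miniMaxM27: RegisteredModel = {
  id: "MiniMax-M2.7",
  upstreamModelId: "MiniMax-M2.7",
  billingModelId: "MiniMax-M2.7",
  aliases: ["codex-MiniMax-M2.7", "MiniMax M2.7"],
  channels: ["web_chat", "api_openai", "api_anthropic"],
};
```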
## API Compatibility
M2.7 uses the exact same request/response contracts as other MiniMax models in this project.
- `POST /api/v1/chat/completions` (OpenAI-compatible)
- `POST /api/v1/messages` (Anthropic-compatible)
- `POST /api/minimax/chat` (hosted web chat backend)
No endpoint path changes or schema changes are required.
## Pricing and Context Controls (Environment Variables)
Unlike M2, M2.1, and M2.5 (fixed in code), M2.7 pricing and context are configurable via environment variables.
| Variable | Purpose | Default |
|---|---|---|
| `MINIMAX_M27_INPUT_RATE_USD` | Input price (USD per 1M tokens) | 0.5 |
| `MINIMAX_M27_OUTPUT_RATE_USD` | Output price (USD per 1M tokens) | 1.5 |
| `MINIMAX_M27_MAX_TOKENS` | Advertised capability max tokens in `/api/v1/models` | 200000 |
Validation behavior:
- Invalid or negative price values fall back to defaults.
- Invalid or non-positive `MINIMAX_M27_MAX_TOKENS` falls back to the default.
- Fallbacks are logged with warnings.
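The validation rules above can be sketched like this. The helper names are illustrative, not the actual implementation; only the fallback semantics (negative prices rejected, non-positive max tokens rejected) come from this page.

```typescript
// Sketch of the documented fallback rules for M2.7 environment variables.
// Prices must be >= 0; max tokens must be > 0. Anything else falls back.
function parseRate(raw: string | undefined, fallback: number): number {
  const value = Number(raw);
  if (raw === undefined || raw.trim() === "" || !Number.isFinite(value) || value < 0) {
    console.warn(`Invalid rate "${raw}", falling back to ${fallback}`);
    return fallback;
  }
  return value;
}

function parseMaxTokens(raw: string | undefined, fallback: number): number {
  const value = Number(raw);
  if (raw === undefined || raw.trim() === "" || !Number.isFinite(value) || value <= 0) {
    console.warn(`Invalid max tokens "${raw}", falling back to ${fallback}`);
    return fallback;
  }
  return value;
}

const inputRate = parseRate(process.env.MINIMAX_M27_INPUT_RATE_USD, 0.5);
const outputRate = parseRate(process.env.MINIMAX_M27_OUTPUT_RATE_USD, 1.5);
const maxTokens = parseMaxTokens(process.env.MINIMAX_M27_MAX_TOKENS, 200000);
```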
## Defaults and Fallback Logic
Channel defaults remain unchanged after adding M2.7:
- `web_chat` default: `MiniMax-M2.5`
- `api_openai` default: `MiniMax-M2`
- `api_anthropic` default: `MiniMax-M2`
Model resolution behavior:
- If `model` is omitted, the channel default is used.
- If `model` is an alias (for example `codex-MiniMax-M2.7`), it resolves to `MiniMax-M2.7`.
- If `model` is unknown:
  - Default behavior: fall back to the channel default.
  - Strict behavior (`MINIMAX_MODEL_STRICT_API=true` on API routes): return a model error.
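The resolution rules above can be sketched as a single function. The alias table and channel defaults are taken from this page; the function itself is illustrative, not the deployed code.

```typescript
// Sketch of model resolution: omitted -> channel default, alias -> canonical,
// unknown -> channel default (or an error in strict mode).
const CHANNEL_DEFAULTS: Record<string, string> = {
  web_chat: "MiniMax-M2.5",
  api_openai: "MiniMax-M2",
  api_anthropic: "MiniMax-M2",
};

const ALIASES: Record<string, string> = {
  "codex-MiniMax-M2.7": "MiniMax-M2.7",
  "MiniMax M2.7": "MiniMax-M2.7",
};

const KNOWN_MODELS = new Set([
  "MiniMax-M2", "MiniMax-M2.1", "MiniMax-M2.5", "MiniMax-M2.7",
]);

function resolveModel(requested: string | undefined, channel: string, strict = false): string {
  if (!requested) return CHANNEL_DEFAULTS[channel];  // omitted -> channel default
  const canonical = ALIASES[requested] ?? requested; // alias -> canonical ID
  if (KNOWN_MODELS.has(canonical)) return canonical;
  if (strict) throw new Error(`Unknown model: ${requested}`); // MINIMAX_MODEL_STRICT_API=true
  return CHANNEL_DEFAULTS[channel];                  // unknown -> channel default
}
```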
This design keeps backward compatibility for old clients while allowing controlled adoption of M2.7.
## Discover Live Effective Values

Always inspect live model metadata via `/api/v1/models` before cost planning or rollout.
The response includes model-level pricing, capabilities, and per-channel defaults.
Treat this endpoint as the source of truth for your running environment.
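A minimal lookup helper might look like the following. The response field names (`data`, `input_rate_usd_per_1m`, and so on) are assumptions about the `/api/v1/models` payload shape; verify them against your running deployment before relying on them.

```typescript
// Hypothetical shape of one /api/v1/models entry; field names are assumed.
interface ModelInfo {
  id: string;
  input_rate_usd_per_1m: number;
  output_rate_usd_per_1m: number;
  max_tokens: number;
}

// Pure helper so the lookup logic is testable without a live server.
function pickModel(body: { data?: ModelInfo[] }, id: string): ModelInfo | undefined {
  return body.data?.find((m) => m.id === id);
}

// Live lookup against a running deployment (Node 18+ global fetch).
async function fetchM27Info(baseUrl: string, apiKey: string): Promise<ModelInfo | undefined> {
  const res = await fetch(`${baseUrl}/api/v1/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return pickModel(await res.json(), "MiniMax-M2.7");
}
```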
## Request Examples

### OpenAI-compatible
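A minimal sketch of an OpenAI-compatible call targeting M2.7. The payload follows the standard OpenAI chat-completions schema; the `Bearer` auth header and base URL are assumptions about your deployment's configuration.

```typescript
// OpenAI-compatible chat completion request against this deployment.
const openAiPayload = {
  model: "MiniMax-M2.7",
  messages: [{ role: "user", content: "Summarize this deployment guide." }],
};

async function callOpenAiCompatible(baseUrl: string, apiKey: string) {
  const res = await fetch(`${baseUrl}/api/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // auth scheme may vary per deployment
    },
    body: JSON.stringify(openAiPayload),
  });
  return res.json();
}
```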
### Anthropic-compatible
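A minimal sketch of the same request through the Anthropic-compatible endpoint. The `x-api-key` and `anthropic-version` headers follow the standard Anthropic Messages convention; whether this deployment requires them is an assumption to verify.

```typescript
// Anthropic-compatible messages request against this deployment.
const anthropicPayload = {
  model: "MiniMax-M2.7",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Summarize this deployment guide." }],
};

async function callAnthropicCompatible(baseUrl: string, apiKey: string) {
  const res = await fetch(`${baseUrl}/api/v1/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey,               // header scheme may vary per deployment
      "anthropic-version": "2023-06-01", // standard Anthropic version header
    },
    body: JSON.stringify(anthropicPayload),
  });
  return res.json();
}
```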
These are equivalent and resolve to `MiniMax-M2.7`:

- `MiniMax-M2.7`
- `codex-MiniMax-M2.7`
- `MiniMax M2.7`
## Billing Semantics

Usage recording and charging bind to `billingModelId`, not the incoming alias string. For M2.7, billing attribution is independent:
- Usage records persist under model `MiniMax-M2.7`.
- Charge calculation uses the resolved M2.7 input/output rates.
- Dashboard breakdowns can compare M2, M2.1, M2.5, and M2.7 consumption separately.
## Safe Rollout Checklist
- Keep current defaults unchanged to avoid regressions.
- Route a small percentage of traffic with `model: "MiniMax-M2.7"`.
- Compare output quality, latency, and token spend versus baseline.
- Validate `/api/v1/models` values in production before budget forecasting.
- Expand traffic gradually and keep a quick rollback path to the default models.
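One way to implement the percentage-based rollout step is a deterministic hash of the caller ID, so a given user always sees the same model and rollback is a single config change. This router is a sketch of that pattern, not part of this deployment:

```typescript
// Map a user ID to a stable bucket in [0, 100).
function hashToPercent(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 100;
}

// Route rolloutPercent% of users to M2.7; set rolloutPercent to 0 to roll back.
function chooseModel(userId: string, rolloutPercent: number, fallback = "MiniMax-M2"): string {
  return hashToPercent(userId) < rolloutPercent ? "MiniMax-M2.7" : fallback;
}
```

Because the bucket is derived from the user ID rather than a random draw, quality and latency comparisons against the baseline stay consistent per user across requests.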