[GH-ISSUE #14220] MiniMax M2.5 does not work in coding agents #71320

Closed
opened 2026-05-05 01:12:17 -05:00 by GiteaMirror · 5 comments

Originally created by @Cephei-OpenSource on GitHub (Feb 12, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14220

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

While MiniMax M2.1:cloud works fine with coding agents such as Continue in VSCodium or OpenCode, MiniMax M2.5:cloud does not. The new model works in Open WebUI, but with agents it throws an error: "invalid params, invalid role: thinking". Log output of the Continue extension for VSCodium is attached. The config should be fine: if I just replace M2.5 with M2.1 in the config, all is good. Kind regards

Relevant log output

```shell
[@continuedev] error: HTTP 400 Bad Request from https://api.ollama.com/api/chat
{"error":"invalid params, invalid role: thinking"}
{"context":"llm_fetch","url":"https://api.ollama.com/api/chat","method":"POST","model":"minimax-m2.5:cloud","provider":"ollama"}
[@continuedev] error: HTTP 400 Bad Request from https://api.ollama.com/api/chat
{"error":"invalid params, invalid role: thinking"}
{"context":"llm_stream_chat","model":"minimax-m2.5:cloud","provider":"ollama","useOpenAIAdapter":false,"streamEnabled":true,"templateMessages":false}
```

OS

Linux

GPU

No response

CPU

AMD

Ollama version

0.15.6

GiteaMirror added the bug label 2026-05-05 01:12:17 -05:00

@Cephei-OpenSource commented on GitHub (Feb 12, 2026):

I noticed that 0.16.0 came out, addressing MiniMax M2.5 specifically. I installed the new version, but the problem persists: same errors. My Ollama version is now 0.16.0, but that did not help. Kind regards


@Cephei-OpenSource commented on GitHub (Feb 12, 2026):

With some further testing, the issue got even stranger. Occasionally it does work, in roughly 1 in 6 tries by my estimate. Sometimes it gives an answer, but mostly it fails with the mentioned bad-role ("thinking") error. If the prompt is constructed one way it fails; if (rarely) it is constructed some other way, it works. Kind regards
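
One plausible reading of this intermittency, offered as an assumption rather than a confirmed diagnosis: only conversations whose history already contains a round-tripped reasoning turn trip the role validation, so fresh prompts succeed while follow-ups fail. Ollama's chat API carries reasoning in a `thinking` field on the assistant message rather than as a separate role, so a history shaped like the following sketch should pass validation:

```python
# Sketch of a message history the API accepts (assumption: reasoning
# belongs in the assistant message's "thinking" field, not in a
# dedicated "thinking" role).
history_ok = [
    {"role": "user", "content": "Refactor this function."},
    {
        "role": "assistant",
        "content": "Here is the refactored version...",
        "thinking": "First, find the duplication...",  # a field, not a role
    },
    {"role": "user", "content": "Now add unit tests."},
]
```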


@Cephei-OpenSource commented on GitHub (Feb 13, 2026):

I installed 0.16.1 today. Problem persists. Kind regards


@BruceMacD commented on GitHub (Feb 13, 2026):

Thanks for the report @Cephei-OpenSource, I just deployed a change that I believe should fix this.


@Cephei-OpenSource commented on GitHub (Feb 13, 2026):

Dear Bruce, thanks a lot for taking care of the issue. I can't say where you deployed the fix; was that to Ollama Cloud? Anyway, I just tested, and now it is working. The issue seems to be resolved. Great! For information: there was an update to Codium in the meantime, but none to Ollama and none to Continue, so I assume you fixed it on your side in Ollama Cloud. By the way, thanks for this great service, which we use a lot! Kind regards
