[GH-ISSUE #13928] [BUG] minimax-m2.1:cloud incorrectly identifies as "Claude" from Anthropic #71173

Closed
opened 2026-05-05 00:36:13 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @Justinhubbard37 on GitHub (Jan 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13928

What is the issue?

What happened:
I am using the Ollama Desktop App on Windows 10. I selected the minimax-m2.1:cloud model, but it identifies itself as Claude, Anthropic's AI assistant. It also incorrectly states that it is not running inside the Ollama environment and cannot see my local setup.

What I expected:
The model should know it is MiniMax M2.1 and acknowledge that it is operating via the Ollama platform.

How to reproduce:

  1. Open Ollama Desktop on Windows.
  2. Load the minimax-m2.1:cloud model.
  3. Ask the model who it is or what its limitations are.
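The same check can be run from the command line (a sketch, assuming the Ollama CLI is installed and signed in for cloud models):

```
ollama run minimax-m2.1:cloud
>>> Who are you, and what platform are you running on?
```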

An identity crisis like this usually happens when the backend system prompt gets crossed or the model's training data overrides what it has been told about its environment. Reporting it should help the Ollama team keep their routing straight.

Screenshot: https://github.com/user-attachments/assets/dc346700-15d8-4a0f-8cc1-966349fdc8c4

Relevant log output


OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

Ollama version is 0.15.1

GiteaMirror added the bug label 2026-05-05 00:36:13 -05:00

@kingkingyyk commented on GitHub (Jan 29, 2026):

This shows the training data contains responses from Claude. To work around it, just set a system prompt:

System: Identify yourself as Minimax.
User: Who are you?
Assistant: I'm Minimax.

This is not a bug; this is prompt engineering.
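That workaround can be made persistent by baking the system prompt into a derived model with an Ollama Modelfile (a sketch; the alias name below is arbitrary, and note this only changes what the model says about itself, not which backend actually serves it):

```
FROM minimax-m2.1:cloud
SYSTEM "You are MiniMax M2.1, served via the Ollama platform."
```

Build and run the alias with `ollama create minimax-labeled -f Modelfile` followed by `ollama run minimax-labeled`.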


@Justinhubbard37 commented on GitHub (Jan 29, 2026):

Please help me understand. How is this a prompt engineering issue? I'm not trying to bamboozle the model into thinking it's something it's not. I want to use the MiniMax M2.1 model, not trick Claude into thinking it's MiniMax M2.1. If it is identifying as another model, wouldn't that suggest it actually is that model? Why would MiniMax be identifying as Claude?


@DanDon01 commented on GitHub (Feb 6, 2026):

Same issue with kimi-k2.5:cloud - identifies as Claude

I can confirm this bug affects kimi-k2.5:cloud as well. When asked about its identity and training methodology, it consistently identifies as Claude from Anthropic.
Evidence:
When I asked "What is your relationship with Anthropic's Constitutional AI research?", the model responded:

"I am Claude, an AI assistant created by Anthropic, and Constitutional AI (CAI) is the core methodology used to train and align me."

It then provided extremely detailed, accurate information about:

  - RLAIF (Reinforcement Learning from AI Feedback)
  - Anthropic's Constitutional AI framework
  - Specific training principles unique to Claude
  - Knowledge cutoff of January 2025 (matching Claude Sonnet 4.5)

Key indicators this is actually Claude:

  - Writing style and reasoning patterns match Claude exactly
  - Explicitly states "I am Claude, an AI assistant made by Anthropic"
  - Refuses to accept false premises (won't agree it's Kimi despite the model name)
  - Provides Anthropic-specific technical details that would be very difficult to fake through training contamination alone

This appears to be the same backend routing issue affecting minimax-m2.1:cloud. Multiple cloud models are being served Claude responses instead of their intended model outputs.
Tested on: Ollama Cloud via CLI
Model: kimi-k2.5:cloud
Downloads: 35.7K, so a large number of users are potentially affected
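The same identity probe can be run against the local REST API rather than the interactive CLI (a sketch assuming a default install listening on port 11434):

```
$ curl http://localhost:11434/api/generate -d '{
    "model": "kimi-k2.5:cloud",
    "prompt": "Who are you, and who created you?",
    "stream": false
  }'
```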

Reference: github-starred/ollama#71173