[GH-ISSUE #5787] ollama run deepseek-coder-v2 creates gibberish output #50116

Closed
opened 2026-04-28 14:10:33 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @flo-ivar on GitHub (Jul 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5787

What is the issue?

Hi,

I am trying to run the 16b deepseek-coder-v2 model with ollama, which leads to "gibberish" output.
Strangely enough, it works after a fresh download, but after trying to run it in Aider it doesn't.

![image](https://github.com/user-attachments/assets/9e6df4f7-dc47-49bc-a306-2e73c73b4098)

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.2.7

GiteaMirror added the bug label 2026-04-28 14:10:33 -05:00

@RobinEccleston commented on GitHub (Jul 19, 2024):

I have also experienced the same issue, but for me it only occurs with codegeex4 and glm4. All other models are fine. I removed and reinstalled the models with no luck and also updated ollama with no difference. Generally a few messages are fine and then it turns to gibberish seemingly at random.


@rick-github commented on GitHub (Jul 19, 2024):

Server logs might enable diagnosis of the problem.
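For anyone wanting to attach logs: a minimal sketch for grabbing the recent server log tail, assuming a systemd-based Linux install (the `~/.ollama/logs/server.log` fallback is the macOS location and may not exist on Linux):

```shell
# Sketch: collect recent Ollama server logs for diagnosis.
# Assumes a systemd install on Linux (journalctl -u ollama); falls back
# to the macOS log file location if journalctl is unavailable.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u ollama --no-pager -n 200 || echo "no ollama unit found"
else
    tail -n 200 ~/.ollama/logs/server.log 2>/dev/null || echo "no log file found"
fi
```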


@hieuminh65 commented on GitHub (Jul 19, 2024):

I have this too. I don't know if this is a bug or what. I have this with qwen2 as well.
I chat "hello" but it outputs a lot of nonsense.

![Screenshot 2024-07-19 at 11 35 19 AM](https://github.com/user-attachments/assets/126813d4-a8a4-4277-885a-b5ce009d067e)

@alzubitariq commented on GitHub (Jul 20, 2024):

I have also experienced the same issue with deepseek-coder-v2.


@nicolasraj commented on GitHub (Jul 20, 2024):

I am experiencing the same issue with Llama3:8b


@strangeryf commented on GitHub (Jul 20, 2024):

I have experienced a similar issue with glm4 and ollama 0.2.7.
![image](https://github.com/user-attachments/assets/f0c4d646-2c9d-4d70-be75-84328f908e0d)


@RobinEccleston commented on GitHub (Jul 20, 2024):

I managed to trigger this error reliably on my system. If I first use codegeex4, it's fine; then I use phi3 for a single message, and then change back to codegeex4 for one more message, which triggers the problem. I have 4GB of VRAM, so only enough for one model at a time. Phi3 itself has no issues.
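The swap-and-swap-back sequence above can be scripted; a sketch, assuming `ollama` is on the PATH and both models are already pulled (the prompt is a placeholder):

```shell
# Sketch: script the model-swap sequence described above.
# Assumes ollama is installed and codegeex4/phi3 are already pulled;
# the prompt text is illustrative.
prompt="Write a hello world function in Python."
if command -v ollama >/dev/null 2>&1; then
    ollama run codegeex4 "$prompt"   # first run: output is fine
    ollama run phi3 "$prompt"        # evicts codegeex4 from the 4GB of VRAM
    ollama run codegeex4 "$prompt"   # reload: output reportedly turns to gibberish
else
    echo "ollama not installed; skipping reproduction" >&2
fi
```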


@pdevine commented on GitHub (Sep 17, 2024):

Dupe of #5339


Reference: github-starred/ollama#50116