[GH-ISSUE #9190] When I run any model using 'ollama run', there is a chance it will output a series of '@' characters. #52500

Closed
opened 2026-04-28 23:29:47 -05:00 by GiteaMirror · 5 comments

Originally created by @HanShengGoodWay on GitHub (Feb 18, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9190

What is the issue?

I'm not sure what happened.
When I use ollama run with any model, it sometimes outputs "@" after a while, regardless of the input.
To be precise, it prints "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@", which is exactly 31 "@" characters.

I'm using
Windows 11
RTX 4090
CUDA version: 12.2

How should I start debugging this issue?

Relevant log output

ollama run deepseek-r1:14B
>>> hi
<think>

</think>

Hello! How can I assist you today? 😊

>>> hi
<think>
Alright, the user just sent "hi" again. They might be trying to get my attention or test if I'm responsive.

I should respond in a friendly and welcoming manner to make them feel comfortable.

Maybe I'll greet them back and offer help with a smiley face to keep it warm.

Keeping it open-ended so they know I'm here to assist with whatever they need.
</think>

Hi! 😊 How can I assist you today?

>>> hi
<think>
Alright, the user is saying "hi@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

>>> hi
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

>>> hi
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.11

GiteaMirror added the bug label 2026-04-28 23:29:47 -05:00

@rick-github commented on GitHub (Feb 18, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.
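For reference, a minimal sketch of how to capture those logs on Windows, assuming a default install; the `server.log` path and the `OLLAMA_DEBUG` variable are taken from ollama's troubleshooting guide:

```powershell
# Quit the tray app first, then start the server with verbose logging enabled
$env:OLLAMA_DEBUG = "1"
ollama serve

# In a second terminal, tail the persistent server log (default Windows location)
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100
```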


@HanShengGoodWay commented on GitHub (Feb 19, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

I was watching the console output of `ollama serve` (should that be the same as the log files?).

I have seen the error below before (I asked GPT to reformat it for readability), but it does not always show up when a run of "@" symbols appears.

[2025-02-19T15:37:55.782+08:00] [WARN] [gpu.go:434]
   unable to get device handle GPU-e6f1cdee-e5db-49ee-1fcf-55aa681b3665: 999
   msg: "error looking up nvidia GPU memory"

[2025-02-19T15:38:00.782+08:00] [WARN] [sched.go:646]
   msg: "gpu VRAM usage didn't recover within timeout"
   seconds: 5.0016686
   model: C:\Users\HS\.ollama\models\blobs\sha256-e2f46f5b501c2982b2c495a4694cb4e620aabfa2c37ebb23a90ffc8cce93854b


Typically, I only see the usual model loading information and the requests I send. However, at some point during one of my requests, a series of "@" suddenly starts appearing, with no other error messages.

From my observations, this issue seems more likely to occur when switching between two models. However, I cannot guarantee that it only happens when unloading Model A and loading Model B.

I really appreciate your help and suggestions!
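A rough way to try to reproduce the model-switching case is to alternate two models non-interactively and watch for the "@" runs; the second model name below is only an example, substitute whichever pair you normally switch between:

```powershell
# Alternate two models; each switch unloads one model and loads the other,
# which is when the garbage output reportedly tends to appear
foreach ($i in 1..5) {
    ollama run deepseek-r1:14b "hi"
    ollama run llama3.1:8b "hi"
}
```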


@rick-github commented on GitHub (Feb 19, 2025):

#8235 had a similar problem with repeating '@'s; running a [GPU VRAM tester](https://www.programming4beginners.com/gpumemtest) revealed a faulty card.
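Before (or alongside) a dedicated VRAM tester, the driver's own counters are a quick, non-destructive check; `nvidia-smi` ships with the Nvidia driver, though the ECC section is only populated on cards that expose ECC:

```powershell
# Driver-reported memory usage and ECC error counters (if the GPU exposes them)
nvidia-smi -q -d MEMORY,ECC
```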


@rick-github commented on GitHub (Feb 27, 2025):

Another user with this problem (#8495) found that upgrading the Nvidia driver to [572.60](https://www.nvidia.com/en-us/geforce/drivers/results/241091/) resolved their problem.
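To check which driver is currently installed before deciding whether to upgrade (these are standard `nvidia-smi` query options):

```powershell
# Print just the installed driver version, e.g. 572.60
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```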


@HanShengGoodWay commented on GitHub (Apr 17, 2025):

> Another user with this problem ([#8495](https://github.com/ollama/ollama/issues/8495)) found that upgrading the Nvidia driver to [572.60](https://www.nvidia.com/en-us/geforce/drivers/results/241091/) resolved their problem.

I later found that the problem was caused by my ollama install being out of date. When these abnormal outputs appeared, updating ollama (and maybe restarting the computer?) resolved the problem.
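For anyone hitting the same symptom, checking and updating the installed version is quick; the winget package id below is an assumption, and re-running the installer from ollama.com works just as well:

```powershell
# Show the currently installed ollama version
ollama -v

# Upgrade via winget (package id assumed: Ollama.Ollama); or re-run OllamaSetup.exe
winget upgrade Ollama.Ollama
```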

Reference: github-starred/ollama#52500