[GH-ISSUE #1881] Only generate lots of hashes #1076

Closed
opened 2026-04-12 10:49:47 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @ZhihaoZhang97 on GitHub (Jan 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1881

Screenshot from 2024-01-10 11-52-07

Not sure if I'm the first to encounter this issue: when I installed Ollama and ran llama2 from the Quickstart, it only output a long run of '####'.

I suspect this might be caused by the hardware or software setup on my newly updated system, since it works on my old rig with an i9-9900K and dual RTX 3090.

As shown in the screenshot below, I am currently using Pop!_OS with an AMD Threadripper 3960X and dual RTX 3090.
Screenshot from 2024-01-10 11-52-54

Any help would be greatly appreciated, thank you!


@lasseedfast commented on GitHub (Jan 10, 2024):

There was previously a workaround for this, but the problem seems to be back. Some more info here: https://github.com/jmorganca/ollama/pull/1261#issuecomment-1881823438


@maciejmajek commented on GitHub (Jan 10, 2024):

Same here. Tested on: v0.1.19, v0.1.17 and docker

2x4090, i9-13900k, ubuntu 20.04
Driver Version: 545.23.08
CUDA Version: 12.1

I was able to run the models with the latest version just fine for some time, but at some point every output became a stream of hashes.

Edit:
mixtral outputs hashes only
phi outputs empty lines
mistral works fine


@elven2016 commented on GitHub (Jan 12, 2024):

Same error here too. Have you found a solution?


@lasseedfast commented on GitHub (Jan 16, 2024):

My solution has been to downgrade to v0.1.17:

curl https://ollama.ai/install.sh | sed 's#https://ollama.ai/download#https://github.com/jmorganca/ollama/releases/download/v0.1.17#' | sh
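For anyone unsure what that pipeline does: the sed step just rewrites the download base URL inside install.sh so the script fetches the v0.1.17 release assets from GitHub instead of the latest build. A minimal sketch of the substitution on its own (the binary filename here is only an illustrative example):

```shell
# Show what the sed rewrite does to a download URL from install.sh
echo "https://ollama.ai/download/ollama-linux-amd64" \
  | sed 's#https://ollama.ai/download#https://github.com/jmorganca/ollama/releases/download/v0.1.17#'
# → https://github.com/jmorganca/ollama/releases/download/v0.1.17/ollama-linux-amd64
```

Note the use of '#' as the sed delimiter, which avoids having to escape every '/' in the URLs.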


@ZhihaoZhang97 commented on GitHub (Jan 22, 2024):

It seems downgrading the NVIDIA driver back to 535.x.x can also resolve the problem with the latest Ollama.


@lasseedfast commented on GitHub (Jan 23, 2024):

Thanks. If you know of any up-to-date instructions on how to downgrade, please share; I haven't found any easy enough for me to follow.

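In case it helps, a minimal sketch of switching to the 535 driver branch via apt on Ubuntu/Pop!_OS (exact package names vary by release and are an assumption here; check what apt actually offers on your system first, and expect a reboot):

```shell
# List the NVIDIA driver branches available from your configured repos
apt list 2>/dev/null | grep -E '^nvidia-driver-[0-9]+/'

# Remove the current driver packages and install the 535 branch
sudo apt purge 'nvidia-*'
sudo apt install nvidia-driver-535

# A reboot is required for the new kernel module to load
sudo reboot
```

Pop!_OS users may prefer the distro's own `system76-driver-nvidia` tooling instead of raw apt; the sketch above is the generic Ubuntu path.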

@maciejmajek commented on GitHub (Jan 24, 2024):

Still happening on v0.1.20 + NVIDIA 545.
Tested both locally and inside Docker, with and without GPUs.
[screenshot]
Models run with the CPU-only Docker image work fine.


@pdevine commented on GitHub (Jan 27, 2024):

Sorry guys, can you try again with 0.1.22, and make sure you re-pull the model you're trying to use.


@ZhihaoZhang97 commented on GitHub (Jan 27, 2024):

Thanks @pdevine, I can confirm that version 0.1.22 fixes the bug with the latest NVIDIA 545 driver! Nice work!
Screenshot from 2024-01-27 13-45-13

Reference: github-starred/ollama#1076