[GH-ISSUE #6249] ollama run llama3.1 command outputs nonsense #29669

Closed
opened 2026-04-22 08:45:20 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @erfan-khalaji on GitHub (Aug 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6249

What is the issue?

After installing Ollama on macOS, I attempted to run the model using the `ollama run llama3.1` command. However, when I tried running the model by inputting "hello," it returned what appeared to be random ASCII characters, which didn't make sense. I then used `ollama pull llama2` and `ollama pull llama3` to see if that would resolve the issue. While `ollama run llama3.1` still resulted in nonsensical output, `ollama run llama3` and `ollama run llama2` worked perfectly. I thought I would share my experience in case it helps someone facing a similar issue.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

3.1
GiteaMirror added the bug label 2026-04-22 08:45:20 -05:00
@rick-github commented on GitHub (Aug 8, 2024):

It seems that the Q4_0 quant of llama3.1 doesn't work for a subset of Mac users. What type of Mac are you running? Can you provide [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?
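For readers asked to provide server logs as above: a minimal sketch of how to grab them on macOS, assuming the log path documented in the ollama troubleshooting guide (`~/.ollama/logs/server.log`); the guard handles machines where no log exists yet.

```shell
# Show the tail of the ollama server log on macOS.
# Path per the ollama troubleshooting doc; adjust if your install differs.
LOG="$HOME/.ollama/logs/server.log"
if [ -f "$LOG" ]; then
    tail -100 "$LOG"            # last 100 lines are usually enough for a bug report
else
    echo "no log file at $LOG"  # ollama has not written a server log yet
fi
```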

@rick-github commented on GitHub (Aug 8, 2024):

If you are running a Mac M1 with 8GB of RAM, the PR https://github.com/ollama/ollama/pull/6260 may fix the issue in the next release.

@jmorganca commented on GitHub (Aug 8, 2024):

Hi @erfan-khalaji , this should be fixed now. You'll need to re-pull `llama3.1`: `ollama pull llama3.1` (sorry!). Let me know if you're still seeing it afterwards.
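The fix above amounts to re-downloading the repaired model blobs and retesting. A hedged sketch of that sequence, guarded so it degrades gracefully when the `ollama` CLI is not installed:

```shell
# Re-pull the corrected llama3.1 weights, then retest with a prompt.
# Assumes the ollama CLI is on PATH; the guard avoids a hard failure otherwise.
if command -v ollama >/dev/null 2>&1; then
    ollama pull llama3.1            # fetches the re-uploaded model layers
    ollama run llama3.1 "hello"     # should now produce coherent text, not garbage
else
    echo "ollama CLI not found; install it first"
fi
```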

@erfan-khalaji commented on GitHub (Aug 9, 2024):

@rick-github @jmorganca
After re-pulling Llama 3.1, the problem was resolved. Thank you, folks!

Reference: github-starred/ollama#29669