[GH-ISSUE #1253] Error when downloading and running any dataset of any size. #26401

Closed
opened 2026-04-22 02:40:19 -05:00 by GiteaMirror · 4 comments

Originally created by @ll3N1GmAll on GitHub (Nov 23, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1253

This is the error I get after downloading a dataset and then trying to run it: "Error: llama runner process has terminated"
It pulls the dataset down, verifies the hash, then says "success"; the very next line is the error above.

I am running Xubuntu 22.04, 16GB RAM, Intel Pentium CPU G4560 @ 3.50GHz, 8x Nvidia 1080Ti GPUs. I get this with even small sets like the 1.8GB starcoder set.

After a reboot, trying to run a dataset with "ollama run <dataset-name>" results in several seconds of attempting to start. The "ollama serve" process is visible in the task manager, then the error "Error: llama runner process has terminated" is displayed in the terminal. The "ollama serve" process remains running/hung in the task manager, consuming roughly 400 MB of RAM, which is the amount it was consuming while the terminal process was trying to run the dataset. Manually killing the process and trying to run it again produces exactly the same behavior as after a reboot, except that it fails with the error within a second or so at most instead of taking several seconds as it did after the reboot. It still consumes ~400 MB of RAM.
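For reproducing this cleanly, the service can be restarted and its log followed live instead of killing the process by hand (a minimal sketch, assuming the standard systemd unit created by the Linux installer):

    sudo systemctl restart ollama    # restart the hung service
    journalctl -u ollama -f          # follow the log while re-running the model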

This looks similar to issue #788, but the newer version is supposed to prevent the AVX CPU requirement from causing this issue. However, I still have this issue.

Results of "journalctl -u ollama":

Nov 22 23:07:12 <machine-name> systemd[1]: Started Ollama Service.
Nov 22 23:07:14 <machine-name> ollama[1572]: 2023/11/22 23:07:14 images.go:779: total blobs: 0
Nov 22 23:07:14 <machine-name> ollama[1572]: 2023/11/22 23:07:14 images.go:786: total unused blobs removed: 0
Nov 22 23:07:14 <machine-name> ollama[1572]: 2023/11/22 23:07:14 routes.go:777: Listening on 127.0.0.1:11434 (version 0.1.11)
Nov 22 23:15:09 <machine-name> systemd[1]: Stopping Ollama Service...
Nov 22 23:15:09 <machine-name> systemd[1]: ollama.service: Deactivated successfully.
Nov 22 23:15:09 <machine-name> systemd[1]: Stopped Ollama Service.
Nov 22 23:15:09 <machine-name> systemd[1]: Started Ollama Service.
Nov 22 23:15:09 <machine-name> ollama[30889]: 2023/11/22 23:15:09 images.go:779: total blobs: 0
Nov 22 23:15:09 <machine-name> ollama[30889]: 2023/11/22 23:15:09 images.go:786: total unused blobs removed: 0
Nov 22 23:15:09 <machine-name> ollama[30889]: 2023/11/22 23:15:09 routes.go:777: Listening on 127.0.0.1:11434 (version 0.1.11)
Nov 22 23:16:17 <machine-name> ollama[30889]: [GIN] 2023/11/22 - 23:16:17 | 200 | 93.552µs | 127.0.0.1 | HEAD "/"
Nov 22 23:16:17 <machine-name> ollama[30889]: [GIN] 2023/11/22 - 23:16:17 | 404 | 173.127µs | 127.0.0.1 | POST "/api/show"
Nov 22 23:16:19 <machine-name> ollama[30889]: 2023/11/22 23:16:19 download.go:123: downloading 6ae280299950 in 42 100 MB part(s)
Nov 22 23:17:18 <machine-name> ollama[30889]: 2023/11/22 23:17:18 download.go:123: downloading 22e1b2e8dc2f in 1 43 B part(s)
Nov 22 23:17:21 <machine-name> ollama[30889]: 2023/11/22 23:17:21 download.go:123: downloading e35ab70a78c7 in 1 90 B part(s)
Nov 22 23:17:24 <machine-name> ollama[30889]: 2023/11/22 23:17:24 download.go:123: downloading 1cb90d66f4d4 in 1 381 B part(s)
Nov 22 23:17:47 <machine-name> ollama[30889]: [GIN] 2023/11/22 - 23:17:47 | 200 | 1m30s | 127.0.0.1 | POST "/api/pull"
Nov 22 23:17:47 <machine-name> ollama[30889]: 2023/11/22 23:17:47 llama.go:291: 89320 MB VRAM available, loading up to 546 GPU layers
Nov 22 23:17:47 <machine-name> ollama[30889]: 2023/11/22 23:17:47 llama.go:420: starting llama runner
Nov 22 23:17:47 <machine-name> ollama[30889]: 2023/11/22 23:17:47 llama.go:478: waiting for llama runner to start responding
Nov 22 23:17:48 <machine-name> ollama[30889]: 2023/11/22 23:17:48 llama.go:435: signal: illegal instruction (core dumped)
Nov 22 23:17:48 <machine-name> ollama[30889]: 2023/11/22 23:17:48 llama.go:443: error starting llama runner: llama runner process has terminated
Nov 22 23:17:48 <machine-name> ollama[30889]: 2023/11/22 23:17:48 llama.go:509: llama runner stopped successfully
Nov 22 23:17:48 <machine-name> ollama[30889]: 2023/11/22 23:17:48 llama.go:420: starting llama runner
Nov 22 23:17:48 <machine-name> ollama[30889]: 2023/11/22 23:17:48 llama.go:478: waiting for llama runner to start responding


@BruceMacD commented on GitHub (Nov 23, 2023):

Thanks for opening the issue. The problem can be seen in this line:
signal: illegal instruction (core dumped)

I believe the problem is that Ollama was built with an instruction set not supported by your CPU.
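One quick way to confirm this on Linux is to grep the CPU flags in /proc/cpuinfo; on a processor without AVX (such as the Pentium G4560 here), this prints nothing:

    grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u    # lists AVX-family flags; empty output means no AVX support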


@easp commented on GitHub (Nov 23, 2023):

@ll3N1GmAll I think you are misreading this: https://github.com/jmorganca/ollama/issues/788#issuecomment-1787668025

> Hi folks, as of 0.1.6+ this should be fixed. Note: you'll need a CPU with AVX (https://en.wikipedia.org/wiki/Advanced_Vector_Extensions), but as of 0.1.6 CPU instruction set requirements have been relaxed significantly!

You need a CPU with AVX, but other instruction set requirements (AVX2, maybe others) were lifted. Your CPU doesn't have AVX. You could compile it yourself and change the compile flags in llm/llama.cpp/generate_linux.go (https://github.com/jmorganca/ollama/blob/main/llm/llama.cpp/generate_linux.go), or maybe there is a better way to do it.
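A rough sketch of that build-from-source route (the exact defines in generate_linux.go differ between versions, so treat the flag names below as assumptions based on llama.cpp's CMake options rather than the file's literal contents; requires Go, cmake, and a C/C++ toolchain):

    # build Ollama from source with AVX disabled (hypothetical flag edit)
    git clone https://github.com/jmorganca/ollama.git
    cd ollama
    # edit llm/llama.cpp/generate_linux.go and flip the CPU build defines,
    # e.g. -DLLAMA_AVX=on -> -DLLAMA_AVX=off
    go generate ./...
    go build .

Note that running without AVX makes CPU inference considerably slower, so upgrading the CPU (as the reporter ultimately chose to do) may be the more practical fix.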


@ll3N1GmAll commented on GitHub (Nov 23, 2023):

Thanks for clarifying. Will get a better CPU.


@jackiezhangcn commented on GitHub (Nov 24, 2023):

My version is 0.1.11, but I still get the same issue.
