[GH-ISSUE #788] i got this issue from orca-mini 7b #26135

Closed
opened 2026-04-22 02:10:17 -05:00 by GiteaMirror · 36 comments

Originally created by @Boluex on GitHub (Oct 14, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/788

I am using an 8 GB RAM, CPU-only system (no VRAM). I downloaded the orca-mini 7B model on Ollama but got this error: Error: llama runner process has terminated. How can I fix this? Please help, guys.

GiteaMirror added the bug label 2026-04-22 02:10:17 -05:00

@jmorganca commented on GitHub (Oct 14, 2023):

Hi @Boluex, sorry you hit an error. Would it be possible to check the logs to help us track it down?

  • On Linux: journalctl -u ollama
  • On macOS: cat ~/.ollama/logs/server.log

Also, make sure to try the latest version, 0.1.3, as we've added some improvements to memory detection & allocation.
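
If the service logs are long, standard journalctl/grep usage (nothing Ollama-specific) narrows them down to the relevant lines, for example:

  # Linux: last 100 lines of the Ollama service log
  journalctl -u ollama --no-pager -n 100

  # macOS: just the runner messages
  grep 'llama runner' ~/.ollama/logs/server.log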

@Boluex commented on GitHub (Oct 14, 2023):

ollama run orca-mini
⠸ Error: llama runner process has terminated

This is what I am getting.

And this is what I get when I type journalctl -u ollama (I am using a Linux system):

Invalid unit name "ollama~" escaped as "ollama\x7e" (maybe you should use systemd-escape?).
-- No entries --
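
The Invalid unit name message above suggests a stray trailing character (the escaped ~) ended up in the unit name when the command was typed; assuming the default service name created by the Linux installer, the plain command is:

  journalctl -u ollama --no-pager

If that still prints -- No entries --, the server is probably not running under systemd, and its output will instead appear in the terminal where ollama serve was started.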

@zhougsoft commented on GitHub (Oct 15, 2023):

Hello! Old Thinkpad/Ubuntu guy here again. I'm getting this same error, so I might as well throw my logs in the ring too; hopefully it helps:

getting Error: llama runner process has terminated like immediately after running ollama run open-orca

the logs say llama runner stopped with error: signal: illegal instruction (core dumped)

  • running ollama 0.1.3
  • same error when trying ollama run open-orca:3b

ollama log:

Oct 15 13:23:34 mferbox ollama[1380]: [GIN] 2023/10/15 - 13:23:34 | 200 |      34.367µs |       127.0.0.1 | HEAD     "/"
Oct 15 13:23:34 mferbox ollama[1380]: [GIN] 2023/10/15 - 13:23:34 | 200 |     344.681µs |       127.0.0.1 | GET      "/api/tags"
Oct 15 13:23:34 mferbox ollama[1380]: 2023/10/15 13:23:34 llama.go:333: skipping accelerated runner because num_gpu=0
Oct 15 13:23:34 mferbox ollama[1380]: 2023/10/15 13:23:34 llama.go:356: starting llama runner
Oct 15 13:23:34 mferbox ollama[1380]: 2023/10/15 13:23:34 llama.go:408: waiting for llama runner to start responding
Oct 15 13:23:34 mferbox ollama[1380]: 2023/10/15 13:23:34 llama.go:373: error starting llama runner: llama runner process has terminated
Oct 15 13:23:34 mferbox ollama[1380]: 2023/10/15 13:23:34 llama.go:438: llama runner stopped with error: signal: illegal instruction (core dumped)
Oct 15 13:23:34 mferbox ollama[1380]: [GIN] 2023/10/15 - 13:23:34 | 500 |  128.925806ms |       127.0.0.1 | POST     "/api/generate"

system info:

[screenshot: system info]

@alix2013 commented on GitHub (Oct 17, 2023):

Hi, I got the same error on Ubuntu 22.04, x86 CPU host
llama.go:373: error starting llama runner: llama runner process has terminated
llama.go:438: llama runner stopped with error: signal: illegal instruction (core dumped)

ollama run mistral
Error: llama runner process has terminated

I pulled the latest code onto the Ubuntu x86 host, built it there, and got the same errors.
I also tried installing Ollama on Google Colab with a T4 GPU runtime; it seems the Colab CUDA version is lower than required. What is the required CUDA driver version?
NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0

@dominiksr commented on GitHub (Oct 21, 2023):

I have a similar problem:
Error: llama runner process has terminated

$ journalctl -u ollama
(...)
ollama[5093]: 2023/10/21 11:23:49 llama.go:333: skipping accelerated runner because num_gpu=0
ollama[5093]: 2023/10/21 11:23:49 llama.go:356: starting llama runner
ollama[5093]: 2023/10/21 11:23:49 llama.go:408: waiting for llama runner to start responding
ollama[5093]: 2023/10/21 11:23:49 llama.go:373: error starting llama runner: llama runner process has terminated
ollama[5093]: 2023/10/21 11:23:49 llama.go:438: llama runner stopped with error: signal: illegal instruction
ollama[5093]: [GIN] 2023/10/21 - 11:23:49 | 500 | 545.543144ms | 127.0.0.1 | POST "/api/generate"
ollama[5093]: [GIN] 2023/10/21 - 11:24:41 | 200 | 36.703µs | 127.0.0.1 | HEAD "/"
ollama[5093]: [GIN] 2023/10/21 - 11:24:41 | 200 | 431.918µs | 127.0.0.1 | GET "/api/tags"

Debian 12 (I tried Fedora 38 and had the same problem), running as a VM on Proxmox.
On a laptop with an NVIDIA 4060 everything works great.

@push-panjali23 commented on GitHub (Oct 22, 2023):

Same error here, any luck?

pulling manifest
pulling e84705205f71... 100% |█████████████| (1.9/1.9 GB, 3.1 MB/s)
pulling e7214e2f1a0f... 100% |██████████████████████| (66/66 B, 16 B/s)
pulling 93ca9b3d83dc... 100% |██████████████████████| (89/89 B, 30 B/s)
pulling 65009e4e7fee... 100% |████████████████████| (359/359 B, 92 B/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
⠧ Error: llama runner process has terminated
(bot-py3.10) blacks@blacks-Inspiron-3647:~/Desktop/Ollama$ ollama run orca-mini
⠋ Error: llama runner process has terminated

@SonicWarrior1 commented on GitHub (Oct 22, 2023):

I have a similar problem:
Command: ollama run mistral
Error: llama runner process has terminated

$ journalctl -u ollama
Oct 22 18:10:25 UBUNTU ollama[816]: 2023/10/22 18:10:25 llama.go:333: skipping accelerated runner because num_gpu=0
Oct 22 18:10:25 UBUNTU ollama[816]: 2023/10/22 18:10:25 llama.go:356: starting llama runner
Oct 22 18:10:25 UBUNTU ollama[816]: 2023/10/22 18:10:25 llama.go:408: waiting for llama runner to start responding
Oct 22 18:10:25 UBUNTU ollama[816]: 2023/10/22 18:10:25 llama.go:373: error starting llama runner: llama runner process has terminated
Oct 22 18:10:25 UBUNTU ollama[816]: 2023/10/22 18:10:25 llama.go:438: llama runner stopped with error: signal: illegal instruction (core dumped)
Oct 22 18:10:25 UBUNTU ollama[816]: [GIN] 2023/10/22 - 18:10:25 | 500 | 353.853052ms | 127.0.0.1 | POST "/api/generate"
Oct 22 18:11:48 UBUNTU ollama[816]: [GIN] 2023/10/22 - 18:11:48 | 200 | 22.020267ms | 127.0.0.1 | HEAD "/"
Oct 22 18:11:48 UBUNTU ollama[816]: [GIN] 2023/10/22 - 18:11:48 | 200 | 224.278µs | 127.0.0.1 | GET "/api/tags"

System Info:
[screenshot: system info]

@kandotrun commented on GitHub (Oct 23, 2023):

@jmorganca
I'm experiencing the same error in this environment, but there are no errors in journalctl. Is there anything I can verify?

[screenshots attached to the original comment]

@dominiksr commented on GitHub (Oct 25, 2023):

Works again on v0.1.5 for me. Nice

@Boluex commented on GitHub (Oct 25, 2023):

Still got the same error

@mxyng commented on GitHub (Oct 25, 2023):

This looks related to #644.

@dominiksr commented on GitHub (Oct 25, 2023):

Still got the same error

Did you remove the model and ollama before and install everything from scratch?

@MadathilSA commented on GitHub (Oct 26, 2023):

Works again on v0.1.5 for me. Nice

Hi dominiksr,

I have a similar setup: Proxmox and Ubuntu 22.04.
Did you reinstall Ollama to v0.1.5? How?

@dominiksr commented on GitHub (Oct 26, 2023):

Works again on v0.1.5 for me. Nice

Hi dominiksr,

I have simillar setup: proxmox and ubuntu 22.04. Did you reinstall ollama to v0.1.5? how?

I removed the file at /usr/bin/ollama and installed it one more time.

Manual installation:
https://github.com/jmorganca/ollama/blob/main/docs/linux.md
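
For anyone following along, a rough Linux reinstall sequence (assuming the systemd service created by the install script; the exact paths and script URL are in the linked docs) looks something like:

  sudo systemctl stop ollama
  sudo rm /usr/bin/ollama
  curl -fsSL https://ollama.com/install.sh | sh
  ollama --version

The last command should report the new version (v0.1.5 or later at the time of this thread).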

@Boluex commented on GitHub (Oct 26, 2023):

[screenshot: system specs]
These are my system specs, but Ollama is not working. I think I will uninstall it and install it again.

@jmorganca commented on GitHub (Oct 31, 2023):

Hi folks, as of 0.1.6+ this should be fixed. Note: you'll need a CPU with AVX, but as of 0.1.6 CPU instruction set requirements have been relaxed significantly!
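
A quick way to check whether a CPU has AVX (standard OS commands, not part of Ollama):

  # Linux
  grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u

  # macOS (Intel)
  sysctl -a | grep machdep.cpu.features

If the Linux command prints nothing, or the macOS output has no AVX entry, the CPU does not support AVX and the runner will crash with the illegal instruction error shown in the logs above.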

@jmorganca commented on GitHub (Oct 31, 2023):

Please feel free to re-open if this is still an issue

@ttio2tech commented on GitHub (Nov 1, 2023):

Trying to run it on an old CPU (without AVX). Built from source (0.1.7) but got the same issue. Is it possible to disable AVX?

@ttio2tech commented on GitHub (Nov 1, 2023):

Hi folks, as of 0.1.6+ this should be fixed. Note: you'll need a CPU with AVX, but as of 0.1.6 CPU instruction set requirements have been relaxed significantly!

Is it possible to add support for CPUs without AVX? Maybe detect whether the CPU has AVX and, if not, add "-DLLAMA_F16C=OFF -DLLAMA_FMA=OFF" (saw it in one llama.cpp discussion) to the CMake flags?
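
For a standalone llama.cpp build, the CPU feature switches it exposed around this time were CMake options along these lines (option names may have changed in later versions, and this does not cover Ollama's own build scripts):

  cmake -B build -DLLAMA_AVX=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF
  cmake --build build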

@defconhaya commented on GitHub (Nov 3, 2023):

Works again on v0.1.5 for me. Nice

Hi dominiksr,

I have simillar setup: proxmox and ubuntu 22.04. Did you reinstall ollama to v0.1.5? how?

Make sure your VM CPU supports AVX. You can set the CPU type to host.
[screenshot: Proxmox CPU settings]
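
On Proxmox the same change can also be made from the host shell (qm is the standard Proxmox VM tool; replace <vmid> with your VM's ID):

  qm set <vmid> --cpu host

The default kvm64 CPU type does not expose AVX to the guest, which matches the illegal instruction crashes reported earlier in this thread.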

@dominiksr commented on GitHub (Nov 4, 2023):

Works again on v0.1.5 for me. Nice

Hi dominiksr,
I have simillar setup: proxmox and ubuntu 22.04. Did you reinstall ollama to v0.1.5? how?

Make sure your VM CPU supports AVX. You can set the CPU type to host.

In my case it just works. I think it's a good idea. Using Proxmox, I had a lot of problems whenever the hardware was a little unusual.

@TunaFFish commented on GitHub (Nov 7, 2023):

I have the same problem.

MacBook Air (13-inch, 2017)
macOS Big Sur version 11.7.6 (20G1231)
Processor: 1,8 GHz Dual-Core Intel Core i5
Memory: 8 GB 1600 MHz DDR3
Graphics: Intel HD Graphics 6000 1536 MB

Running the command sysctl -a | grep machdep.cpu.features in the Terminal shows me that I have AVX1.0.

Bugger, I thought at least I could play around with orca-mini:3b

% ollama run orca-mini:3b
pulling manifest
pulling 66002b78c70a... 100% |██████████████████████████████████████████| (2.0/2.0 GB, 5.0 TB/s)
pulling dd90d0f2b7ee... 100% |█████████████████████████████████████████████| (95/95 B, 1.4 MB/s)
pulling 93ca9b3d83dc... 100% |█████████████████████████████████████████████| (89/89 B, 1.4 MB/s)
pulling 33eb43a1488d... 100% |█████████████████████████████████████████████| (52/52 B, 468 kB/s)
pulling fd52b10ee3ee... 100% |███████████████████████████████████████████| (455/455 B, 2.3 MB/s)
verifying sha256 digest
writing manifest
removing any unused layers
success
⠇ Error: llama runner process has terminated

@tjlcast commented on GitHub (Nov 8, 2023):

I have the same problem.

I reinstalled Ollama (from 0.1.3 to 0.1.8). But when I run ollama run llama2, it shows: Error: llama runner process has terminated

Memory: 8 GB 1600 MHz DDR3
Graphics: Intel HD Graphics 6000 1536 MB

And ./.ollama/logs/server.log looks like this:

[GIN] 2023/11/08 - 18:37:15 | 200 |      27.403µs |       127.0.0.1 | HEAD     "/"
[GIN] 2023/11/08 - 18:37:15 | 200 |    3.545476ms |       127.0.0.1 | POST     "/api/show"
2023/11/08 18:37:15 llama.go:384: starting llama runner
2023/11/08 18:37:15 llama.go:386: error starting the external llama runner: fork/exec /var/folders/1w/bfjzbwc53hbgzsk1spq8f_5w0000gn/T/ollama1055606081/llama.cpp/ggml/build/metal/bin/ollama-runner: bad CPU type in executable
2023/11/08 18:37:15 llama.go:384: starting llama runner
2023/11/08 18:37:15 llama.go:442: waiting for llama runner to start responding
{"timestamp":1699439835,"level":"WARNING","function":"server_params_parse","line":847,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":0}
{"timestamp":1699439835,"level":"INFO","function":"main","line":1191,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1699439835,"level":"INFO","function":"main","line":1196,"message":"system info","n_threads":2,"total_threads":4,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | "}
llama.cpp: loading model from /Users/jialtang/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_head_kv  = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: mem required  = 3615.73 MB (+ 1024.00 MB per state)
llama_new_context_with_model: kv self size  = 1024.00 MB
llama_new_context_with_model: compute buffer total size =  153.35 MB
2023/11/08 18:37:15 llama.go:399: signal: segmentation fault
2023/11/08 18:37:15 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/08 18:37:15 llama.go:473: llama runner stopped successfully

Before reinstalling, I could run ollama run llama2 with Ollama 0.1.3.

So how can I fix it?
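
As background, bad CPU type in executable is macOS's message for a binary built for a different CPU architecture than the machine it runs on. Two standard commands to compare (the binary path below is an assumption; adjust it to wherever your ollama binary actually lives):

  uname -m
  file /usr/local/bin/ollama

If the two architectures disagree, installing or building the variant that matches the Mac is the usual fix.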

@defconhaya commented on GitHub (Nov 8, 2023):

bad CPU type in executable 2023/11/08 18:37:15 llama.go:384: starting llama runner
Do an lscpu and make sure the CPU is AVX-capable.

@tjlcast commented on GitHub (Nov 8, 2023):

Running the command sysctl -a | grep machdep.cpu.features gives the output below:

machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C

There is AVX1.0

@TunaFFish commented on GitHub (Nov 8, 2023):

The lscpu command mentioned by @defconhaya would, on macOS, be sysctl; use sysctl -a to dump everything.
Here is a list of the properties with AVX in the name;
for example, from sysctl -n hw.optional.avx1_0 I get: 1

hw.optional.avx1_0: 1
hw.optional.avx2_0: 1
hw.optional.avx512bw: 0
hw.optional.avx512cd: 0
hw.optional.avx512dq: 0
hw.optional.avx512f: 0
hw.optional.avx512ifma: 0
hw.optional.avx512vbmi: 0
hw.optional.avx512vl: 0

@tjlcast commented on GitHub (Nov 8, 2023):

I checked it just now; mine is the same as yours:

% sysctl -n hw.optional.avx1_0
1
% sysctl -a | grep avx
hw.optional.avx1_0: 1
hw.optional.avx2_0: 1
hw.optional.avx512f: 0
hw.optional.avx512cd: 0
hw.optional.avx512dq: 0
hw.optional.avx512bw: 0
hw.optional.avx512vl: 0
hw.optional.avx512ifma: 0
hw.optional.avx512vbmi: 0

@TunaFFish commented on GitHub (Nov 8, 2023):

The strange thing is that if I run cat ~/.ollama/logs/server.log and search for AVX:
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0

2023/11/08 14:09:29 llama.go:384: starting llama runner
2023/11/08 14:09:29 llama.go:386: error starting the external llama runner: fork/exec /var/folders/xn/63p1jx4130l1rb_01t_ynd3r0000gn/T/ollama339294303/llama.cpp/gguf/build/metal/bin/ollama-runner: bad CPU type in executable
2023/11/08 14:09:29 llama.go:384: starting llama runner
2023/11/08 14:09:29 llama.go:442: waiting for llama runner to start responding
{"timestamp":1699448970,"level":"WARNING","function":"server_params_parse","line":873,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
{"timestamp":1699448970,"level":"INFO","function":"main","line":1324,"message":"build info","build":219,"commit":"9e70cc0"}
{"timestamp":1699448970,"level":"INFO","function":"main","line":1330,"message":"system info","n_threads":2,"n_threads_batch":-1,"total_threads":4,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
...
2023/11/08 14:09:37 llama.go:399: signal: segmentation fault
2023/11/08 14:09:37 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/08 14:09:37 llama.go:473: llama runner stopped successfully
[GIN] 2023/11/08 - 14:09:37 | 500 |   7.66148366s |       127.0.0.1 | POST     "/api/generate"

@jpmcarvalho commented on GitHub (Nov 8, 2023):

Strange thing is that if I run cat ~/.ollama/logs/server.log and search for AVX: AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0

2023/11/08 14:09:29 llama.go:384: starting llama runner
2023/11/08 14:09:29 llama.go:386: error starting the external llama runner: fork/exec /var/folders/xn/63p1jx4130l1rb_01t_ynd3r0000gn/T/ollama339294303/llama.cpp/gguf/build/metal/bin/ollama-runner: bad CPU type in executable
2023/11/08 14:09:29 llama.go:384: starting llama runner
2023/11/08 14:09:29 llama.go:442: waiting for llama runner to start responding
{"timestamp":1699448970,"level":"WARNING","function":"server_params_parse","line":873,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
{"timestamp":1699448970,"level":"INFO","function":"main","line":1324,"message":"build info","build":219,"commit":"9e70cc0"}
{"timestamp":1699448970,"level":"INFO","function":"main","line":1330,"message":"system info","n_threads":2,"n_threads_batch":-1,"total_threads":4,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
...
2023/11/08 14:09:37 llama.go:399: signal: segmentation fault
2023/11/08 14:09:37 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/08 14:09:37 llama.go:473: llama runner stopped successfully
[GIN] 2023/11/08 - 14:09:37 | 500 |   7.66148366s |       127.0.0.1 | POST     "/api/generate"

Did you solve the problem? I have the same issue on my macOS.

@antreev-brar commented on GitHub (Nov 9, 2023):

I have a 2019 MacBook Pro with 8 GB RAM. I thought I could play with orca-mini, but it keeps giving a seg fault. My error is not listed above: apparently it doesn't have access to a file, as seen in the second line of the server log, even though ./ollama serve reported success. I have cleaned everything and built again from scratch.

pulling 66002b78c70a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ (2.0/2.0 GB, 42 MB/s)         
pulling dd90d0f2b7ee... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ (95/95 B, 31 B/s)        
pulling 93ca9b3d83dc... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ (89/89 B, 37 B/s)        
pulling 33eb43a1488d... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ (52/52 B, 18 B/s)        
pulling fd52b10ee3ee... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ (455/455 B, 155 B/s)        
verifying sha256 digest
writing manifest
removing any unused layers
success
⠏   Error: llama runner process has terminated

2023/11/09 15:39:17 llama.go:358: llama runner not found: stat /var/folders/7y/4f0kcdjs6ss4t96jdxc3xlmm0000gn/T/ollama3035339153/llama.cpp/gguf/build/metal/bin/ollama-runner: no such file or directory
2023/11/09 15:39:17 llama.go:384: starting llama runner
2023/11/09 15:39:17 llama.go:442: waiting for llama runner to start responding
{"timestamp":1699524557,"level":"INFO","function":"main","line":1324,"message":"build info","build":1412,"commit":"9e70cc0"}
{"timestamp":1699524557,"level":"INFO","function":"main","line":1330,"message":"system info","n_threads":4,"n_threads_batch":-1,"total_threads":8,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
llama_model_loader: loaded meta data with 19 key-value pairs and 237 tensors from /Users/antreevsinghbrar/.ollama/models/blobs/sha256:66002b78c70a22ab25e16cc9a1736c6cc6335398c7312e3eb33db202350afe66 (version GGUF V2 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  3200, 32000,     1,     1 ]
llama_model_loader: - tensor    1:              blk.0.attn_q.weight q4_0     [  3200,  3200,     1,     1 ]
llama_model_loader: - tensor    2:              blk.0.attn_k.weight q4_0     [  3200,  3200,     1,     1 ]
llama_model_loader: - tensor    3:              blk.0.attn_v.weight q4_0     [  3200,  3200,     1,     1 ]
llama_model_loader: - tensor    4:         blk.0.attn_output.weight q4_0     [  3200,  3200,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_gate.weight q4_0     [  3200,  8640,     1,     1 ]
...
llm_load_tensors: mem required  = 1887.57 MB
..............................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =  650.00 MB
ggml_metal_init: allocating
ggml_metal_init: found device: Intel(R) Iris(TM) Plus Graphics 645
ggml_metal_init: picking default device: Intel(R) Iris(TM) Plus Graphics 645
ggml_metal_init: default.metallib not found, loading from source
2023/11/09 15:39:17 llama.go:399: signal: segmentation fault
2023/11/09 15:39:17 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/09 15:39:17 llama.go:473: llama runner stopped successfully
@jackiezhangcn commented on GitHub (Nov 24, 2023):

When I run codellama, I have the same issue.

@erickhavel commented on GitHub (Nov 24, 2023):

MacOS here (Air M1, 16GB).

$ ollama run llama2-uncensored
>>> hi
Error: llama runner process has terminated

server.log (dates censored, ironically):

[GIN] 2023/11/2X XX03:24:55 | 200 |     106.959µs |       127.0.0.1 | HEAD     "/"
2023/11/2X XX:24:58 llama.go:420: starting llama runner
2023/11/2X XX:24:58 llama.go:478: waiting for llama runner to start responding
2023/11/2X XX:24:58 llama.go:435: signal: segmentation fault
2023/11/2X XX:24:58 llama.go:443: error starting llama runner: llama runner process has terminated
2023/11/2X XX:24:58 llama.go:509: llama runner stopped successfully
[GIN] 2023/11/2X XX03:24:58 | 500 |    120.5145ms |       127.0.0.1 | POST     "/api/generate"

It was working fine a couple of weeks ago. I also ran ollama pull llama2-uncensored, no change.

$ ollama list
NAME                    	SIZE  	MODIFIED
llama2-uncensored:latest	3.8 GB	14 minutes ago

@Brockmerkwan commented on GitHub (Dec 12, 2023):

I am just getting the llama runner process has terminated error as well. I don't have logs. I am on a Mac mini, the 8 GB version, but I have had success running it before; I was trying to run mistral.
[screenshot attached to the original comment]

@EdByrnee commented on GitHub (Dec 19, 2023):

I am just getting the llama runner process has terminated error as well. I don't have logs. I am on a Mac mini, the 8 GB version, but I have had success running it before; I was trying to run mistral.

Is this because we need more RAM?

@wyy511511 commented on GitHub (Jul 11, 2024):

Is this because we need more RAM?

I think so.
[screenshot: memory usage graph]

When it reached the max (24 GB), it was immediately killed and gave me the error (Error: llama runner process has terminated: signal: killed).
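
A signal: killed exit usually means the operating system's out-of-memory killer stopped the runner because the model did not fit in available RAM. On Linux this can be confirmed with standard kernel-log commands (not Ollama-specific):

  sudo dmesg | grep -i 'out of memory'
  journalctl -k | grep -i oom

The practical fix is more free memory, or a smaller model or quantization; as the logs earlier in the thread show, even a 7B Q4 model wants several gigabytes of RAM on its own.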

@marvinHC54654g commented on GitHub (Jul 26, 2024):

error: llama runner process has terminated: signal: killed. What does this mean?
