[GH-ISSUE #5970] run glm4 Error: llama runner process has terminated: signal: aborted (core dumped) #3731

Closed
opened 2026-04-12 14:32:32 -05:00 by GiteaMirror · 9 comments

Originally created by @x-future on GitHub (Jul 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5970

Originally assigned to: @dhiltgen on GitHub.

Error: llama runner process has terminated: signal: aborted (core dumped)

ollama run glm4

pulling manifest
pulling b506a070d115... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 5.5 GB
pulling e7e7aebd710c... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 137 B
pulling e4f0dc83900a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 6.5 KB
pulling 4134f3eb0516... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 81 B
pulling ca0dd08dd282... 100% ▕█████████████████████████████████████████████████████████████████████████████████████▏ 489 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: signal: aborted (core dumped)

system info: LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
gpu info:
[GPU info screenshot: https://github.com/user-attachments/assets/accf2f79-e24b-4eb0-a094-525db8c31f96]

GiteaMirror added the needs more info and bug labels 2026-04-12 14:32:32 -05:00

@rick-github commented on GitHub (Jul 26, 2024):

Server logs would help with diagnosis.
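On a default Linux install the server runs as a systemd unit named ollama, so the logs being asked for can usually be pulled with journalctl. A minimal sketch, assuming the standard install script set up the service:

journalctl -u ollama --no-pager    # dump the full service log
journalctl -u ollama -f            # follow the log live while reproducing the error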


@chrisbward commented on GitHub (Jul 26, 2024):

Also hitting this after leaving the machine idle for some time. Restarting the service didn't help; the machine had not been woken from a sleep state.


@rick-github commented on GitHub (Jul 26, 2024):

Server logs would help with diagnosis.


@chrisbward commented on GitHub (Jul 26, 2024):

How would I go about obtaining those? Is there a dedicated log just for ollama?

Also of note, my issue arises when I run ollama run codegeex4:9b-all-fp16 but not ollama run llama3.1:8b-instruct-fp16.


@dhiltgen commented on GitHub (Jul 26, 2024):

This might be resolved by following the steps in the troubleshooting guide for NVIDIA drivers unloading: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-nvidia-troubleshooting

If not, please share your server logs so we can see why it ran into problems.
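The linked section covers the NVIDIA UVM kernel module getting into a bad state (commonly after suspend/resume). A rough sketch of the module reload it describes, assuming nothing else is holding the GPU:

sudo rmmod nvidia_uvm && sudo modprobe nvidia_uvm    # reload the module; fails if a process still has it open

If the module is busy, a reboot is the blunt fallback.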


@rick-github commented on GitHub (Jul 26, 2024):

Getting logs is covered at the top of the page that Daniel linked.
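Where the logs live depends on the install type; hedged pointers, reconstructed from the troubleshooting doc (the container name ollama below is an assumption):

journalctl -u ollama --no-pager    # Linux systemd service
docker logs ollama                 # container install; "ollama" is the assumed container name
cat ~/.ollama/logs/server.log      # macOS app log file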


@chrisbward commented on GitHub (Jul 26, 2024):

journalctl -u ollama comes up empty; the last log messages were from July 5th. I can confirm my drivers are loaded fine, as I can run inference on the other model and in other apps.

➜  ~ ollama -v    
ollama version is 0.1.47
➜  ~ nvidia-smi
Fri Jul 26 20:17:17 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090 Ti     Off | 00000000:01:00.0  On |                    0 |
| 33%   52C    P5              33W / 450W |   1510MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
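An empty journalctl usually means the server isn't logging to that unit. One way to rule this out and capture logs directly is to run the server in the foreground; a sketch, assuming a systemd install that can be stopped:

systemctl status ollama       # confirm the service is actually running under systemd
sudo systemctl stop ollama    # free the port
ollama serve                  # run in the foreground; logs print straight to the terminal
# then, in a second terminal, reproduce the failure:
ollama run codegeex4:9b-all-fp16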


@chrisbward commented on GitHub (Jul 26, 2024):

Okay, upgrading ollama has resolved this issue for me:

➜  ~ ollama -v
ollama version is 0.3.0
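For reference, upgrading on Linux is the same as installing: re-running the official script replaces the binary with the latest release. A sketch, per the standard install instructions:

curl -fsSL https://ollama.com/install.sh | sh    # download and install the latest release
ollama -v                                        # confirm the new version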

@x-future commented on GitHub (Jul 29, 2024):

Thanks, upgrading ollama has resolved this issue for me too.
~# ollama -v
ollama version is 0.3.0
