[GH-ISSUE #4824] Error: llama runner process has terminated: signal: aborted (core dumped) #3048

Closed
opened 2026-04-12 13:28:30 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @revalue-it on GitHub (Jun 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4824

What is the issue?

When I run MiniCPM-Llama3-V-2_5, I get the error "Error: llama runner process has terminated: signal: aborted (core dumped)". This happens on both version 0.1.39 and 0.1.41.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.41

GiteaMirror added the bug label 2026-04-12 13:28:30 -05:00
Author
Owner

@imaxiaolong commented on GitHub (Jun 6, 2024):

Me Too

OS

macOS

GPU & CPU

M3

Ollama version

0.1.41

Author
Owner

@henryclw commented on GitHub (Jun 7, 2024):

May I ask how you pulled and ran MiniCPM-Llama3-V-2_5? Would you mind sharing the specific model name?

Author
Owner

@revalue-it commented on GitHub (Jun 7, 2024):

> May I ask how you pulled and ran MiniCPM-Llama3-V-2_5? Would you mind sharing the specific model name?

Sure, you can pull it by running `ollama pull hhao/openbmb-minicpm-llama3-v-2_5:q4_K_M`, but there is no official merge request adapting this model, even in 0.1.41. You can compile the project yourself; refer to https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5. If you compile it successfully on an NVIDIA GPU, please share your method.
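The two routes described above can be sketched as shell commands. The branch name comes from the linked URL; the build steps shown are the generic Go build that Ollama used at the time and may differ from the fork's own README, so treat them as an assumption:

```shell
# Route 1: pull the community build mentioned above
ollama pull hhao/openbmb-minicpm-llama3-v-2_5:q4_K_M

# Route 2: build the OpenBMB fork that adds MiniCPM-V 2.5 support
# (branch name from the linked URL; build steps are assumed, check the fork's README)
git clone -b minicpm-v2.5 https://github.com/OpenBMB/ollama.git
cd ollama
go generate ./...
go build .
./ollama serve
```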

Author
Owner

@TonyHuang6666 commented on GitHub (Jun 7, 2024):

I have also been wrestling with this model for the past two days. The end result: on both WSL2 and Ubuntu 22.04 it can handle text, but it cannot analyze images. As soon as an image is loaded, the model hangs, holding GPU memory but doing no work. Whether I use the hhao repository you mentioned, or download the original author's gguf model and import it with a Modelfile, the result is the same 😭. So far I have only seen it run successfully on Mac; neither Linux nor Docker has worked for me.
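For reference, importing a downloaded gguf with a Modelfile as described above looks roughly like this. This is a minimal sketch: the file name is hypothetical, and the template is the standard Llama 3 chat format that MiniCPM-Llama3-V-2.5 is based on, abbreviated here:

```
# Minimal Modelfile sketch (gguf file name is hypothetical)
FROM ./minicpm-llama3-v-2_5-q4_k_m.gguf

# Llama 3 chat format, abbreviated; verify against the model card
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop "<|eot_id|>"
```

It would then be imported with `ollama create minicpm-v2.5 -f Modelfile`.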

Author
Owner

@hmd78 commented on GitHub (Jun 8, 2024):

I have the same problem when pulling qwen2. Any ideas?

Author
Owner

@jmorganca commented on GitHub (Jun 9, 2024):

Closing for https://github.com/ollama/ollama/issues/4900

Author
Owner

@QCadjunct commented on GitHub (Jul 25, 2024):

root@TwinTower1:/mnt/c/Users/thehitman# ollama run mistral-nemo

verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: signal: aborted

root@TwinTower1:/mnt/c/Users/thehitman# ollama list
NAME                 ID            SIZE    MODIFIED
mistral-nemo:latest  4b300b8c6a97  7.1 GB  2 minutes ago
llama3.1:latest      a23da2a80395  4.7 GB  42 hours ago
phi3:14b             1e67dff39209  7.9 GB  42 hours ago
gemma2:27b           53261bc9c192  15 GB   9 days ago
codestral:latest     fcc0019dcee9  12 GB   6 weeks ago
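When the runner aborts like this, the underlying llama.cpp error is usually visible in the server log rather than the client output. A few standard diagnostic commands (the systemd unit name `ollama` and container name `ollama` are the defaults and may differ on your setup):

```shell
# Confirm the client and server versions match
ollama --version

# On Linux with systemd, the runner's crash output lands in the service log
journalctl -u ollama -e --no-pager

# If running in Docker, the same output goes to the container log
docker logs ollama
```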

Author
Owner

@calonye commented on GitHub (Aug 7, 2024):

Same issue here.
https://ollama.com/aiden_lu/minicpm-v2.6
`ollama run aiden_lu/minicpm-v2.6:Q4_K_M`

Reference: github-starred/ollama#3048