[GH-ISSUE #1741] Process 2984649 (ollama-runner) of user 946 dumped core. #26756

Closed
opened 2026-04-22 03:18:47 -05:00 by GiteaMirror · 3 comments

Originally created by @Alexandrsv on GitHub (Dec 29, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1741

Originally assigned to: @dhiltgen on GitHub.

An error occurs when using WebUI and wizardlm-uncensored:latest

https://pastebin.com/cdvxsEQ7

![image](https://github.com/jmorganca/ollama/assets/15097064/a67a51e3-9a0f-4dc9-9db1-8fcfa81a9fdb)

Manjaro Linux
RTX 4090

GiteaMirror added the bug label 2026-04-22 03:18:47 -05:00

@BruceMacD commented on GitHub (Jan 2, 2024):

Hi @Alexandrsv, it looks like you're building from source/pre-release and running Ollama in Docker. Based on the error it could be that Ollama was compiled for the wrong CPU architecture. Are you running this container on an ARM machine such as a Mac or a Raspberry Pi?

Root cause:

```
Dec 30 02:17:31 mj ollama[1322]: llama_new_context_with_model: freq_scale = 1
Dec 30 02:17:32 mj systemd[1]: run-docker-runtime\x2drunc-moby-589bb498255b8b8b5b51dc560730e8ae1e1d5c67bf9ba1ceba8a1c22540085fe-runc.l3kOgV.mount: Deactivated successfully.
Dec 30 02:17:36 mj kernel: ollama-runner[2984649]: segfault at 0 ip 0000000000539be6 sp 00007ffcedbba3e0 error 4 in ollama-runner[408000+175000] likely on CPU 8 (core 2, socket 0)
Dec 30 02:17:36 mj kernel: Code: da fe ff ff 31 f6 49 8d 94 24 80 01 00 00 48 89 df 4c 89 54 24 10 4c 89 5c 24 18 e8 b4 4a fe ff 4c 8b 5c 24 18 4c 8b 54 24 10 <4c> 8b 00 4c 03 43 08 4d 85 e4 74 3d 4d 8d a0 80 01 00 00 45 31 c9
Dec 30 02:17:37 mj systemd[1]: Started Process Core Dump (PID 2984802/UID 0).
Dec 30 02:17:43 mj systemd[1]: run-docker-runtime\x2drunc-moby-589bb498255b8b8b5b51dc560730e8ae1e1d5c67bf9ba1ceba8a1c22540085fe-runc.lVRPYN.mount: Deactivated successfully.
Dec 30 02:17:54 mj plasmashell[2705484]: [2023-12-30 02:17:54.603] [   ] [debug] autosave no need
Dec 30 02:17:59 mj systemd[1]: run-docker-runtime\x2drunc-moby-d3d02aa2aa1b507df5e07243327b2fc89627d937b3b1cf3de4183c4463d4352a-runc.vzn7YS.mount: Deactivated successfully.
Dec 30 02:18:03 mj systemd[1]: run-docker-runtime\x2drunc-moby-589bb498255b8b8b5b51dc560730e8ae1e1d5c67bf9ba1ceba8a1c22540085fe-runc.lZauyG.mount: Deactivated successfully.
Dec 30 02:18:05 mj systemd-coredump[2984803]: Process 2984649 (ollama-runner) of user 946 dumped core.

    Stack trace of thread 2984649:
    #0  0x0000000000539be6 n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0x139be6)
    #1  0x000000000053fd55 n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0x13fd55)
    #2  0x00000000004c0213 n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0xc0213)
    #3  0x000000000046e50c n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0x6e50c)
    #4  0x000000000041c693 n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0x1c693)
    #5  0x00007fe488b58cd0 n/a (libc.so.6 + 0x27cd0)
    #6  0x00007fe488b58d8a __libc_start_main (libc.so.6 + 0x27d8a)
    #7  0x00000000004215ee n/a (/tmp/ollama505739042/llama.cpp/gguf/build/cpu/bin/ollama-runner + 0x215ee)
    ELF object binary architecture: AMD x86-64
```
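For readers hitting a similar crash: one quick way to check for the architecture mismatch suggested above is to compare the host CPU architecture against what the binary and Docker image were built for. This is an editor's sketch; the `ollama` path below is illustrative, not the path from this report.

```shell
# Host architecture: x86_64 on a typical PC, aarch64 on ARM (Mac/Raspberry Pi).
uname -m

# Inspect the binary's ELF target (path is an example; adjust to your install).
# The coredump above reported "ELF object binary architecture: AMD x86-64".
file /usr/local/bin/ollama

# For the Docker case, check which platform the image was pulled for.
docker image inspect ollama/ollama --format '{{.Architecture}}'
```

If `uname -m` and the binary/image architecture disagree, emulation (or a miscompiled source build) is the likely cause of the segfault.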

@dhiltgen commented on GitHub (Jan 27, 2024):

@Alexandrsv we've changed how the LLM library is loaded since you reported this. Could you try to reproduce this on the latest release, 0.1.22, and see if you still have problems?
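For anyone retrying on a newer release as suggested above, a minimal update-and-verify sequence on Linux looks like this (the install script URL is Ollama's standard installer; skip it and pull the image instead if you run Ollama in Docker):

```shell
# Check the currently installed version.
ollama --version

# Reinstall/upgrade to the latest release via the official install script.
curl -fsSL https://ollama.com/install.sh | sh

# Docker users: update the image instead.
docker pull ollama/ollama
```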


@Alexandrsv commented on GitHub (Jan 27, 2024):

I updated Ollama and checked; the problem is fixed.
Thank you


Reference: github-starred/ollama#26756