[GH-ISSUE #6882] Core Dump | applicationError #4352

Closed
opened 2026-04-12 15:17:24 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @ghost on GitHub (Sep 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6882

What is the issue?

I have been trying to do something with deepseek-coder-v2, but it takes constant prompt revisions, and then it core dumps and I've got to start all over trying to figure out how to get it back to the understanding it had before it died.

None of my GPUs are overly taxed and they have plenty of VRAM available when it dies; system memory is also nowhere near capacity.

The system/kernel log shows `(ollama_llama_se)` of user XXX terminated abnormally with signal 6/ABRT, while in the terminal I get `applicationError: an unknown error was encountered while running the model`.

As an additional side note, I see people on here trying basic word problems with models to test logic, which touches on why this crash is extra frustrating. I don't know how they expect this to work at all, for the same reason that starting over after crashes is extremely frustrating prompt-wise. A lengthy, detailed prompt seems to make things worse, whereas too short a prompt yields the same useless answers.

For example, with all the models I've tested, if I say (oversimplifying here) "I have three apples, I like cloudy days, James Hetfield's songwriting is as bad as Donald Trump's hairstyle. How many apples do I have?", the model will respond with something like "It's sad you don't like Donald Trump's personal style or the musical stylings of Metallica... perhaps I could help you find some music you do like." Facepalm, ask again, CRASH... start over, CRASH, start over... never mind asking anything technical. Given this, it seems insane to try the "a train leaves moving east at 30 kph" style word questions, as the model is at the back of the class eating paste and will reply "I like trains, they go choo choo." So perhaps this is a "featureRANTquest" where there needs to be some kind of session persistence across crashes. At present there is rather literally no way to actually move forward without potentially endless prompt testing, which still has R&D failure baked in.
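Until something like that lands in Ollama itself, the session persistence asked for above can be approximated on the client side. The following is a minimal sketch, assuming the standard Ollama REST API at `localhost:11434/api/chat` and a hypothetical `session_history.json` file of my own choosing (not an Ollama feature): it writes the full conversation to disk after every turn, so if the runner core dumps the history can simply be replayed into a fresh session instead of rebuilt from memory.

```python
import json
import urllib.request
from pathlib import Path

HISTORY_FILE = Path("session_history.json")      # hypothetical file, any path works
OLLAMA_URL = "http://localhost:11434/api/chat"   # default Ollama chat endpoint

def load_history():
    """Reload a saved conversation, or start fresh if none exists."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_history(messages):
    """Persist the full message list to disk after every turn."""
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))

def chat_once(messages, model="deepseek-coder-v2"):
    """Send the whole history to /api/chat and return the reply text."""
    payload = json.dumps(
        {"model": model, "messages": messages, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

def ask(prompt):
    """One resumable turn: load history, append, call the model, save."""
    messages = load_history()
    messages.append({"role": "user", "content": prompt})
    reply = chat_once(messages)
    messages.append({"role": "assistant", "content": reply})
    save_history(messages)
    return reply
```

After a crash, restarting the script and calling `ask()` again resends the saved history, so the model picks up roughly where it left off; the crash still happens, but the prompt-rebuilding work doesn't.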

As an additional tidbit, I also read that for some people pushing the questions into llama.cpp directly seems to work, but outside of that you end up with two-thirds of the details ignored and then some irrelevant reply on the last bit of info... perhaps a related issue showing it's not the model (that report was on llama3.1) but something in Ollama?

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.3.8

GiteaMirror added the bug label 2026-04-12 15:17:24 -05:00

@pdevine commented on GitHub (Sep 19, 2024):

Thanks for the issue. This is a dupe of #6715


@ghost commented on GitHub (Sep 19, 2024):

Sorry it's a dupe, I did a search before opening, but as always language can cause misses. Checking #6715 - yup, he doesn't use any of the terms I used when searching, heh.


@ghost commented on GitHub (Sep 20, 2024):

I just wanted to toss in that qwen2.5-coder:7b is doing the same thing. In fact, it's worse.

Reference: github-starred/ollama#4352