[GH-ISSUE #4675] phi3: Error: llama runner process has terminated: exit status 0xc0000409 #2940

Closed
opened 2026-04-12 13:18:43 -05:00 by GiteaMirror · 7 comments

Originally created by @FreemanFeng on GitHub (May 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4675

What is the issue?

ollama run phi3:medium-128k
ollama run phi3:3.8-mini-128k-instruct-q4_0

Both of the models above fail with:
Error: llama runner process has terminated: exit status 0xc0000409
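For context: on Windows, exit status 0xc0000409 is the NTSTATUS code STATUS_STACK_BUFFER_OVERRUN, which the C runtime also reports for fail-fast aborts, so the llama runner subprocess crashed rather than exiting with a normal error. A minimal sketch (the lookup table below is an illustrative subset, not part of Ollama) of decoding the hex status Go's exec package reports:

```python
# Map a few Windows NTSTATUS exit codes, as printed by Go's exec package
# (e.g. "exit status 0xc0000409"), to their symbolic names.
# This table is an illustrative subset, not an exhaustive list.
NTSTATUS_NAMES = {
    0xC0000005: "STATUS_ACCESS_VIOLATION",
    0xC0000135: "STATUS_DLL_NOT_FOUND",
    0xC0000409: "STATUS_STACK_BUFFER_OVERRUN",  # also raised by fail-fast aborts
}

def describe_exit_status(status: str) -> str:
    """Translate a hex exit status string into an NTSTATUS name."""
    code = int(status, 16)
    return NTSTATUS_NAMES.get(code, f"unknown NTSTATUS 0x{code:08x}")

print(describe_exit_status("0xc0000409"))  # STATUS_STACK_BUFFER_OVERRUN
```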

OS

Windows

GPU

Other

CPU

Intel

Ollama version

0.1.38

GiteaMirror added the bug label 2026-04-12 13:18:43 -05:00

@SleeplessBegonia commented on GitHub (May 28, 2024):

Me too.


@timfpark commented on GitHub (May 28, 2024):

Also seeing something similar on an Apple M1 Max with Sonoma 14.5:

❯ ollama run phi3:14b-medium-128k-instruct-q4_0
Error: llama runner process has terminated: signal: abort trap error:done_getting_tensors: wrong number of tensors; expected 245, got 243
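The "wrong number of tensors; expected 245, got 243" part of this error typically means the GGUF blob on disk does not contain the tensors the runner expects, e.g. an incomplete or outdated download. As a rough illustration (not Ollama's actual loader), the tensor count sits in the fixed GGUF header, per the GGUF spec: 4-byte magic `GGUF`, a uint32 version, then a uint64 tensor count and uint64 metadata-KV count, all little-endian:

```python
import struct

def gguf_tensor_count(header: bytes) -> int:
    """Parse the tensor count from the fixed-size GGUF file header.

    GGUF layout (little-endian): 4-byte magic b'GGUF', uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", header, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return n_tensors

# Hypothetical header bytes for illustration (243 tensors, 18 KV pairs):
fake = struct.pack("<4sIQQ", b"GGUF", 3, 243, 18)
print(gguf_tensor_count(fake))  # 243
```

In practice you would pass the first 24 bytes of a model blob from Ollama's local store to such a check.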


@leuchtetgruen commented on GitHub (Jun 2, 2024):

Same here


@akhilsbehl commented on GitHub (Jun 3, 2024):

I'm seeing the same error. Looking at journalctl, it says 'phi3' is not a recognised model name, which is an issue that has been seen before. Funnily enough, phi3 mini was working earlier and reported its model family as llama, but that has also stopped working now.


@akhilsbehl commented on GitHub (Jun 3, 2024):

Also, I've been seeing this issue with 1.38, 1.39, and 1.41 as of today.


@hujunyao commented on GitHub (Jun 4, 2024):

I got the same error when running ollama run hhao/openbmb-minicpm-llama3-v-2_5:latest; ollama version 1.41, on Windows.


@jmorganca commented on GitHub (Jun 9, 2024):

Hi there, this should be fixed in the latest version of Ollama: https://ollama.com/download

@hujunyao RE Minicpm - will track that here: https://github.com/ollama/ollama/issues/4900

Thanks for the issue!

Reference: github-starred/ollama#2940