[GH-ISSUE #4657] [Windows 10] Error: llama runner process has terminated: exit status 0xc0000139 #28686

Closed
opened 2026-04-22 07:11:25 -05:00 by GiteaMirror · 15 comments

Originally created by @bogdandinga on GitHub (May 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4657

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Steps to reproduce:

  1. Install Ollama on Windows 10
  2. Run: ollama run llama3:70b (the same happens with plain llama3)
  3. Wait for the download to finish
  4. Wait for Ollama to start

Actual results:
Error: llama runner process has terminated: exit status 0xc0000139

  • Using Dependency Walker I see that a lot of DLLs are missing, for example: API-MS-WIN-CORE-APPCOMPAT-L1-1-0.DLL and EXT-MS-ONECORE-APPMODEL-STATEREPOSITORY-CACHE-L1-1-0.DLL
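For reference, the exit statuses seen in this thread are Windows NTSTATUS codes with documented meanings (values from Microsoft's ntstatus.h); a small sketch to decode them:

```python
# Decode the NTSTATUS exit codes mentioned in this thread.
# Values are taken from Microsoft's ntstatus.h reference.
KNOWN_NTSTATUS = {
    0xC0000135: "STATUS_DLL_NOT_FOUND",
    0xC0000139: "STATUS_ENTRYPOINT_NOT_FOUND",  # a DLL was found but lacks an expected export
    0xC0000409: "STATUS_STACK_BUFFER_OVERRUN",  # also raised by fail-fast aborts (__fastfail)
}

def decode_exit_status(status: int) -> str:
    """Map a Windows process exit status to a readable NTSTATUS name."""
    return KNOWN_NTSTATUS.get(status, f"unknown NTSTATUS 0x{status:08X}")

print(decode_exit_status(0xC0000139))  # → STATUS_ENTRYPOINT_NOT_FOUND
```

0xc0000139 (entry point not found) is consistent with the missing-DLL findings from Dependency Walker above.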

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.1.38

GiteaMirror added the bugwindows labels 2026-04-22 07:11:26 -05:00

@barclaybrown commented on GitHub (May 28, 2024):

Seems to occur on all 128k models?

```
(base) C:\Users\barcl>ollama run phi3:3.8-mini-128k-instruct-q4_0
Error: llama runner process has terminated: exit status 0xc0000409
```

@wwlwgo commented on GitHub (May 29, 2024):

> Seems to occur on all 128k models?
>
> ```
> (base) C:\Users\barcl>ollama run phi3:3.8-mini-128k-instruct-q4_0
> Error: llama runner process has terminated: exit status 0xc0000409
> ```

All 128k models need the latest version of Ollama (above 0.1.39). Download the latest version.

@barclaybrown commented on GitHub (May 29, 2024):

Seems to have fixed it for me! Thanks much to the team!


@bogdandinga commented on GitHub (May 30, 2024):

Unfortunately, it still reproduces on my end, even after the new 0.1.39 install :(

Logs attached:
[app.log](https://github.com/ollama/ollama/files/15494248/app.log)
[server.log](https://github.com/ollama/ollama/files/15494249/server.log)

@wwlwgo commented on GitHub (Jun 6, 2024):

A 70B model requires a very large amount of VRAM! Check whether your VRAM is sufficient.

> Unfortunately, it still reproduces on my end, even after the new 0.1.39 install :(
>
> Logs attached: [app.log](https://github.com/ollama/ollama/files/15494248/app.log) [server.log](https://github.com/ollama/ollama/files/15494249/server.log)
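As a rough illustration of the VRAM point above, a back-of-the-envelope sketch (the 20% overhead factor is an assumption, and real usage also depends on context length and KV cache):

```python
def vram_estimate_gib(params_billion: float, bits_per_weight: float,
                      overhead: float = 1.2) -> float:
    """Rough GiB needed just for the model weights, with a fudge factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# llama3:70b at 4-bit quantization: weights alone need on the order of 39 GiB,
# far more than a typical consumer GPU provides.
print(round(vram_estimate_gib(70, 4)))
```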

@mogocat commented on GitHub (Jul 3, 2024):

Same issue


@dhiltgen commented on GitHub (Jul 3, 2024):

If folks are still seeing the failure, please upgrade to the latest version, and if that doesn't resolve it, let's get some debug logs to see what's going wrong.

Quit the Ollama tray app, then in a PowerShell terminal:

```
$env:OLLAMA_DEBUG="1"
& "ollama app"
```

and share the latest server.log

@hljhyb commented on GitHub (Jul 8, 2024):

Same issue

```
ollama run qwen2:0.5b
pulling manifest
pulling 8de95da68dc4... 100% ▕████████████████████████████████████████████████████████▏ 352 MB
pulling 62fbfd9ed093... 100% ▕████████████████████████████████████████████████████████▏ 182 B
pulling c156170b718e... 100% ▕████████████████████████████████████████████████████████▏ 11 KB
pulling f02dd72bb242... 100% ▕████████████████████████████████████████████████████████▏ 59 B
pulling 2184ab82477b... 100% ▕████████████████████████████████████████████████████████▏ 488 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: llama runner process has terminated: exit status 0xc0000139
```

[server.log](https://github.com/user-attachments/files/16123678/server.log)
[app.log](https://github.com/user-attachments/files/16123679/app.log)

@dhiltgen commented on GitHub (Jul 8, 2024):

I think what's probably going on here is that a required C runtime dependency DLL isn't installed. I'll fix our build so it gets statically linked or included as a payload, but until then, as a workaround you should be able to install the VC Redist package from Microsoft.

https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#latest-microsoft-visual-c-redistributable-version
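One quick way to check whether the workaround took effect is to look for the runtime DLLs the VC Redist installs. This is only a sketch: the file names below are the usual MSVC 14.x runtime names, and the default System32 path is assumed.

```python
import os

def missing_vc_runtime_dlls(system32: str = r"C:\Windows\System32") -> list[str]:
    """Return the MSVC runtime DLLs not present in the given directory."""
    needed = ["vcruntime140.dll", "vcruntime140_1.dll", "msvcp140.dll"]
    return [dll for dll in needed if not os.path.exists(os.path.join(system32, dll))]

# An empty list suggests the VC Redist is installed; otherwise install it first.
print(missing_vc_runtime_dlls())
```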

@hljhyb commented on GitHub (Jul 10, 2024):

The issue still exists. I installed this runtime, and I was able to run it before.


@dhiltgen commented on GitHub (Jul 12, 2024):

This should be fixed in 0.2.2 - let us know if you still have any problems after updating to that release once it's out.


@wujianyouhun commented on GitHub (Mar 13, 2025):

```
root@2525a526f7a9:/# ollama --version
ollama version is 0.5.13
root@2525a526f7a9:/# ollama run gemma3:12b
Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade
```

@nameissakthi25 commented on GitHub (Mar 14, 2025):

> root@2525a526f7a9:/# ollama --version
> ollama version is 0.5.13
> root@2525a526f7a9:/# ollama run gemma3:12b
> Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade

Got the same issue for gemma3. Did you find any solution?

@nameissakthi25 commented on GitHub (Mar 14, 2025):

> root@2525a526f7a9:/# ollama --version
> ollama version is 0.5.13
> root@2525a526f7a9:/# ollama run gemma3:12b
> Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade

Upgrading it to 0.6.0 works for me!

@wujianyouhun commented on GitHub (Mar 25, 2025):

> > root@2525a526f7a9:/# ollama --version
> > ollama version is 0.5.13
> > root@2525a526f7a9:/# ollama run gemma3:12b
> > Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade
>
> Upgrading it to 0.6.0 works for me!

Thanks!
Reference: github-starred/ollama#28686