[GH-ISSUE #7266] Windows ARM64 fails when loading model, error code 0xc000001d #4616

Open
opened 2026-04-12 15:31:58 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @mikechambers84 on GitHub (Oct 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7266

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I installed the latest Ollama for Windows (ARM64 build) on my 2023 Windows Dev Kit, which has an 8-core ARM processor, a Snapdragon 8cx Gen 3. It's running Windows 11 Pro.

I can pull models, but when I go to run them, I get an error. It doesn't matter which model I run; I've tried several. Here's an example.

```
C:\Users\Mike Chambers>ollama pull gemma2:2b
pulling manifest
pulling 7462734796d6... 100% ▕████████████████████████████████████████████████████████▏ 1.6 GB
pulling e0a42594d802... 100% ▕████████████████████████████████████████████████████████▏  358 B
pulling 097a36493f71... 100% ▕████████████████████████████████████████████████████████▏ 8.4 KB
pulling 2490e7468436... 100% ▕████████████████████████████████████████████████████████▏   65 B
pulling e18ad7af7efb... 100% ▕████████████████████████████████████████████████████████▏  487 B
verifying sha256 digest
writing manifest
success

C:\Users\Mike Chambers>ollama run gemma2:2b
Error: llama runner process has terminated: exit status 0xc000001d

C:\Users\Mike Chambers>
```

OS

Windows

GPU

No response

CPU

Other

Ollama version

0.3.13

[server.log](https://github.com/user-attachments/files/17448637/server.log)

GiteaMirror added the bug and windows labels 2026-04-12 15:31:58 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 19, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Author
Owner

@mikechambers84 commented on GitHub (Oct 20, 2024):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.

Edited original post and added at the end.

Author
Owner

@rick-github commented on GitHub (Oct 20, 2024):

Unfortunately the logs don't show anything interesting. Please add `OLLAMA_DEBUG=1` to the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows) and try again.

Author
Owner

@dhiltgen commented on GitHub (Oct 22, 2024):

We've been primarily focused on the new Snapdragon X Elite CPUs, so perhaps we're compiling the binary with features enabled that aren't supported on the Snapdragon 8cx Gen 3.

@mikechambers84 if you have the ability to build from source, https://github.com/ollama/ollama/blob/main/llama/llama.go#L53-L54 might need some defines turned off.

https://github.com/ollama/ollama/blob/main/docs/development.md#windows-arm64-1
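The defines dhiltgen points at control which ARM64 instruction-set extensions the runner binary is compiled with. The sketch below is purely illustrative (the variant and extension names are made up; the real flags live at the llama.go lines linked above): it shows the general idea of falling back to a less-demanding build when the host CPU lacks an extension, which is what a fix for the 8cx Gen 3 would amount to.

```go
package main

import "fmt"

// variant is a hypothetical runner build and the ARM64 extensions
// it was compiled to require. Names are illustrative only.
type variant struct {
	name     string
	requires []string
}

// pick walks from the most- to least-demanding variant and returns the
// first one whose required extensions the host CPU all supports.
func pick(variants []variant, host map[string]bool) string {
	for _, v := range variants {
		ok := true
		for _, f := range v.requires {
			if !host[f] {
				ok = false
				break
			}
		}
		if ok {
			return v.name
		}
	}
	return "generic (baseline ARMv8.0)"
}

func main() {
	variants := []variant{
		{"runner-sve-i8mm", []string{"sve", "i8mm"}},
		{"runner-dotprod", []string{"dotprod"}},
	}
	// Illustrative assumption: an 8cx Gen 3-class core supports the
	// dot-product extension but not SVE or i8mm.
	host := map[string]bool{"dotprod": true}
	fmt.Println(pick(variants, host))
	// Prints: runner-dotprod
}
```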

Author
Owner

@aliuq commented on GitHub (Oct 25, 2024):

same issue


Reference: github-starred/ollama#4616