[GH-ISSUE #12622] Ollama fails to detect GPU after upgrading to 0.12.5 (WSL) #8378

Closed
opened 2026-04-12 21:01:24 -05:00 by GiteaMirror · 3 comments

Originally created by @Railway9784 on GitHub (Oct 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12622

What is the issue?

After upgrading Ollama to 0.12.5, `ollama serve` no longer detects the GPU and runs CPU-only. Downgrading to 0.12.3 fixes the issue.

In both versions `OLLAMA_LLM_LIBRARY` is unset.

Relevant log output


OS

WSL2

GPU

Nvidia

CPU

AMD

Ollama version

0.12.5

Debug logs:

- 0.12.5 log with `OLLAMA_DEBUG=2`: [ollama.v0.12.5.log](https://github.com/user-attachments/files/22873999/ollama.v0.12.5.log)
- 0.12.3 log with `OLLAMA_DEBUG=2`: [ollama.v0.12.3.log](https://github.com/user-attachments/files/22873998/ollama.v0.12.3.log), showing the GPU properly detected
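As general background (not from the thread): before attributing missing GPU detection to Ollama itself, it can help to confirm the NVIDIA driver is visible inside WSL2 at all. `nvidia-smi` and the `/usr/lib/wsl/lib` driver path are the standard WSL2 locations, but treat the exact paths as assumptions for other setups:

```shell
# Sanity-check the WSL2 CUDA stack before debugging the ollama probe.
# WSL2 exposes the host NVIDIA driver libraries under /usr/lib/wsl/lib;
# nvidia-smi talks to that driver.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "nvidia-smi not found: host driver not exposed to WSL2"
fi

# Listing libcuda confirms the driver libraries ollama would load exist.
ls /usr/lib/wsl/lib/libcuda* 2>/dev/null || echo "no WSL libcuda found"
```

If either check fails, the problem is in the WSL/driver setup rather than the Ollama version.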
GiteaMirror added the bug label 2026-04-12 21:01:24 -05:00
@rick-github commented on GitHub (Oct 15, 2025):

```
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:329: GGML_ASSERT(ggml_cuda_has_arch(info.devices[id].cc) && "ggml was not compiled with support for this arch") failed
```

Try limiting the GPU probe to just v12 by setting `OLLAMA_LLM_LIBRARY=cuda_v12`.
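For reference, a minimal sketch of applying the suggested workaround when running the server directly in a WSL2 shell (the log filename is illustrative; under systemd the variable would instead go in a service override file):

```shell
# Restrict the GPU probe to the CUDA v12 backend, as suggested above,
# and capture a verbose log like the ones attached to this issue.
export OLLAMA_LLM_LIBRARY=cuda_v12
export OLLAMA_DEBUG=2

# Guarded so the snippet is a no-op on machines without ollama installed;
# the server writes its log to stderr.
if command -v ollama >/dev/null 2>&1; then
    ollama serve 2> ollama-cuda12.log
fi
```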
@Railway9784 commented on GitHub (Oct 15, 2025):

Set `OLLAMA_LLM_LIBRARY=cuda_v12` and still no GPU detection. Here is the log: [0.12.5-cuda12.log](https://github.com/user-attachments/files/22936531/0.12.5-cuda12.log)
@Railway9784 commented on GitHub (Oct 17, 2025):

After updating to 0.12.6, the GPU is properly detected. Thank you!
Reference: github-starred/ollama#8378