[GH-ISSUE #2897] Windows preview CUDA 5.2 support #1772

Closed
opened 2026-04-12 11:47:25 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @lyczak on GitHub (Mar 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2897

Originally assigned to: @dhiltgen on GitHub.

I've been trying to get started with the Windows preview version of ollama. However, I'm currently encountering an issue where my GTX 970 is not detected by the software. I've tried updating drivers and updating Windows to no avail. Assuming this is related to old CUDA version (CUDA 5.2) as mentioned in #1865 then it should've been fixed by #2116 but I don't know if this fix has been tested on the Windows preview version of ollama. Poking around in that PR, it seems like the commit which adds support for CUDA 5.0, 7.5, and 8.0 touches gen_common.sh, gen_linux.sh, and gen_windows.ps1 under llm/generate whereas the commit targeting CUDA 5.2 only touches gen_linux.sh. Could this be the source of the issue?

Assuming this was the problem, I was hoping to try using WSL, but unfortunately I'm running Windows Server 2019 and can't install WSL2. I may set up a dual boot with Ubuntu later today to see if my GPU is recognized there. I'm more than happy to help with additional testing, although I don't have time to set up the toolchain and build things myself right this moment. Thanks for helping to maintain this project!

Relevant lines from my `server.log` are as follows:

```
time=2024-03-02T11:54:38.873-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-02T11:54:39.015-05:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library nvml.dll"
time=2024-03-02T11:54:45.186-05:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-02T11:54:45.186-05:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library rocm_smi64.dll"
time=2024-03-02T11:54:45.190-05:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-02T11:54:45.190-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-02T11:54:45.190-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-02T11:54:45.191-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
```
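The log above shows a simple probe-then-fall-back flow: search for each known GPU management library, and run on the CPU if none turns up. A minimal Python sketch of that pattern follows; the library names come from the log, but the function names and search logic are illustrative stand-ins, not ollama's actual implementation:

```python
import os

# Library names taken from the log above; the candidate directories are
# illustrative stand-ins for the paths a real detector would probe.
GPU_LIBRARIES = ["nvml.dll", "rocm_smi64.dll"]

def discover_gpu_libraries(search_dirs):
    """Return the path of every known GPU management library found."""
    found = []
    for lib in GPU_LIBRARIES:
        for d in search_dirs:
            candidate = os.path.join(d, lib)
            if os.path.isfile(candidate):
                found.append(candidate)
    return found

def select_backend(search_dirs):
    """Pick a GPU backend if any library was discovered, else fall back to CPU."""
    libs = discover_gpu_libraries(search_dirs)
    if not libs:
        # mirrors: "GPU not available, falling back to CPU"
        return "cpu"
    return "gpu:" + libs[0]
```

The empty `Discovered GPU libraries: []` lines in the log correspond to `discover_gpu_libraries` returning nothing for both candidate names, which is why the final line reports the CPU fallback.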
GiteaMirror added the bug, nvidia labels 2026-04-12 11:47:25 -05:00

@lyczak commented on GitHub (Mar 3, 2024):

Update on this: I tested ollama on Ubuntu bare metal running on the same hardware and was able to make use of the GTX 970 without issue.

<!-- gh-comment-id:1975277724 -->

@dhiltgen commented on GitHub (Mar 6, 2024):

The driver install should have installed `nvml.dll` into the system. The fact that we're not finding it is a bit odd.

Can you try running the server with `OLLAMA_DEBUG="1"` so we can get a little more information on which paths it's trying, and maybe poke around a bit on your system to see if you can find where that library exists?
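Editorial note: the second half of that suggestion (finding where `nvml.dll` actually lives) can be scripted. A hedged Python sketch that scans `PATH` plus any extra directories you name; the directory choices are a guess at a standard driver layout, not the list ollama itself searches:

```python
import os

def find_dll(name, extra_dirs=()):
    """Search PATH plus any extra directories for a DLL; return every hit."""
    dirs = list(extra_dirs) + os.environ.get("PATH", "").split(os.pathsep)
    hits = []
    for d in dirs:
        if not d:
            continue
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            hits.append(candidate)
    return hits

# On Windows one would typically start with the system directory, e.g.:
# find_dll("nvml.dll", extra_dirs=[r"C:\Windows\System32"])
```

If this turns up no hits anywhere, that would be consistent with the empty `Discovered GPU libraries: []` lines in the log.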

<!-- gh-comment-id:1981343706 -->

@dhiltgen commented on GitHub (Mar 21, 2024):

If you're still having troubles, please grab a debug log so we can investigate.

<!-- gh-comment-id:2012037942 -->

@lyczak commented on GitHub (Mar 21, 2024):

Hi Daniel,

Sorry I haven't gotten back to y'all on this issue. I've been swamped with other work and don't currently have access to the machine I was using to test this. You're welcome to keep this issue closed for now; once I have the time and access to the machine, I'll run some more tests and grab some debugging information for you. Thanks and take care!

<!-- gh-comment-id:2013144838 -->
Reference: github-starred/ollama#1772