[GH-ISSUE #5625] gpu discovery crashes on nvidia CC 2.1 GPU on windows 10 #65546

Closed
opened 2026-05-03 21:38:39 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @snufflemarlstar-rg on GitHub (Jul 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5625

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I have repeatedly installed and uninstalled Ollama and searched for advice regarding
"Warning: could not connect to a running Ollama instance" on Windows 10, but I have not found a solution.

2024/07/11 10:49:03 routes.go:1033: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\hp\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\hp\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-11T10:49:03.902+07:00 level=INFO source=images.go:751 msg="total blobs: 0"
time=2024-07-11T10:49:03.905+07:00 level=INFO source=images.go:758 msg="total unused blobs removed: 0"
time=2024-07-11T10:49:03.906+07:00 level=INFO source=routes.go:1080 msg="Listening on 127.0.0.1:11434 (version 0.2.1)"
time=2024-07-11T10:49:03.907+07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
time=2024-07-11T10:49:03.907+07:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
Exception 0xc0000005 0x8 0x1ec23f01c10 0x1ec23f01c10
PC=0x1ec23f01c10
signal arrived during external code execution

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.2.1

GiteaMirror added the bug, needs more info, nvidia, windows labels 2026-05-03 21:38:41 -05:00
Author
Owner

@rick-github commented on GitHub (Jul 12, 2024):

Have you tried setting `OLLAMA_INTEL_GPU=1` in your environment? However, as I understand it, support for Intel GPUs is not as mature as Nvidia, so you might be out of luck with your current hardware platform.

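For the current shell session, a minimal sketch of how that might be tried, reusing the `$env:` style shown later in this thread (the variable name appears in the server config dump above; whether it helps on this hardware is unverified):

```powershell
# Sketch only: enable the experimental Intel GPU path for this session,
# then start the server so the setting is read at GPU discovery time.
$env:OLLAMA_INTEL_GPU = "1"
ollama serve
```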
Author
Owner

@NasonZ commented on GitHub (Jul 19, 2024):

Hey, did you manage to resolve this issue? I'm facing a similar problem: #5625

Author
Owner

@dhiltgen commented on GitHub (Jul 23, 2024):

@snufflemarlstar-rg it's not clear from the logs which GPU discovery code is crashing, so we'll need to turn on debug logging. The simplest approach will likely be: quit Ollama in the tray, then in a PowerShell terminal run:

```powershell
$env:OLLAMA_DEBUG="1"
ollama serve 2>&1 | % ToString | Tee-Object server.log
```

Then share that `server.log`.

Author
Owner

@22878120 commented on GitHub (Oct 11, 2024):

@dhiltgen My apologies for jumping into this issue, but I haven't seen a follow-up since July. I have the same problem. I have an old laptop with an Nvidia GT540M card and an Intel card (disabled). I ran the debug command above, and the log file is attached. Appreciate your assistance...

[server.log](https://github.com/user-attachments/files/17336879/server.log)

Author
Owner

@rick-github commented on GitHub (Oct 11, 2024):

GT540M has a [Compute Capability](https://developer.nvidia.com/cuda-gpus) of 2.1; ollama requires [5.0 or better](https://github.com/ollama/ollama/blob/main/docs/gpu.md#nvidia). In this case, ollama is crashing while initializing the NVML library. You can run ollama on the CPU by setting `OLLAMA_LLM_LIBRARY=cpu` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-windows).

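A minimal PowerShell sketch of the two usual ways to set this, assuming the `$env:` session syntax used elsewhere in this thread and the standard .NET call for a persistent per-user variable:

```powershell
# For the current session only:
$env:OLLAMA_LLM_LIBRARY = "cpu"
ollama serve

# Or persist it as a user-level variable (takes effect in terminals opened
# after the change and after restarting the Ollama app):
[Environment]::SetEnvironmentVariable("OLLAMA_LLM_LIBRARY", "cpu", "User")
```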
Author
Owner

@22878120 commented on GitHub (Oct 11, 2024):

Thank you @rick-github
I added `OLLAMA_LLM_LIBRARY=cpu` as a system variable, but it has no effect. Starting the server using `ollama serve` gives the same error, and Ollama still searches for a GPU.

![Capture](https://github.com/user-attachments/assets/19672db2-c2d1-451e-a465-ee8e0f4ae6c2)

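One quick, hedged check (not from the thread) is whether the variable is actually visible to the shell that launches the server; a system variable set through the Windows UI only appears in terminals opened after the change:

```powershell
# Confirm the variable is present in this session before starting ollama.
Get-ChildItem Env:OLLAMA_LLM_LIBRARY
# or simply:
$env:OLLAMA_LLM_LIBRARY
```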
Author
Owner

@rick-github commented on GitHub (Oct 11, 2024):

My mistake, I thought setting `OLLAMA_LLM_LIBRARY=cpu` would skip the check for GPUs, but that's not the case (although the code contains a `TODO` that might address that). I don't know if this will work, but you could try updating the Nvidia drivers on your machine; that might fix the crash. If that doesn't help (or there are no newer drivers), you could try setting `OLLAMA_SKIP_CUDA_GENERATE=1` and building a custom version of ollama.

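A rough sketch of what that build might look like, assuming the `go generate` / `go build` flow that ollama's development docs described around that time; the exact steps and prerequisites for a given checkout may differ:

```powershell
# Hypothetical sketch, not a verified recipe for this ollama version.
git clone https://github.com/ollama/ollama.git
cd ollama
$env:OLLAMA_SKIP_CUDA_GENERATE = "1"   # skip building the CUDA runner
go generate ./...                      # build the native runners
go build .                             # produce ollama.exe
```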
Author
Owner

@dhiltgen commented on GitHub (Oct 15, 2024):

The discovery code should gracefully detect the incompatibility and continue, but there's a bug in there somewhere leading to a crash. I can't see an obvious cause, so I'll try to set up a test environment to reproduce this.

Author
Owner

@dhiltgen commented on GitHub (Nov 13, 2024):

Release 0.4.1 contains additional debug logging to try to help narrow down where the problem lies.

After updating, quit the tray app, and in a PowerShell terminal:

```powershell
$env:OLLAMA_DEBUG="1"
ollama serve
```

Then share the output.

Author
Owner

@pdevine commented on GitHub (Mar 21, 2025):

I'm going to go ahead and close the issue since it seems pretty stale.

Reference: github-starred/ollama#65546