[GH-ISSUE #1877] CUDA error 999 #26835

Closed
opened 2026-04-22 03:30:33 -05:00 by GiteaMirror · 17 comments

Originally created by @pierreuuuuu on GitHub (Jan 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1877

Originally assigned to: @dhiltgen on GitHub.

Hello,
I'm sorry to reopen a ticket on this issue, but I'm still facing the problem.
I've updated ollama to v0.1.19, but I'm getting what I believe is the same issue as #1838 and #1865.
I have a GTX 950M (maybe it's too old ^^'), CUDA 12.3, Nvidia driver 545.23.08, Ubuntu 22.04.3.

My logs:
[debug_logs_0_1_19.txt](https://github.com/jmorganca/ollama/files/13878904/debug_logs_0_1_19.txt)

Thanks for reading, and thank you for the reactiveness on that issue :) !

GiteaMirror added the nvidia label 2026-04-22 03:30:33 -05:00

@dhiltgen commented on GitHub (Jan 9, 2024):

The GTX 950 is a Compute Capability 5.2 card, which is not currently supported by our build configuration of the CUDA libs. We just merged a change to correctly detect a minimum 6.0 compute capability and fall back to CPU mode for older cards, but I'm guessing you picked up a pre-release build of 0.1.19 before that fix was merged. If you grab the latest pre-release build of 0.1.19 it should have that fix and fall back to CPU gracefully without crashing.

~~I don't believe we currently have an issue tracking the feature request for CUDA support for 5.2 cards such as yours. Please go ahead and file one.~~ Let's use https://github.com/jmorganca/ollama/issues/1865 to track it.

https://developer.nvidia.com/cuda-gpus
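If you want to check what compute capability your card reports locally, recent NVIDIA drivers can print it directly via nvidia-smi (a generic diagnostic, not something ollama provides; the compute_cap query field only exists on reasonably new drivers):

```
# Print the name and compute capability of each detected GPU.
# On older drivers that lack the compute_cap field, use the table at
# https://developer.nvidia.com/cuda-gpus instead.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```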


@dhiltgen commented on GitHub (Jan 10, 2024):

0.1.19 is now out and should resolve the crash by falling back to CPU. We'll track enabling CUDA support on these older GPUs with #1865

If you're still seeing crashes for any reason on this card, please re-open with updated server logs from the 0.1.19 release.


@sonovice commented on GitHub (Jan 10, 2024):

Hi there, I am using an RTX 3090 on Linux (x64, Kernel v6.6.6) with Ollama v0.1.19 and run into the same error with every model that I've tried.

[Here is my log.txt](https://github.com/jmorganca/ollama/files/13885908/log.txt)


@mattjax16 commented on GitHub (Jan 20, 2024):

Same here on an RTX 3080, but it works with my 3060 Ti.


@dhiltgen commented on GitHub (Jan 20, 2024):

Relevant excerpt from the log (v0.1.18):

```
Jan 10 10:46:43 pop-os ollama[2092143]: 2024/01/10 10:46:43 gpu.go:84: CUDA Compute Capability detected: 8.6
```

```
Jan 10 10:46:44 pop-os ollama[2092143]: CUDA error 999 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: unknown error
Jan 10 10:46:44 pop-os ollama[2092143]: current device: 203949216
Jan 10 10:46:44 pop-os ollama[2092143]: Lazy loading /tmp/ollama4149470556/cuda/libext_server.so library
Jan 10 10:46:44 pop-os ollama[2092143]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:495: !"CUDA error"
Jan 10 10:46:44 pop-os ollama[2092378]: SIGABRT: abort
Jan 10 10:46:44 pop-os ollama[2092378]: PC=0x71a5fcc969fc m=37 sigcode=18446744073709551610
Jan 10 10:46:44 pop-os ollama[2092378]: signal arrived during cgo execution
Jan 10 10:46:44 pop-os ollama[2092378]: goroutine 53 [syscall]:
Jan 10 10:46:44 pop-os ollama[2092378]: runtime.cgocall(0x9c2f70, 0xc0003443d0)
Jan 10 10:46:44 pop-os ollama[2092378]: #011/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0003443a8 sp=0xc000344370 pc=0x42918b
Jan 10 10:46:44 pop-os ollama[2092378]: github.com/jmorganca/ollama/llm._Cfunc_dynamic_shim_llama_server_init({0x71a50c001e40, 0x71a4f8dfa2d0, 0x71a4f8deca80, 0x71a4f8df0270, 0x71a4f8e02840, 0x71a4f8df78f0, 0x71a4f8df0430, 0x71a4f8decb00, 0x71a4f8dfdad0, 0x71a4f8dfd680, ...}, ...)
```

@dhiltgen commented on GitHub (Jan 20, 2024):

I don't think we've made any changes in [0.1.21](https://github.com/jmorganca/ollama/releases/tag/v0.1.21) that will impact this defect, but let us know if you see any change in behavior.

Also, as a workaround until we figure out what's causing the CUDA error, you can force it to use the CPU by setting OLLAMA_LLM_LIBRARY to one of the cpu variants. Instructions are located [here](https://github.com/jmorganca/ollama/blob/main/docs/troubleshooting.md#llm-libraries).
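As a rough illustration of that workaround (the exact variant name depends on your build; cpu_avx2 here is an assumption matching a CPU with AVX2 support, so check the troubleshooting doc above for the list your install actually ships):

```
# Force the CPU runner instead of the CUDA one for this server process.
# "cpu_avx2" is an example variant name, not necessarily the one your build has.
OLLAMA_LLM_LIBRARY=cpu_avx2 ollama serve
```

If ollama runs as a systemd service, the variable would instead need to go into the unit's environment (e.g. via `systemctl edit ollama` and an `Environment=` line).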


@pierreuuuuu commented on GitHub (Jan 21, 2024):

I tested 0.1.21 with mistral (I have a GTX 950M), and now the log messages are more explicit:

"gpu.go:140: INFO CUDA GPU is too old. Falling back to CPU mode. Compute Capability detected: 5.0"

Only the truth hurts ^^

But it automatically switches to the CPU; I don't have to set the OLLAMA_LLM_LIBRARY variable for the model to work.

My complete logs:
[logs_1877.txt](https://github.com/jmorganca/ollama/files/14002848/logs_1877.txt)


@dhiltgen commented on GitHub (Jan 22, 2024):

@pierreuuuuu we're close to having support for 5.0+ cards - keep an eye on #2116


@dhiltgen commented on GitHub (Jan 22, 2024):

```
/**
 * This indicates that an unknown internal error has occurred.
 */
cudaErrorUnknown                      =    999,
```

@sonovice from your log, it doesn't look like you're in a WSL2 setup. Is that correct? This error code is generic, which makes it a little difficult to understand why CUDA is having problems connecting to your card. Do other GPU-based apps work for you? Are there any interesting errors related to the GPU in other logs (dmesg, /var/log/*)? Are there any other aspects of your configuration that are notable/unique we should know about?

@mattjax16 can you confirm your 3080 failure is the same `CUDA error 999`? Can you share your logs as well?

If these are in fact WSL2 systems, one other possible explanation might be a mistaken driver install in the WSL2 setup. According to the [CUDA WSL2 docs](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2), you're not supposed to install the Linux driver, as they have wired up a pass-through model for WSL2, but it's possible to accidentally install the driver and cause things not to work.
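A few generic commands that can help answer those questions (standard NVIDIA/Linux diagnostics, not ollama-specific):

```
# Confirm the driver and NVML stack can see the card at all.
nvidia-smi

# Look for kernel-level GPU errors (Xid messages, module init failures).
sudo dmesg | grep -iE 'nvrm|nvidia'

# On WSL2 the userspace driver should come from the Windows host; a
# distro-installed libcuda/libnvidia-ml inside the guest can break init.
ldconfig -p | grep -E 'libcuda|libnvidia-ml'
```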


@mattjax16 commented on GitHub (Jan 22, 2024):

@dhiltgen I am on WSL 2 and I will post the logs when I get home if I can reproduce. However, I lost the entire Windows image when I went and tried to install tux OS on a secondary drive to try it out there (I ended up wiping all my drives because it never gave a warning that it would begin setup and didn't let me manually partition or even choose which drive it's installed on). If I can reproduce on the new Windows install when I get home, I'll post the logs!


@mattjax16 commented on GitHub (Jan 23, 2024):

So I managed to get it working fine on WSL on a fresh Windows install with my 3060. I will now try it on the machine with the 3080, and also test to see if there are any differences between a native WSL install vs Docker.


@dhiltgen commented on GitHub (Jan 26, 2024):

Based on [this comment](https://github.com/ollama/ollama/issues/1991#issuecomment-1902710497) it sounds like this may be the result of mismatched driver and CUDA libraries. If you're seeing this CUDA error 999 crash, please check your driver/library versions.
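A quick way to sanity-check for that kind of mismatch (generic commands; the library paths below are the Debian/Ubuntu layout and may differ on other distros):

```
# The kernel module and the userspace CUDA/NVML libraries must come
# from the same driver release.
cat /proc/driver/nvidia/version
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Userspace library versions actually installed (paths assume Debian/Ubuntu).
ls /usr/lib/x86_64-linux-gnu/libcuda.so.* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.*
```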


@dhiltgen commented on GitHub (Feb 19, 2024):

If folks are still seeing this, please comment and I'll re-open.


@Wanhack commented on GitHub (Mar 11, 2024):

Running Ollama in a Docker container in an Ubuntu VM on Proxmox. I am able to use the GPU inside the Ubuntu VM with no issues (I ran hashcat -b and it was able to use the GPU), but I'm getting an "unable to load CUDA management library" error. Full error:

```
time=2024-03-11T13:14:33.736Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-11T13:14:33.737Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-11T13:14:33.739Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]"
time=2024-03-11T13:14:33.744Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999
time=2024-03-11T13:14:33.744Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-11T13:14:33.744Z level=INFO source=routes.go:1042 msg="no GPU detected"
```

Any idea how to fix this issue? I am using Nvidia driver 550.54.14 with CUDA 12.4 based on the nvidia-smi output. Using a 4060 Ti and a Ryzen 7950X.
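One way to narrow down where NVML is failing in a setup like this is to compare the VM and the container (the container name "ollama" below is just an example, and this assumes the container was started with GPU access, e.g. --gpus=all):

```
# On the Ubuntu VM itself:
nvidia-smi

# Inside the running ollama container (container name is an example):
docker exec -it ollama nvidia-smi
```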


@dhiltgen commented on GitHub (Mar 11, 2024):

@Wanhack others have reported that running `nvidia-modprobe -u` on your host may resolve the issue (might require a reboot).


@navr32 commented on GitHub (Dec 31, 2024):

For me I had to do:

```
sudo nvidia-modprobe -u
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
```

@Ghost3972 commented on GitHub (Feb 20, 2025):

> For me I had to do:
>
> ```
> sudo nvidia-modprobe -u
> sudo rmmod nvidia_uvm
> sudo modprobe nvidia_uvm
> ```

It works, thank you. After running the first command and rebooting, it correctly recognized and used the GPU.

Reference: github-starred/ollama#26835