[GH-ISSUE #1649] Llama not using cuda cuBLAS error 13 #62958

Closed
opened 2026-05-03 11:00:49 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @hbqdev on GitHub (Dec 21, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1649

Originally assigned to: @dhiltgen on GitHub.

It seems this issue was first reported here

https://github.com/jmorganca/ollama/issues/920

```
Dec 20 17:03:07 NightFuryX ollama[12288]: llama_new_context_with_model: total VRAM used: 5913.56 MiB (model: 3577.55 MiB, context: 2336.00 MiB)
Dec 20 17:03:11 NightFuryX ollama[12288]: CUDA error 700 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: an illegal memory access was encountered
Dec 20 17:03:11 NightFuryX ollama[12288]: current device: 1
Dec 20 17:03:11 NightFuryX ollama[12288]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: !"CUDA error"
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:451: 700 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: an illegal memory access was encountered
Dec 20 17:03:12 NightFuryX ollama[12288]: current device: 1
Dec 20 17:03:12 NightFuryX ollama[12288]: GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:8111: !"CUDA error"
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:459: error starting llama runner: llama runner process has terminated
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:525: llama runner stopped successfully
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:436: starting llama runner
Dec 20 17:03:12 NightFuryX ollama[12288]: 2023/12/20 17:03:12 llama.go:494: waiting for llama runner to start responding
Dec 20 17:03:12 NightFuryX ollama[12381]: {"timestamp":1703120592,"level":"WARNING","function":"server_params_parse","line":2160,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":-1}
```

However, on the latest build I still get this error. I tried on both Linux and WSL2 with the same result. NVCC is installed.

GiteaMirror added the nvidia label 2026-05-03 11:00:50 -05:00

@hbqdev commented on GitHub (Dec 21, 2023):

To update: the error is now the following.

```
llama_new_context_with_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 2139.19 MiB
llama_new_context_with_model: VRAM scratch buffer: 2136.00 MiB
llama_new_context_with_model: total VRAM used: 10079.56 MiB (model: 3847.55 MiB, context: 6232.00 MiB)

cuBLAS error 13 at /home/nightfury/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7460: the function failed to launch on the GPU
current device: 1
GGML_ASSERT: /home/nightfury/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7460: !"cuBLAS error"
```

I have tried:

  1. Installing using the script
  2. Building from source
  3. Reinstalling CUDA and the CUDA toolkit

I tried on both Linux and WSL and still cannot get past this error. The model loads into VRAM, then the runner crashes and falls back to the CPU.
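One isolation step not mentioned in the thread (an assumption on my part, not a confirmed fix): the crash reports `current device: 1`, so pinning the server to a single GPU via the standard `CUDA_VISIBLE_DEVICES` variable can help separate a multi-GPU/driver problem from a broken build:

```shell
# CUDA_VISIBLE_DEVICES is a standard CUDA environment variable; device
# indices follow nvidia-smi ordering. Exposing only device 0 hides the
# GPU the crash was reported on ("current device: 1").
export CUDA_VISIBLE_DEVICES=0

# When Ollama runs as the systemd service installed by the script, the
# equivalent is a unit override (hypothetical snippet, adjust as needed):
#   sudo systemctl edit ollama      # add: Environment="CUDA_VISIBLE_DEVICES=0"
#   sudo systemctl restart ollama

echo "$CUDA_VISIBLE_DEVICES"
```

If the error disappears with a single GPU exposed, that points at a device-selection or multi-GPU issue rather than a missing cuBLAS build.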


@BruceMacD commented on GitHub (Dec 22, 2023):

Hi @hbqdev, do you know which CUDA version you have? You should be able to see it in the output of the `nvidia-smi` command. Ideally you'll be on 12.
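As a sketch of that check: the driver's supported CUDA version appears in the first header line of `nvidia-smi` output, and can be extracted like this (the header line below is an assumed example, not output from this system):

```shell
# Example nvidia-smi header line; your values will differ. The "CUDA Version"
# field is the highest CUDA runtime the installed driver supports, which is
# what matters for a prebuilt CUDA runner.
line='| NVIDIA-SMI 535.104.05    Driver Version: 535.104.05    CUDA Version: 12.2 |'

# Pull out just the CUDA version number with sed:
cuda_ver=$(printf '%s\n' "$line" | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')
echo "$cuda_ver"   # → 12.2
```

Note this is the driver-side version; `nvcc --version` reports the toolkit used to compile, and the two can differ.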


@dhiltgen commented on GitHub (Jan 27, 2024):

@hbqdev could you update to the latest release, 0.1.22, and see if that resolves the problem? We've fixed a number of CUDA-related integration issues.


@dhiltgen commented on GitHub (Feb 1, 2024):

If you're still having problems with 0.1.22 or newer, please re-open.


Reference: github-starred/ollama#62958