[GH-ISSUE #5395] CUBLAS_STATUS_ALLOC_FAILED with deepseek-coder-v2:16b #49886

Open
opened 2026-04-28 13:18:55 -05:00 by GiteaMirror · 11 comments

Originally created by @hgourvest on GitHub (Jun 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5395

Originally assigned to: @mxyng on GitHub.

What is the issue?

When running deepseek-coder-v2:16b on an NVIDIA GeForce RTX 3080 Laptop GPU, I get this crash report:

Error: llama runner process has terminated: signal: aborted (core dumped) CUDA error: CUBLAS_STATUS_ALLOC_FAILED
  current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda/common.cuh:826
  cublasCreate_v2(&cublas_handles[device])
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:100: !"CUDA error"

If I run the "16b-lite-instruct-q8_0" version, it works just fine.

OS

Linux

GPU

Nvidia, AMD

CPU

AMD

Ollama version

0.1.48

GiteaMirror added the memory, bug, nvidia labels 2026-04-28 13:18:56 -05:00

@LukeMauldin commented on GitHub (Jul 1, 2024):

I am getting a similar error. Nvidia RTX 4050 on Ubuntu 24.04. Ollama 0.1.48.

ollama run --verbose deepseek-coder-v2:16b-lite-instruct-q4_K_M "You are an expert software developer.  Write a Rust hello world program using the latest axum 0.7 version and include only the code and the Cargo.toml file"
Error: llama runner process has terminated: signal: aborted (core dumped) CUDA error: CUBLAS_STATUS_NOT_INITIALIZED
  current device: 0, in function cublas_handle at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda/common.cuh:826
  cublasCreate_v2(&cublas_handles[device])
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:100: !"CUDA error"

@dhiltgen commented on GitHub (Jul 2, 2024):

@hgourvest I believe that is an 8G card. I don't have the exact same GPU, but on another nvidia 8G card, I was able to load the model. Are you by any chance setting a non-default context size, or other settings in your request? Can you share your server log?

% ollama ps
NAME                 	ID          	SIZE 	PROCESSOR      	UNTIL
deepseek-coder-v2:16b	6d3369b54a0e	10 GB	22%/78% CPU/GPU	3 minutes from now

@hgourvest commented on GitHub (Jul 2, 2024):

@dhiltgen It is an 8GB card. I don't do anything special, just run the model from the command line. This is the [server log](https://github.com/user-attachments/files/16074747/log.txt).


@wrapss commented on GitHub (Jul 2, 2024):

I have the same problem but with 3x3090; I lower num_ctx to avoid errors (with deepseek-coder-v2:236b-instruct-q2_K). [log.txt](https://github.com/user-attachments/files/16075244/log.txt)
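For anyone trying the same mitigation, here is a minimal sketch of lowering num_ctx from the interactive CLI; the context value and saved model name below are illustrative, not taken from the comment:

```shell
ollama run deepseek-coder-v2:236b-instruct-q2_K
# inside the REPL, shrink the context window and optionally persist it as a new model
>>> /set parameter num_ctx 8192
>>> /save deepseek-coder-v2-smallctx
```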


@dhiltgen commented on GitHub (Jul 4, 2024):

It looks like our prediction logic was just slightly low and we overshot by 1 layer.

juil. 03 00:06:26 archlinux ollama[1590]: time=2024-07-03T00:06:26.952+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=28 layers.offload=21 layers.split="" memory.available="[7.5 GiB]" memory.required.full="9.6 GiB" memory.required.partial="7.5 GiB" memory.required.kv="540.0 MiB" memory.required.allocations="[7.5 GiB]" memory.weights.total="8.5 GiB" memory.weights.repeating="8.4 GiB" memory.weights.nonrepeating="164.1 MiB" memory.graph.full="212.0 MiB" memory.graph.partial="376.1 MiB"

You can work around this by setting num_gpu to 20, which I expect should work until we get this fixed.
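From the log above, the repeating weights are about 8.4 GiB across 28 layers, roughly 0.3 GiB per layer, so dropping the offload from 21 to 20 layers frees about that much VRAM. A minimal sketch of applying the workaround through a Modelfile (the derived model name is illustrative):

```shell
# Create a variant of the model that offloads at most 20 layers to the GPU
cat > Modelfile <<'EOF'
FROM deepseek-coder-v2:16b
PARAMETER num_gpu 20
EOF
ollama create deepseek-coder-v2-20l -f Modelfile
ollama run deepseek-coder-v2-20l
```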


@hgourvest commented on GitHub (Jul 4, 2024):

Thanks @dhiltgen for the tip. I put this parameter in a Modelfile and was able to start the model. But when I use it, I have another crash. Here's the [log](https://github.com/user-attachments/files/16095223/log.txt).


@ProjectMoon commented on GitHub (Jul 11, 2024):

So I've noticed that I also get crashes w/ Deepseek at big context sizes. I have it set to 5 layers on GPU, use_mmap false, and 128k context. The machine has 64 GB of RAM + 16 GB VRAM. About 43 GB of system RAM is taken up in this configuration, so there's still plenty left to go around. The model loads and starts ingesting, but it eventually crashes with this kind of error:

llm_load_tensors: offloading 5 repeating layers to GPU
llm_load_tensors: offloaded 5/28 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  2502.85 MiB
llm_load_tensors:  ROCm_Host buffer size = 10908.65 MiB
llama_new_context_with_model: n_ctx      = 128000
llama_new_context_with_model: n_batch    = 1024
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init:      ROCm0 KV buffer size =  6250.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size = 27500.00 MiB
llama_new_context_with_model: KV self size  = 33750.00 MiB, K (f16): 20250.00 MiB, V (f16): 13500.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     0.40 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =  5524.92 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   260.01 MiB
llama_new_context_with_model: graph nodes  = 1924
llama_new_context_with_model: graph splits = 352
time=2024-07-11T21:39:46.461+02:00 level=INFO source=server.go:609 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-11T21:39:49.421+02:00 level=INFO source=server.go:614 msg="llama runner started in 76.83 seconds"
CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:290
  ggml_cuda_device_malloc(&ptr, look_ahead_size, device)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
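For reference, a configuration like the one described above can be expressed as an Ollama API request roughly as follows (a sketch only; the model tag and prompt are placeholders, and the option names are Ollama's standard /api/generate options):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder-v2:16b",
  "prompt": "say hello",
  "options": { "num_gpu": 5, "num_ctx": 128000, "use_mmap": false }
}'
```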

@ProjectMoon commented on GitHub (Jul 12, 2024):

So I've noticed a fairly consistent pattern: if I start a conversation with Deepseek Coder V2 loaded, it'll continue just fine. It can get quite long. But if I come back later, after the model has been removed from memory, and try to start up the long conversation again, the above situation will happen, where the model loads but crashes during ingestion.


@hgourvest commented on GitHub (Sep 15, 2024):

This bug seems to be resolved in my case; it doesn't crash anymore in the latest version.


@ProjectMoon commented on GitHub (Sep 17, 2024):

One thing I just noticed for Deepseek Coder v2 on ROCm: it really hates use_mmap. I had it set to false on one model, and it would segfault every time when loading, no matter what num_ctx or num_gpu were. This didn't happen before, but turning use_mmap back to true made everything work perfectly fine again.

I'm not sure how this will stand up against long context sizes, though. The long context problem (where it crashes when trying to load a large context back into memory from a previous session) might still be there.


@Monolitho commented on GitHub (Jan 26, 2025):

Hi everyone, is there an option to fall back to CPU for the deepseek model? I tried

export CUDA_VISIBLE_DEVICES="-1"

but it failed and gave me the same error.
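One detail not stated in the comment: CUDA_VISIBLE_DEVICES only takes effect if it is set in the environment of the ollama server process, not the client shell. A sketch for a standard systemd install (the unit name is the one created by the Linux installer; as an alternative, PARAMETER num_gpu 0 in a Modelfile keeps a single model on CPU):

```shell
# Set the variable for the server, then restart it (systemd install)
sudo systemctl edit ollama
#   add under [Service]:
#   Environment="CUDA_VISIBLE_DEVICES=-1"
sudo systemctl restart ollama
```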


Reference: github-starred/ollama#49886