[GH-ISSUE #14446] Vision inference crashes with CUDA error on RTX 5080 (Blackwell, compute 12.0) #35142

Closed · opened 2026-04-22 19:25:49 -05:00 by GiteaMirror · 5 comments

Originally created by @azat403 on GitHub (Feb 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14446

What is the issue?

Vision (image) inference crashes with CUDA error: invalid argument and illegal memory access on NVIDIA GeForce RTX 5080 (Blackwell architecture, compute capability 12.0). Text-only inference on the same model with the same GPU layers works perfectly.

Environment

  • Ollama version: 0.17.1 (Windows)
  • OS: Windows 11 Pro 10.0.26200
  • GPU: NVIDIA GeForce RTX 5080 (16 GB VRAM, compute capability 12.0)
  • NVIDIA Driver: 32.0.15.9174 (591.74)
  • CUDA Toolkit: 13.1 (V13.1.115)
  • RAM: 64 GB
  • Ollama CUDA library used: cuda_v13 (correctly selected over cuda_v12)

Model

qwen3.5:35b-a3b-q4_K_M (36B MoE, Q4_K_M quantization, ~24 GB)

Also tested with qwen3-vl family — same issue with vision on GPU.

Steps to reproduce

1. Text-only inference — WORKS

curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b-a3b-q4_K_M",
  "prompt": "Explain neural networks in 2 sentences",
  "stream": false,
  "options": {"num_gpu": 25, "num_ctx": 4096}
}'

Result: Success, 23.1 tok/s, 25/41 layers offloaded to GPU, VRAM usage 14.2 GiB / 15.9 GiB.

2. Vision inference (with image) — CRASHES

# Same model, same num_gpu, same num_ctx, only difference is the "images" field
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b-a3b-q4_K_M",
  "prompt": "Describe this image",
  "images": ["<base64_encoded_png>"],
  "stream": false,
  "options": {"num_gpu": 25, "num_ctx": 4096}
}'

Result: HTTP 500 — model runner crashes immediately.

3. Vision with num_gpu=0 (CPU only) — WORKS

curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b-a3b-q4_K_M",
  "prompt": "Describe this image",
  "images": ["<base64_encoded_png>"],
  "stream": false,
  "options": {"num_gpu": 0}
}'

Result: Success, 12.1 tok/s on CPU. Correct output.

Error logs

Error 1: cudaMemcpyAsyncReserve failure (most common)

CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error

Error 2: Illegal memory access (sometimes follows Error 1)

CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_backend_cuda_synchronize at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:2981
  cudaStreamSynchronize(cuda_ctx->stream())
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error

Successful text inference log (for comparison)

msg="inference compute" id=GPU-f3fdc209-000e-1e89-1a60-02954a59d9f5 library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5080" libdirs=ollama,cuda_v13 driver=13.1 total="15.9 GiB" available="14.6 GiB"
msg="offloaded 25/41 layers to GPU"
msg="finished setting up" runner.vram="14.2 GiB" runner.num_ctx=4096
# ← No errors, text generation completes successfully at 23.1 tok/s

Tested GPU layer configurations

| num_gpu | Text inference | Vision inference |
|---------|----------------|------------------|
| 0 (CPU) | 8 tok/s ✅     | 12 tok/s ✅      |
| 1       | ✅             | ❌ CUDA crash    |
| 20      | 20.3 tok/s ✅  | ❌ CUDA crash    |
| 25      | 23.1 tok/s ✅  | ❌ CUDA crash    |
| auto    | ❌ CUDA crash  | ❌ CUDA crash    |

Key observation: Vision crashes at ANY num_gpu >= 1. Text works fine up to num_gpu=25. Auto (all layers) crashes even for text due to VRAM overflow + cudaMemcpyAsyncReserve bug on Blackwell.
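
To reproduce the table above mechanically, a sweep along these lines can be used (a sketch, assuming a running local server; the file name test.png is a placeholder, and a crashing request comes back as HTTP 500 while the runner restarts):

#!/usr/bin/env bash
# Sweep num_gpu settings with the same vision request and record the outcome.
IMG=$(base64 -w0 test.png)   # base64-encode the test image (GNU coreutils)
for n in 0 1 20 25; do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:11434/api/generate -d "{
    \"model\": \"qwen3.5:35b-a3b-q4_K_M\",
    \"prompt\": \"Describe this image\",
    \"images\": [\"$IMG\"],
    \"stream\": false,
    \"options\": {\"num_gpu\": $n, \"num_ctx\": 4096}
  }")
  echo "num_gpu=$n -> HTTP $code"   # 200 = success, 500 = runner crash
done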

Analysis

The crash occurs specifically in the vision encoder / image projector pipeline when running on CUDA with Blackwell architecture (sm_120). The cudaMemcpyAsyncReserve function (used in ggml_cuda_cpy) appears to be incompatible with RTX 5080.

The text inference path through the same transformer layers on the same GPU works correctly, suggesting the issue is isolated to either:

  1. The vision encoder forward pass on CUDA, or
  2. The tensor copy operations specific to image processing on Blackwell GPUs
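
If a maintainer needs more context on where the copy fails, a verbose log capture along these lines may help isolate the vision-encoder step (a sketch, assuming a local install; OLLAMA_DEBUG=1 is Ollama's verbose-logging switch, and test.png is a placeholder):

# Run the server with verbose logging, trigger the crash once, then pull the
# CUDA error context out of the log.
OLLAMA_DEBUG=1 ollama serve > ollama-debug.log 2>&1 &
sleep 5   # give the server a moment to come up
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b-a3b-q4_K_M",
  "prompt": "Describe this image",
  "images": ["'"$(base64 -w0 test.png)"'"],
  "stream": false,
  "options": {"num_gpu": 25}
}' > /dev/null
grep -B1 -A3 "CUDA error" ollama-debug.log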

Related issues

  • #13338 — RTX 5090 intermittent GPU detection / CUDA crashes
  • #13163 — RTX 5070 Ti GPU not used (Blackwell CC 12.0)
  • #11849 — RTX 5080 CPU fallback
  • #10402 — Official RTX 5090 support request

Expected behavior

Vision inference should work on RTX 5080 with GPU acceleration, the same way text inference does.

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 19:25:49 -05:00

@Vyse777 commented on GitHub (Feb 28, 2026):

I can confirm this issue is also happening with qwen3-coder-next when used with Claude Code (for some reason, running the model through the Ollama interface doesn't have the issue). With Claude Code it causes an endless loop of crashes from the same error you experienced, on the same lines of cpy.cu and ggml-cuda.cu.

For reference, I am running an RTX 5090.

Error from server.log:

CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
time=2026-02-27T22:37:38.446-07:00 level=ERROR source=server.go:1610 msg="post predict" error="Post \"http://127.0.0.1:54830/completion\": read tcp 127.0.0.1:54834->127.0.0.1:54830: wsarecv: An existing connection was forcibly closed by the remote host."

Also, this only started happening recently, after updating to the latest Ollama. I apologize that I cannot figure out which version I was on before, when it was still working.
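
For anyone else on Windows wanting to pull the same context out of the log, something like this works from Git Bash (a sketch, assuming the default install location; Ollama's troubleshooting docs place the Windows server log under %LOCALAPPDATA%\Ollama):

# Show the CUDA error plus surrounding lines from the Windows server log.
grep -A3 "CUDA error" "$LOCALAPPDATA/Ollama/server.log" | tail -n 20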


@Scafir commented on GitHub (Mar 1, 2026):

I have a similar issue with qwen3-coder-next and qwen3.5:35b on an RTX 4090 on Linux:

Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-e460d78f-5af5-6749-c9a1-b253eb0b03b3
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
[...]
CUDA error: invalid argument
   current device: 0, in function ggml_cuda_cpy at //ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu:438
   cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:94: CUDA error
kernel: NVRM: Xid (PCI:0000:01:00): 31, pid=971689, name=ollama, channel 0x0000000e, intr 00000000. MMU Fault: ENGINE GRAPHICS GPC2 GPCCLIENT_T1_0 faulted @ 0x7f86_b6b96000. Fault is of type FAULT_PDE ACCESS_>

I had just upgraded Ollama to version 0.17.4. I do not know the previous version number, but it cannot have been more than about a week old.

It works perfectly locally, but this error is encountered every time I try to use Ollama with opencode through the API.

EDIT: no issue with version 0.17.0, fail on 0.17.1.
WORKAROUND: You can downgrade to the latest working version with curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.17.0 sh
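
For anyone following that workaround, a quick way to confirm the pin took effect (a sketch; ollama --version and the /api/version endpoint both report the running version):

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.17.0 sh
ollama --version                             # should report 0.17.0
curl -s http://localhost:11434/api/version   # expect {"version":"0.17.0"}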


@molysgaard commented on GitHub (Mar 1, 2026):

I'm hitting the exact same crash. Adding my data points as I've done some workaround testing that may be useful.

Environment

  • OS: Ubuntu, kernel 6.8.0-87-generic (x86_64)
  • GPU: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  • CUDA driver: 12.8
  • Ollama version: 0.17.4
  • Model: qwen3.5:35b-a22b-q4_K_M (architecture: qwen35moe, Q4_K_M)
  • Trigger: API call via POST /v1/chat/completions from a remote client

Error

CUDA error: invalid argument
  current device: 0, in function ggml_cuda_cpy at //ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu:438
  cudaMemcpyAsyncReserve(src1_ddc, src0_ddc, ggml_nbytes(src0), cudaMemcpyDeviceToDevice, main_stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:94: CUDA error

The model loads fine (weights, KV cache, and compute graph all allocate without error); the crash then occurs consistently on the first inference request after loading.

Workarounds attempted — all failed

  1. OLLAMA_FLASH_ATTENTION=0 — still crashes at the same line. Side effect: GPU layer count drops from 39/41 to 29/41, and compute graph size balloons from 941 MiB to 6.4 GiB.

  2. GGML_CUDA_NO_VMM=1 — still crashes. Notably, the log still reports VMM: yes after setting this variable, suggesting the pre-compiled libggml-cuda.so does not respect this environment variable at runtime.

Load comparison

|                     | With flash attn (default) | FLASH_ATTENTION=0 | FLASH_ATTENTION=0 + NO_VMM=1 |
|---------------------|---------------------------|-------------------|------------------------------|
| GPU layers          | 39/41                     | 29/41             | 29/41                        |
| Weights (GPU)       | 19.8 GiB                  | 14.7 GiB          | 14.7 GiB                     |
| Compute graph (GPU) | 941 MiB                   | 6.4 GiB           | 6.4 GiB                      |
| Crash               | ✅                        | ✅                | ✅                           |

None of the environment variable workarounds change the outcome. Currently falling back to OLLAMA_NUM_GPU=0 (CPU-only) as a temporary measure.
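
One caveat worth noting on Linux: if these variables were only exported in an interactive shell, a systemd-managed server never sees them, which could also explain the VMM: yes line persisting in the log. The usual way to set them on the service is via an override (a sketch, assuming the stock ollama.service unit installed by the install script):

sudo systemctl edit ollama
# In the override file that opens, add:
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=0"
#   Environment="GGML_CUDA_NO_VMM=1"
sudo systemctl restart ollama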


@Vyse777 commented on GitHub (Mar 3, 2026):

Looks like the latest version of Ollama I updated to today, 0.17.5 (https://github.com/ollama/ollama/releases/tag/v0.17.5), might have resolved my issue.
I recommend you two, @azat403 and @molysgaard, give it a try and see if it helps.


@azat403 commented on GitHub (Mar 3, 2026):

Confirmed fixed on RTX 5080 (Blackwell) with Ollama v0.17.5.

Updated from v0.17.4 → v0.17.5. Vision inference on GPU now works for MoE models (qwen3.5:35b-a3b-q4_K_M) with num_gpu=25.

Environment:

  • GPU: RTX 5080, 16 GB VRAM, compute 12.0 (Blackwell)
  • OS: Windows 11 Pro
  • Driver: 591.74, CUDA 13.1

Thanks for the fix! 🎉
