[GH-ISSUE #14374] RTX 5090 D v2 (Blackwell SM_120): MMQ CUDA kernel crash - device kernel image is invalid (v0.16.3, Windows) #9342

Open
opened 2026-04-12 22:12:28 -05:00 by GiteaMirror · 3 comments

Originally created by @XHXIAIEIN on GitHub (Feb 23, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14374

What is the issue?

On Windows with RTX 5090 D v2 (China-market variant, GB202-240 die, Blackwell compute capability 12.0), every quantized model fails to load with a CUDA kernel crash in the MMQ (quantized matrix multiplication) function. This affects ALL models (tested qwen3:32b, qwen2.5:14b, qwen2.5:7b, glm-4.7-flash, glm-ocr, deepseek-r1:32b).

The CUDA backend does report SM_120 in its architecture list (CUDA.0.ARCHS=...1200), but the compiled MMQ kernels crash at runtime.

Note: The RTX 5090 D v2 is a China-specific variant using the GB202-240 die (not GB202-300 used by the standard RTX 5090). It has the same 21,760 CUDA cores and compute capability 12.0, but with reduced memory (24 GB / 384-bit vs 32 GB / 512-bit). This variant may not be in the Ollama/llama.cpp CUDA test matrix.

GPU Hardware Details

| Spec | Standard RTX 5090 | RTX 5090 D v2 (this card) |
|---|---|---|
| GPU Die | GB202-300 | GB202-240 |
| GPU Part Number | | 2B8C-240-A1 |
| VRAM | 32 GB GDDR7 / 512-bit | 24 GB GDDR7 / 384-bit |
| AI TOPS | 3,352 | 2,375 |
| CUDA Cores | 21,760 | 21,760 |
| Compute Capability | 12.0 | 12.0 |
| VBIOS | | 98.02.69.00.55 |

Error

```
CUDA error: device kernel image is invalid
  current device: 0, in function ggml_cuda_mul_mat_q at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\mmq.cu:128
  cudaGetLastError()
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:94: CUDA error
```

Full Server Log

```
load_backend: loaded CPU backend from D:\Agent\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 D v2, compute capability 12.0, VMM: yes
load_backend: loaded CUDA backend from D:\Agent\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
system: CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200

CUDA error: device kernel image is invalid
  current device: 0, in function ggml_cuda_mul_mat_q at mmq.cu:128
```

nvidia-smi output:

```
Product Name        : NVIDIA GeForce RTX 5090 D v2
Product Architecture: Blackwell
GPU Part Number     : 2B8C-240-A1
VBIOS Version       : 98.02.69.00.55
Driver Version      : 591.86
CUDA Version        : 13.1
```

Steps to Reproduce

  1. Install Ollama 0.16.3 on Windows with an RTX 5090 D v2
  2. Pull a small quantized model: ollama pull qwen2.5:0.5b (~400 MB)
  3. Try to run: ollama run qwen2.5:0.5b "hello"
  4. Model fails to load with the CUDA error above

Minimal reproduction script (Python):

```python
import json, urllib.request

# Requires Ollama running on localhost:11434
payload = json.dumps({
    "model": "qwen2.5:0.5b",  # Any quantized model triggers this
    "prompt": "Hello",
    "stream": False,
    "options": {"num_predict": 1}
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as r:
    data = json.loads(r.read())
    print(data)
    # Expected: response with generated text
    # Actual: {"error": "model failed to load..."} or HTTP 500
```

Environment

  • OS: Windows 11 Pro 10.0.26200
  • Ollama: 0.16.3
  • GPU: NVIDIA GeForce RTX 5090 D v2 (GB202-240, compute capability 12.0, Blackwell)
  • GPU Part Number: 2B8C-240-A1
  • Driver: 591.86
  • CUDA: 13.1
  • RAM: 64 GB
  • VRAM: 24 GB

What I've Tried

| Workaround | Result |
|---|---|
| `OLLAMA_FLASH_ATTENTION=false` | Flash attention disabled, but the MMQ crash persists (different code path) |
| `GGML_CUDA_FORCE_CUBLAS=1` as an env var | No effect: this is a compile-time CMake define, not a runtime env var |
| `OLLAMA_NUM_GPU=0` via Git Bash | Not inherited by the Ollama runner process on Windows |
| `CUDA_VISIBLE_DEVICES=-1` via batch file | Works: falls back to CPU, but loses all GPU acceleration |
| Reinstall Ollama 0.16.3 | Same binary, same issue |
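For reference, the one workaround that does work (CPU fallback) can also be scripted by launching the server with CUDA_VISIBLE_DEVICES=-1 set in the child environment, which avoids the env-inheritance problem seen with Git Bash. A minimal sketch, assuming ollama is on PATH:

```python
import os, subprocess

# Start the Ollama server with the GPU hidden so ggml falls back to the
# CPU backend; this sidesteps the MMQ crash at the cost of GPU acceleration.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="-1")
subprocess.Popen(["ollama", "serve"], env=env)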

Root Cause Analysis

Based on ggml-org/llama.cpp#18331, this is likely caused by an nvcc compiler optimization bug where the CUDA Toolkit generates incorrect machine code for Blackwell SM_120 MMQ kernels at -O3. The pre-built ggml-cuda.dll shipped with Ollama 0.16.3 contains these buggy kernels.

Additionally, since this is the China-variant GB202-240 die (not the standard GB202-300), it may have subtle microarchitectural differences that exacerbate or differently trigger the nvcc compiler bug. This variant may not be included in the Ollama/llama.cpp CUDA test matrix.

The upstream workaround is to either:

  • Compile with -DCMAKE_CUDA_ARCHITECTURES="89" (Ada PTX fallback, works on Blackwell via forward compatibility)
  • Compile with -DCMAKE_CUDA_FLAGS="-O2" (reduced optimization avoids the nvcc bug)
  • Compile with -DGGML_CUDA_FORCE_CUBLAS=ON (bypass MMQ entirely, use cuBLAS)
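As an illustration only, a local rebuild of the llama.cpp CUDA backend with one of these workarounds could be driven like this. This is a sketch under assumptions: a llama.cpp checkout as the working directory, CMake and the CUDA Toolkit installed, and current upstream option names (GGML_CUDA, GGML_CUDA_FORCE_CUBLAS).

```python
import subprocess

# Configure with the CUDA backend, restricting architectures to sm_89 so
# Blackwell runs the Ada PTX via forward compatibility (first option above).
subprocess.run(
    [
        "cmake", "-B", "build", "-S", ".",
        "-DGGML_CUDA=ON",
        "-DCMAKE_CUDA_ARCHITECTURES=89",
        # Alternatively: "-DGGML_CUDA_FORCE_CUBLAS=ON" to bypass MMQ entirely
    ],
    check=True,
)
subprocess.run(["cmake", "--build", "build", "--config", "Release"], check=True)
```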

Expected Behavior

Models should load and run on the RTX 5090 D v2. CUDA architecture 1200 is listed in the compiled backend's architecture list, so SM_120 is clearly intended to be supported.

Suggested Fix

  1. Ship the CUDA backend DLL compiled with -DCMAKE_CUDA_ARCHITECTURES="89-real;120" and the -O2 flag for sm_120
  2. Or include a runtime fallback to cuBLAS when MMQ kernels fail on Blackwell
  3. Ensure GB202-240 (RTX 5090 D v2) is included in the GPU compatibility test matrix alongside GB202-300 (standard RTX 5090)

@XHXIAIEIN commented on GitHub (Feb 24, 2026):

Update: GPU Hardware Confirmed Working via Alternative Path

Since Ollama's MMQ kernel crashes on this GPU, I switched to HuggingFace Transformers for local inference as a temporary alternative. This confirms the GPU hardware itself is fully functional:

What Works (PyTorch + HuggingFace Transformers)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load Qwen3-8B via HuggingFace (CPU → GPU manual transfer)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", dtype=torch.bfloat16, low_cpu_mem_usage=True
)
model = model.to("cuda")  # Works on RTX 5090 D v2

# bf16 generation on the GPU also works (PyTorch uses the cuBLAS path)
inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
| Metric | Value |
|---|---|
| Model | Qwen3-8B (bf16, ~16 GB) |
| Runtime | PyTorch 2.12.0.dev (nightly, cu128) |
| VRAM | 15.3 GB / 24 GB |
| Speed | ~8.9 tok/s |

Successfully ran a complete RAG pipeline (vector retrieval + generation + self-reflection) with correct, high-confidence results.

What This Tells Us About the Ollama Issue

  • The GPU's CUDA compute (sm_120) works correctly — matrix operations, weight loading, bf16 inference all pass
  • The problem is specifically in llama.cpp's compiled MMQ CUDA kernels — the nvcc-generated machine code for sm_120 is faulty (as noted in the upstream ggml-org/llama.cpp#18331)
  • cuBLAS path works on this GPU (PyTorch uses cuBLAS internally), which supports the suggestion that compiling with -DGGML_CUDA_FORCE_CUBLAS=ON would fix it

Temporary Alternative for RTX 5090 D v2 Users

If you need local LLM inference while waiting for the Ollama fix:

  1. Install PyTorch nightly with cu128: pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128
  2. Use HuggingFace Transformers to load models directly (bypass device_map="auto", which has a separate bug: https://github.com/huggingface/accelerate/issues/3933)
  3. Models up to ~16 GB bf16 fit in the 24 GB VRAM
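After step 1, a quick sanity check confirms the nightly wheel actually targets this GPU before loading a large model (a sketch, assuming the CUDA-enabled build installed correctly):

```python
import torch

# Verify the nightly cu128 wheel sees the card as Blackwell sm_120
assert torch.cuda.is_available(), "CUDA-enabled PyTorch build not installed"
print(torch.cuda.get_device_name(0))        # NVIDIA GeForce RTX 5090 D v2
print(torch.cuda.get_device_capability(0))  # expected: (12, 0) -> sm_120
```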

This is obviously not a fix — just confirming the hardware is fine and the issue is purely in the MMQ kernel compilation.


@XHXIAIEIN commented on GitHub (Feb 24, 2026):

Correction: AI TOPS Specification

The hardware comparison table in the original report contains an error:

| Field | Originally reported for RTX 5090 D v2 | Correction |
|---|---|---|
| AI TOPS | 2,375 | This figure is from the original RTX 5090 D (GB202-250), not the D v2 (GB202-240). The D v2's actual AI TOPS are not independently confirmed; the 384-bit / 1,344 GB/s bandwidth reduction (vs. 512-bit / 1,792 GB/s on the original D) may further reduce effective AI throughput beyond the firmware-limited 2,375 TOPS. |

All other hardware specs (die GB202-240, 21,760 CUDA cores, 24 GB / 384-bit, sm_120) are confirmed correct via nvidia-smi -q and PyTorch device properties (170 SMs × 128 = 21,760 cores, compute capability 12.0).
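For completeness, the PyTorch device properties mentioned above can be read directly; a minimal sketch, assuming a CUDA-enabled PyTorch build:

```python
import torch

props = torch.cuda.get_device_properties(0)
print(props.name)                         # NVIDIA GeForce RTX 5090 D v2
print(props.major, props.minor)           # 12 0 -> compute capability 12.0
print(props.multi_processor_count)        # 170 SMs (x 128 CUDA cores/SM = 21,760)
print(props.total_memory / 2**30, "GiB")  # ~24 GB VRAM
```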

The core issue (MMQ kernel crash) and root cause analysis remain unchanged — this is an nvcc compiler bug for sm_120 MMQ kernels, not a hardware issue.


@lingfan36 commented on GitHub (Feb 25, 2026):

👋 Hello!

This looks like an installation-related issue. We have compiled solutions to common Ollama installation problems:

🔗 Installation troubleshooting: https://ollamahub.space/pages/solutions/installation/

If your problem is not covered there, please add more details below (operating system, error messages, etc.) and we will update the documentation accordingly.


Automatically generated by OllamaHub

Reference: github-starred/ollama#9342