[GH-ISSUE #12442] CUDA error running gpt-oss:20b on version 0.12.2 on Nvidia Xavier #70324

Closed
opened 2026-05-04 21:07:18 -05:00 by GiteaMirror · 14 comments

Originally created by @jcestibariz on GitHub (Sep 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12442

What is the issue?

After updating ollama to v0.12.2 and pulling the latest gpt-oss:20b, I'm getting the error pasted below. An older version of gpt-oss:20b was running fine on the previous version of ollama; unfortunately I updated both at the same time.

Running gemma3:12b on the same installation works fine.

Relevant log output

Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.499-04:00 level=INFO source=server.go:200 msg="model wants flash attention"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.499-04:00 level=INFO source=server.go:217 msg="enabling flash attention"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.500-04:00 level=INFO source=server.go:399 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --port 34411"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.501-04:00 level=INFO source=server.go:672 msg="loading model" "model layers"=25 requested=-1
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.579-04:00 level=INFO source=runner.go:1252 msg="starting ollama engine"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.597-04:00 level=INFO source=runner.go:1287 msg="Server listening on 127.0.0.1:34411"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.599-04:00 level=INFO source=server.go:678 msg="system memory" total="30.3 GiB" free="24.2 GiB" free_swap="15.0 GiB"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.599-04:00 level=INFO source=server.go:686 msg="gpu memory" id=GPU-7621d6ea-ac89-5671-b96b-e93ba6c8e19f available="22.3 GiB" free="22.7 GiB" minimum="457.0 MiB" overhead="0 B"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.602-04:00 level=INFO source=runner.go:1171 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-7621d6ea-ac89-5671-b96b-e93ba6c8e19f Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.747-04:00 level=INFO source=ggml.go:131 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
Sep 28 22:25:24 xavier ollama[41771]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu.so
Sep 28 22:25:24 xavier ollama[41771]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 28 22:25:24 xavier ollama[41771]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 28 22:25:24 xavier ollama[41771]: ggml_cuda_init: found 1 CUDA devices:
Sep 28 22:25:24 xavier ollama[41771]:   Device 0: Xavier, compute capability 7.2, VMM: yes, ID: GPU-7621d6ea-ac89-5671-b96b-e93ba6c8e19f
Sep 28 22:25:24 xavier ollama[41771]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_jetpack5/libggml-cuda.so
Sep 28 22:25:24 xavier ollama[41771]: time=2025-09-28T22:25:24.827-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.NEON=1 CPU.0.ARM_FMA=1 CPU.0.LLAMAFILE=1 CPU.1.NEON=1 CPU.1.ARM_FMA=1 CPU.1.LLAMAFILE=1 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
Sep 28 22:25:25 xavier ollama[41771]: time=2025-09-28T22:25:25.760-04:00 level=INFO source=runner.go:1171 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-7621d6ea-ac89-5671-b96b-e93ba6c8e19f Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.052-04:00 level=INFO source=runner.go:1171 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-7621d6ea-ac89-5671-b96b-e93ba6c8e19f Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.053-04:00 level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.053-04:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.053-04:00 level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.053-04:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="11.8 GiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="300.0 MiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="121.8 MiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=backend.go:342 msg="total memory" size="13.3 GiB"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=sched.go:470 msg="loaded runners" count=1
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.054-04:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Sep 28 22:25:28 xavier ollama[41771]: time=2025-09-28T22:25:28.056-04:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Sep 28 22:25:36 xavier ollama[41771]: time=2025-09-28T22:25:36.366-04:00 level=INFO source=server.go:1289 msg="llama runner started in 11.87 seconds"
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.12.2

GiteaMirror added the bug label 2026-05-04 21:07:19 -05:00

@Mungbeanz commented on GitHub (Sep 29, 2025):

Do you just get a gibberish response, or sometimes no response at all?

I have uninstalled and reinstalled many different versions, and I have pinpointed the issue as starting with Ollama version 0.11.8.

These were the fixes introduced:
readme: add Neuro SAN to community integrations (#12109)
ggml: Avoid allocating CUDA primary context on unused GPUs
convert(gptoss): mxfp4 to ggml layout to avoid jit conversion (#12018)
convert: fix tensor sorting (#12015)
fix keep alive (#12041)
convert: fix tensor sorting (#12015)
gptoss: enable flash attention by default (#11996)
remove extra field attr (#11205)

Make sure you re-download the model after upgrading, but I don't think this will fix the issue.

I am guessing the conversion from mxfp4 to ggml layout broke this, as other models work fine for me too.

If you are running Linux you can force a rollback:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.11.7 sh
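
After the rollback, a quick sanity check (just a sketch; use whatever model tag you had pulled) is to confirm the installed version and re-run the model:

# Confirm the downgraded server is the one running
ollama --version
# Re-run the model that was failing; on 0.11.7 it should respond normally
ollama run gpt-oss:20b "hello"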


@rick-github commented on GitHub (Sep 29, 2025):

Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437: ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu was compiled for: __CUDA_ARCH_LIST__

Looks like the same type of issue as #12403, where ollama is using flash attention (#11996) on a device that doesn't support it.
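
One way to confirm the mismatch is to list the device architectures embedded in the CUDA backend the runner loaded (a sketch, assuming the CUDA toolkit's cuobjdump is on the PATH and the library path matches the log above):

# sm_72 (Xavier) should appear here if the flash-attention kernels were built for it;
# if only PTX is embedded, cuobjdump --list-ptx shows that instead
cuobjdump --list-elf /usr/local/lib/ollama/cuda_jetpack5/libggml-cuda.so | grep -o 'sm_[0-9]*' | sort -u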


@jcestibariz commented on GitHub (Sep 30, 2025):

@Mungbeanz I got no response. I was able to downgrade to 0.11.7 and the model ran fine; I didn't have to pull the model again. And I can confirm that the issue starts in v0.11.8.

@rick-github you are probably right, I did try setting OLLAMA_FLASH_ATTENTION to false but that didn't work.

Looking closely at the logs, after a long list of CUDA errors it looks like there's a very long stack trace, and finally a message saying msg="llama runner terminated" error="exit status 2":

Sep 28 22:25:39 xavier ollama[41771]: CUDA error: unspecified launch failure
Sep 28 22:25:39 xavier ollama[41771]:   current device: 0, in function ggml_cuda_mul_mat_id at //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2198
Sep 28 22:25:39 xavier ollama[41771]:   cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
Sep 28 22:25:39 xavier ollama[41771]: //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:84: CUDA error
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49002]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49003]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49004]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49005]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49006]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49007]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49008]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49011]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49012]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49013]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49014]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49015]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49016]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49017]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49018]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49019]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49020]
Sep 28 22:25:39 xavier ollama[49022]: [New LWP 49021]
Sep 28 22:25:39 xavier ollama[49022]: [Thread debugging using libthread_db enabled]
Sep 28 22:25:39 xavier ollama[49022]: Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
Sep 28 22:25:40 xavier ollama[49022]: 0x0000aaaaea30ff4c in ?? ()
Sep 28 22:25:40 xavier ollama[49022]: #0  0x0000aaaaea30ff4c in ?? ()
Sep 28 22:25:40 xavier ollama[49022]: #1  0x0000000000000080 in ?? ()
Sep 28 22:25:40 xavier ollama[49022]: Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Sep 28 22:25:40 xavier ollama[49022]: [Inferior 1 (process 48999) detached]
Sep 28 22:25:40 xavier ollama[41771]: SIGABRT: abort
Sep 28 22:25:40 xavier ollama[41771]: PC=0xffff9f9f1d88 m=7 sigcode=18446744073709551610
Sep 28 22:25:40 xavier ollama[41771]: signal arrived during cgo execution
Sep 28 22:25:40 xavier ollama[41771]: goroutine 405 gp=0x4000d37500 m=7 mp=0x4000500008 [syscall]:
Sep 28 22:25:40 xavier ollama[41771]: runtime.cgocall(0xaaaaeae421c4, 0x4000080a98)
Sep 28 22:25:40 xavier ollama[41771]:         runtime/cgocall.go:167 +0x44 fp=0x4000080a50 sp=0x4000080a10 pc=0xaaaaea303c64
...
Sep 28 22:25:40 xavier ollama[41771]: runtime.goexit({})
Sep 28 22:25:40 xavier ollama[41771]:         runtime/asm_arm64.s:1223 +0x4 fp=0x4000081fd0 sp=0x4000081fd0 pc=0xaaaaea30eaf4
Sep 28 22:25:40 xavier ollama[41771]: created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 6
Sep 28 22:25:40 xavier ollama[41771]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:425 +0x244
Sep 28 22:25:40 xavier ollama[41771]: r0      0x0
Sep 28 22:25:40 xavier ollama[41771]: r1      0xffff527fa4c8
Sep 28 22:25:40 xavier ollama[41771]: r2      0x0
Sep 28 22:25:40 xavier ollama[41771]: r3      0x8
Sep 28 22:25:40 xavier ollama[41771]: r4      0x0
Sep 28 22:25:40 xavier ollama[41771]: r5      0xffff9fb2b000
Sep 28 22:25:40 xavier ollama[41771]: r6      0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r7      0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r8      0x87
Sep 28 22:25:40 xavier ollama[41771]: r9      0xffff527fa6a0
Sep 28 22:25:40 xavier ollama[41771]: r10     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r11     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r12     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r13     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r14     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r15     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r16     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r17     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r18     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r19     0xffffffffffffffff
Sep 28 22:25:40 xavier ollama[41771]: r20     0xffff527fc190
Sep 28 22:25:40 xavier ollama[41771]: r21     0xffff9fb2dac0
Sep 28 22:25:40 xavier ollama[41771]: r22     0x896
Sep 28 22:25:40 xavier ollama[41771]: r23     0xffff581c0268
Sep 28 22:25:40 xavier ollama[41771]: r24     0xffff23ffb000
Sep 28 22:25:40 xavier ollama[41771]: r25     0xffff34099290
Sep 28 22:25:40 xavier ollama[41771]: r26     0xffff23ffbaa0
Sep 28 22:25:40 xavier ollama[41771]: r27     0xfffe5c0c3f20
Sep 28 22:25:40 xavier ollama[41771]: r28     0xaaab2907a840
Sep 28 22:25:40 xavier ollama[41771]: r29     0xffff527fa4a0
Sep 28 22:25:40 xavier ollama[41771]: lr      0xffff527fa4c8
Sep 28 22:25:40 xavier ollama[41771]: sp      0xffff527fa4a0
Sep 28 22:25:40 xavier ollama[41771]: pc      0xffff9f9f1d88
Sep 28 22:25:40 xavier ollama[41771]: fault   0x0
Sep 28 22:25:40 xavier ollama[41771]: time=2025-09-28T22:25:40.676-04:00 level=ERROR source=server.go:1459 msg="post predict" error="Post \"http://127.0.0.1:34411/completion\": EOF"
Sep 28 22:25:40 xavier ollama[41771]: [GIN] 2025/09/28 - 22:25:40 | 500 | 22.409574247s |    192.168.1.11 | POST     "/api/chat"
Sep 28 22:25:41 xavier ollama[41771]: time=2025-09-28T22:25:41.246-04:00 level=ERROR source=server.go:425 msg="llama runner terminated" error="exit status 2"
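
The trace above came from the systemd journal; pulling just the relevant lines again for comparison is straightforward (a sketch, assuming the default ollama service name):

# Dump the service log for the current boot and keep the CUDA/runner errors
journalctl -u ollama -b --no-pager | grep -E 'CUDA|flash_attn|runner terminated'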

@rick-github commented on GitHub (Sep 30, 2025):

For the gpt-oss models, ollama ignores the value of OLLAMA_FLASH_ATTENTION, enabling it by default.


@Mungbeanz commented on GitHub (Sep 30, 2025):

Yes, I tried forcing flash attention off in 0.11.8, but this is overridden during loading.
On 0.11.7 flash attention is off by default and the model works fine. Forcing flash attention on in 0.11.7 makes the model incoherent.
I have Volta tensor cores, so flash attention is technically possible but not supported correctly; Ampere or newer is suggested.

Is there a way to force 0.11.8+ to disable flash attention and have the model loading process actually respect it?


@jessegross commented on GitHub (Oct 3, 2025):

For those that are running into this issue, please test out one of the 0.12.4 RCs and let us know if it fixes the issue - we have updated the kernels. In that release, it will also be possible to disable flash attention for models that have it on by default.
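
On Linux the install script can be pinned to a release candidate the same way as the rollback earlier in the thread (a sketch; substitute whichever RC is current):

# OLLAMA_VERSION pins the version the install script fetches
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.12.4-rc4 sh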


@Mungbeanz commented on GitHub (Oct 4, 2025):

Hi,
I just did some testing and it looks like it's respecting the environment overrides.
I added these to my startup service and it's running better. I think it got a 10% performance boost too...
Thanks for sorting this out.

Environment="OLLAMA_FLASH_ATTENTION=0"
Environment="OLLAMA_NUM_PARALLEL=4"
Environment="OLLAMA_GPU_OVERHEAD=0"
Environment="OLLAMA_KV_CACHE_TYPE=gpu"


@jcestibariz commented on GitHub (Oct 4, 2025):

I installed 0.12.4-rc4 and set OLLAMA_FLASH_ATTENTION=0, and I can also confirm that gpt-oss works fine for me.

Thank you so much for your quick response!


@jessegross commented on GitHub (Oct 6, 2025):

Does it work with the default settings with flash attention enabled?


@Mungbeanz commented on GitHub (Oct 6, 2025):

I can try again on my installation. I believe I am in the same situation as the OP, where we have hardware that is only semi-compliant for flash attention: first-gen tensor cores.


@jcestibariz commented on GitHub (Oct 7, 2025):

@jessegross It doesn't work. I commented out OLLAMA_FLASH_ATTENTION=0 and I got the same errors as before.


@Mungbeanz commented on GitHub (Oct 7, 2025):

You beat me to it. Yeah, I didn't expect that to work. Some of the other inference engines show signs of working on first-gen tensor cores, so it's possible to have this working in Ollama. I just don't know how...


@jessegross commented on GitHub (Oct 7, 2025):

@Mungbeanz Are you also running on a Jetson Xavier (compute capability 7.2) or is it something else in that family like a V100 (compute capability 7.0)?


@Mungbeanz commented on GitHub (Oct 8, 2025):

Running on Titan Vs. I believe these are compute capability 7.0.


Reference: github-starred/ollama#70324