[GH-ISSUE #12113] ollama panics when running gpt-oss:120b with OLLAMA_FLASH_ATTENTION=1 on V100 #54563

Closed
opened 2026-04-29 06:21:04 -05:00 by GiteaMirror · 5 comments

Originally created by @wufei1234 on GitHub (Aug 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12113

Originally assigned to: @jessegross on GitHub.

What is the issue?

Ollama panics if we set OLLAMA_FLASH_ATTENTION=1 (panic: failed to sample token: sample: logits sum to NaN, check model output). If we set OLLAMA_FLASH_ATTENTION=0, Ollama runs successfully without any issue.
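
Until a fixed build is available, the usual workaround is to force the flag off. A minimal sketch, assuming a default Linux install (the systemd unit name ollama.service is the installer's default; adjust if yours differs):

```shell
# Disable flash attention for a systemd-managed server.
sudo systemctl edit ollama.service
#   In the editor, add:
#   [Service]
#   Environment="OLLAMA_FLASH_ATTENTION=0"
sudo systemctl restart ollama.service

# Or, for a manually started server:
OLLAMA_FLASH_ATTENTION=0 ollama serve
```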

Relevant log output

time=2025-08-29T16:21:56.594+08:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-29T16:22:00.363+08:00 level=INFO source=server.go:1274 msg="llama runner started in 7.18 seconds"
[GIN] 2025/08/29 - 16:22:00 | 200 | 11.073563287s |       127.0.0.1 | POST     "/api/generate"
panic: failed to sample token: sample: logits sum to NaN, check model output

goroutine 89 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0002d30e0, {0x557e99eacb20, 0xc0006a79f0})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:375 +0x6a
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/ollamarunner/runner.go:1019 +0x4c9

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.11.7
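
For anyone adding a similar report, the Ollama version and the GPU compute capability are the key data points (the V100's compute capability 7.0 appears to be the common factor here). A quick way to capture both; note that the compute_cap query field requires a reasonably recent nvidia-smi:

```shell
# Report the ollama version and GPU details (compute_cap needs a
# recent nvidia-smi; a V100 should report 7.0).
ollama -v
nvidia-smi --query-gpu=name,compute_cap,driver_version --format=csv
```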

GiteaMirror added the bug label 2026-04-29 06:21:04 -05:00

@jessegross commented on GitHub (Aug 29, 2025):

Can you please post the full log?
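
On a systemd-based Linux install, the full server log can be captured with journalctl; a sketch assuming the default unit name:

```shell
# Dump the full ollama server log to a file for attaching to the issue:
journalctl -u ollama --no-pager > ollama.log
```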


@wufei1234 commented on GitHub (Aug 29, 2025):

time=2025-08-29T16:16:42.727+08:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65536 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NEW_ESTIMATES:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-29T16:16:42.730+08:00 level=INFO source=images.go:477 msg="total blobs: 7"
time=2025-08-29T16:16:42.730+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-29T16:16:42.730+08:00 level=INFO source=routes.go:1384 msg="Listening on [::]:11434 (version 0.11.8)"
time=2025-08-29T16:16:42.731+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-29T16:16:44.472+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d316bc72-0e0c-9b06-4544-616ecffae084 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-67524c8d-6955-5c13-102d-9090717dc14b library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
time=2025-08-29T16:16:44.473+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 library=cuda variant=v12 compute=7.0 driver=12.4 name="Tesla V100-SXM2-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2025/08/29 - 16:16:50 | 200 | 186.875µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/08/29 - 16:16:50 | 200 | 386.637276ms | 127.0.0.1 | POST "/api/show"
time=2025-08-29T16:16:52.978+08:00 level=INFO source=server.go:166 msg="enabling new memory estimates"
time=2025-08-29T16:16:54.407+08:00 level=INFO source=server.go:199 msg="model wants flash attention"
time=2025-08-29T16:16:54.407+08:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-08-29T16:16:54.407+08:00 level=WARN source=server.go:224 msg="kv cache type not supported by model" type=""
time=2025-08-29T16:16:54.408+08:00 level=INFO source=server.go:388 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /data/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 --port 43169"
time=2025-08-29T16:16:54.409+08:00 level=INFO source=server.go:661 msg="loading model" "model layers"=37 requested=-1
time=2025-08-29T16:16:54.438+08:00 level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-08-29T16:16:54.438+08:00 level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:43169"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:667 msg="system memory" total="376.5 GiB" free="367.7 GiB" free_swap="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-d316bc72-0e0c-9b06-4544-616ecffae084 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-67524c8d-6955-5c13-102d-9090717dc14b available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.865+08:00 level=INFO source=server.go:671 msg="gpu memory" id=GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 available="31.0 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-08-29T16:16:55.869+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:65536 KvCacheType: NumThreads:40 GPULayers:37[ID:GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-29T16:16:56.069+08:00 level=INFO source=ggml.go:130 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=471 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 8 CUDA devices:
Device 0: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5
Device 1: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833
Device 2: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4
Device 3: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e
Device 4: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-d316bc72-0e0c-9b06-4544-616ecffae084
Device 5: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b
Device 6: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-67524c8d-6955-5c13-102d-9090717dc14b
Device 7: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes, ID: GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-skylakex.so
time=2025-08-29T16:16:56.206+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 CUDA.4.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.4.USE_GRAPHS=1 CUDA.4.PEER_MAX_BATCH_SIZE=128 CUDA.5.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.5.USE_GRAPHS=1 CUDA.5.PEER_MAX_BATCH_SIZE=128 CUDA.6.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.6.USE_GRAPHS=1 CUDA.6.PEER_MAX_BATCH_SIZE=128 CUDA.7.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.7.USE_GRAPHS=1 CUDA.7.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-29T16:16:56.388+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:65536 KvCacheType: NumThreads:40 GPULayers:37[ID:GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5 Layers:2(0..1) ID:GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833 Layers:5(2..6) ID:GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4 Layers:5(7..11) ID:GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e Layers:5(12..16) ID:GPU-d316bc72-0e0c-9b06-4544-616ecffae084 Layers:5(17..21) ID:GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b Layers:5(22..26) ID:GPU-67524c8d-6955-5c13-102d-9090717dc14b Layers:5(27..31) ID:GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 Layers:5(32..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-29T16:16:57.601+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:65536 KvCacheType: NumThreads:40 GPULayers:37[ID:GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5 Layers:2(0..1) ID:GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833 Layers:5(2..6) ID:GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4 Layers:5(7..11) ID:GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e Layers:5(12..16) ID:GPU-d316bc72-0e0c-9b06-4544-616ecffae084 Layers:5(17..21) ID:GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b Layers:5(22..26) ID:GPU-67524c8d-6955-5c13-102d-9090717dc14b Layers:5(27..31) ID:GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 Layers:5(32..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-29T16:16:57.845+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:65536 KvCacheType: NumThreads:40 GPULayers:37[ID:GPU-720b1cbc-e07b-06e5-33b3-04f1795ed3b5 Layers:2(0..1) ID:GPU-dc20d4a3-b76b-33f3-74ff-765ecad0d833 Layers:5(2..6) ID:GPU-bbd9ab7d-a6b0-1db4-57c3-6ea3b3a9fab4 Layers:5(7..11) ID:GPU-9ba7e0f7-14b2-8c82-af0a-1e8fa8082d6e Layers:5(12..16) ID:GPU-d316bc72-0e0c-9b06-4544-616ecffae084 Layers:5(17..21) ID:GPU-6a0d29d6-ad75-c875-c285-6a04aa107f2b Layers:5(22..26) ID:GPU-67524c8d-6955-5c13-102d-9090717dc14b Layers:5(27..31) ID:GPU-7b4762f2-9a22-89c9-c605-a570b2e68bd9 Layers:5(32..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-08-29T16:16:57.845+08:00 level=INFO source=ggml.go:486 msg="offloading 36 repeating layers to GPU"
time=2025-08-29T16:16:57.845+08:00 level=INFO source=ggml.go:492 msg="offloading output layer to GPU"
time=2025-08-29T16:16:57.845+08:00 level=INFO source=ggml.go:497 msg="offloaded 37/37 layers to GPU"
time=2025-08-29T16:16:57.849+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA0 size="3.3 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA1 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA2 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA3 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA4 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA5 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA6 size="8.2 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:310 msg="model weights" device=CUDA7 size="7.6 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA0 size="137.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA1 size="283.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA2 size="402.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA3 size="283.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA4 size="402.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA5 size="283.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA6 size="402.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:321 msg="kv cache" device=CUDA7 size="274.0 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA0 size="177.8 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA1 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA2 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA3 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA4 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA5 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA6 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:332 msg="compute graph" device=CUDA7 size="170.3 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=backend.go:342 msg="total memory" size="64.6 GiB"
time=2025-08-29T16:16:57.850+08:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-29T16:16:57.850+08:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-08-29T16:16:57.851+08:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-29T16:17:17.434+08:00 level=INFO source=server.go:1274 msg="llama runner started in 23.03 seconds"
[GIN] 2025/08/29 - 16:17:17 | 200 | 26.918377909s | 127.0.0.1 | POST "/api/generate"
panic: failed to sample token: sample: logits sum to NaN, check model output

goroutine 91 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0002d50e0, {0x55c49e8dfb20, 0xc0003b2960})
github.com/ollama/ollama/runner/ollamarunner/runner.go:375 +0x6a
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/ollamarunner/runner.go:1019 +0x4c9
time=2025-08-29T16:17:35.190+08:00 level=ERROR source=server.go:1444 msg="post predict" error="Post \"http://127.0.0.1:43169/completion\": EOF"
[GIN] 2025/08/29 - 16:17:35 | 200 | 1.545018903s | 127.0.0.1 | POST "/api/chat"


@arbv commented on GitHub (Aug 30, 2025):

The same thing happens on an RX 7900 XTX (a partial GPU offload case, obviously):

Aug 30 22:45:08 cauldron ollama[2294635]: time=2025-08-30T19:45:08.551Z level=INFO source=server.go:1274 msg="llama runner started in 11.89 seconds"
Aug 30 22:45:34 cauldron ollama[2294635]: panic: failed to sample token: sample: logits sum to NaN, check model output
Aug 30 22:45:34 cauldron ollama[2294635]:
Aug 30 22:45:34 cauldron ollama[2294635]: goroutine 69 [running]:
Aug 30 22:45:34 cauldron ollama[2294635]: github.com/ollama/ollama/runner/ollamarunner.(*Server).run(0xc0002610e0, {0x55fafd5bfb20, 0xc000130ff0})
Aug 30 22:45:34 cauldron ollama[2294635]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:375 +0x6a
Aug 30 22:45:34 cauldron ollama[2294635]: created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
Aug 30 22:45:34 cauldron ollama[2294635]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:1019 +0x4c9
Aug 30 22:45:36 cauldron ollama[2294635]: time=2025-08-30T19:45:36.062Z level=ERROR source=server.go:1444 msg="post predict" error="Post \"http://127.0.0.1:43811/completion\": EOF"


The latest Docker build of ollama is in use.
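
For Docker installs, the same OLLAMA_FLASH_ATTENTION=0 workaround can be passed into the container. A sketch for the ROCm image, following the standard ollama Docker instructions (the volume and port values are the documented defaults):

```shell
# Run the ROCm image with flash attention disabled. For NVIDIA GPUs,
# use the ollama/ollama image with --gpus=all instead of the /dev/kfd
# and /dev/dri device mappings.
docker run -d --device /dev/kfd --device /dev/dri \
  -e OLLAMA_FLASH_ATTENTION=0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```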


@jessegross commented on GitHub (Oct 3, 2025):

For those who are running into this issue, please test one of the 0.12.4 RCs and let us know whether it fixes the problem on V100s; we have updated the kernels.
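
On Linux, a specific pre-release can be installed via the install script's OLLAMA_VERSION override; the RC tag below is a placeholder, so substitute the actual tag from the releases page:

```shell
# Install a specific pre-release for testing (the version string here
# is a placeholder; use the real RC tag from the GitHub releases page).
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.12.4-rc0 sh
```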


@jessegross commented on GitHub (Oct 8, 2025):

Fixed by #12245
