[GH-ISSUE #13097] panic: failed to sample token with Vulkan #70727

Open
opened 2026-05-04 22:46:21 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @rrevi on GitHub (Nov 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13097

What is the issue?

Hello,

When I run the tinyllama model with a simple prompt, it works.

However, when I run the gpt-oss or deepseek-r1 model with the same prompt, I get a failure:

$ ollama run deepseek-r1 --verbose
>>> how's it going?
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details
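
For anyone reproducing this without the CLI, the same failure can be triggered through the HTTP API. The snippet below is a minimal sketch using Ollama's /api/generate endpoint with the model and prompt from the session above; on an affected setup the 500 response carries the same "model runner has unexpectedly stopped" message that corresponds to the runner panic in the logs.

```go
// Minimal sketch: reproduce the failure via the HTTP API instead of the CLI.
// Model name and prompt match the session above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{"model":"deepseek-r1","prompt":"how's it going?","stream":false}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// On affected setups this prints a 500 status with the
	// "model runner has unexpectedly stopped" message.
	fmt.Println(resp.Status, string(body))
}
```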

Hardware and OS:

OS: CachyOS x86_64
Host: HP OmniBook Ultra Laptop 14-fd0xxx (SBKPF)
Kernel: Linux 6.17.7-5-cachyos
Shell: zsh 5.9
Terminal: kitty 0.44.0
CPU: AMD Ryzen AI 9 365 (20) @ 5.09 GHz
GPU: AMD Radeon 890M Graphics [Integrated]
Memory: 5.08 GiB / 30.65 GiB (17%)
Swap: 516.00 KiB / 30.65 GiB (0%)

I am running the ollama-vulkan package.

Relevant log output

$ ollama serve
time=2025-11-15T04:08:20.185-05:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/rrevi/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-11-15T04:08:20.186-05:00 level=INFO source=images.go:522 msg="total blobs: 17"
time=2025-11-15T04:08:20.186-05:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-15T04:08:20.187-05:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"
time=2025-11-15T04:08:20.187-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-15T04:08:20.190-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43061"
time=2025-11-15T04:08:20.257-05:00 level=INFO source=types.go:42 msg="inference compute" id=00000000-c300-0000-0000-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="AMD Radeon 890M Graphics (RADV GFX1150)" libdirs=ollama driver=0.0 pci_id=0000:c3:00.0 type=iGPU total="15.8 GiB" available="15.0 GiB"
time=2025-11-15T04:08:20.257-05:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="15.8 GiB" threshold="20.0 GiB"
[GIN] 2025/11/15 - 04:08:27 | 200 |     104.787µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/15 - 04:08:27 | 200 |   50.492947ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/15 - 04:08:27 | 200 |    50.73331ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-15T04:08:27.891-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41093"
time=2025-11-15T04:08:28.023-05:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-11-15T04:08:28.024-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /home/rrevi/.ollama/models/blobs/sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d --port 39573"
time=2025-11-15T04:08:28.025-05:00 level=INFO source=sched.go:443 msg="system memory" total="30.7 GiB" free="25.6 GiB" free_swap="30.7 GiB"
time=2025-11-15T04:08:28.025-05:00 level=INFO source=sched.go:450 msg="gpu memory" id=00000000-c300-0000-0000-000000000000 library=Vulkan available="14.5 GiB" free="15.0 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-15T04:08:28.025-05:00 level=INFO source=server.go:702 msg="loading model" "model layers"=37 requested=-1
time=2025-11-15T04:08:28.041-05:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-15T04:08:28.042-05:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:39573"
time=2025-11-15T04:08:28.048-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:10 GPULayers:37[ID:00000000-c300-0000-0000-000000000000 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-15T04:08:28.074-05:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="DeepSeek R1 0528 Qwen3 8B" description="" num_tensors=399 num_key_values=33
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon 890M Graphics (RADV GFX1150) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
load_backend: loaded Vulkan backend from /usr/lib/ollama/libggml-vulkan.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
time=2025-11-15T04:08:28.107-05:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.AVX512_BF16=1 CPU.0.LLAMAFILE=1 CPU.1.SSE3=1 CPU.1.SSSE3=1 CPU.1.AVX=1 CPU.1.AVX2=1 CPU.1.F16C=1 CPU.1.FMA=1 CPU.1.BMI2=1 CPU.1.AVX512=1 CPU.1.AVX512_VBMI=1 CPU.1.AVX512_VNNI=1 CPU.1.AVX512_BF16=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 00000000-c300-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-11-15T04:08:28.123-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:10 GPULayers:37[ID:00000000-c300-0000-0000-000000000000 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 00000000-c300-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-11-15T04:08:28.491-05:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:10 GPULayers:37[ID:00000000-c300-0000-0000-000000000000 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="4.5 GiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="333.8 MiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="576.0 MiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=ggml.go:482 msg="offloading 36 repeating layers to GPU"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=ggml.go:494 msg="offloaded 37/37 layers to GPU"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="100.0 MiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=device.go:272 msg="total memory" size="5.5 GiB"
time=2025-11-15T04:08:28.491-05:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-11-15T04:08:28.491-05:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-15T04:08:28.492-05:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-11-15T04:08:34.271-05:00 level=INFO source=server.go:1332 msg="llama runner started in 6.25 seconds"
[GIN] 2025/11/15 - 04:08:34 | 200 |   6.49054114s |       127.0.0.1 | POST     "/api/generate"
panic: failed to sample token

goroutine 10 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc00022d0e0, {0x0, {0x55d44ea7be70, 0xc0009e8080}, {0x55d44ea86420, 0xc0009e7a40}, {0xc0009e80c0, 0x7, 0x8}, {{0x55d44ea86420, ...}, ...}, ...})
	/startdir/src/ollama/runner/ollamarunner/runner.go:763 +0x1aa7
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 7
	/startdir/src/ollama/runner/ollamarunner/runner.go:458 +0x2cd
time=2025-11-15T04:08:40.149-05:00 level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:39573/completion\": EOF"
[GIN] 2025/11/15 - 04:08:40 | 500 |  303.691963ms |       127.0.0.1 | POST     "/api/chat"
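
To make the trace above easier to read: computeBatch panics when the sampler cannot return a token. The snippet below is only a schematic illustration of that failure shape, not the actual ollama source; it assumes (as is common when a GPU kernel misbehaves) that the backend hands back logits that are all NaN, leaving greedy sampling with nothing to pick.

```go
// Schematic illustration only, not the ollama source: if a faulty backend
// kernel fills the logits with NaN, no token can be selected, sampling
// returns an error, and the runner turns that into the panic seen above.
package main

import (
	"errors"
	"math"
)

// greedySample is a hypothetical stand-in for the real sampler.
func greedySample(logits []float32) (int, error) {
	best, bestVal := -1, float32(math.Inf(-1))
	for i, l := range logits {
		if math.IsNaN(float64(l)) {
			continue // NaN logits carry no usable probability
		}
		if l > bestVal {
			best, bestVal = i, l
		}
	}
	if best < 0 {
		return 0, errors.New("all logits are invalid")
	}
	return best, nil
}

func main() {
	logits := []float32{float32(math.NaN()), float32(math.NaN())}
	if _, err := greedySample(logits); err != nil {
		panic("failed to sample token") // the message from runner.go:763
	}
}
```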

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.12.11

GiteaMirror added the vulkan, bug labels 2026-05-04 22:46:21 -05:00
Author
Owner

@y-hattori commented on GitHub (Nov 17, 2025):

I’d like to report that the same panic: failed to sample token error also occurs in an environment different from the one previously mentioned.
In my setup—Windows running on an Intel Arc GPU—the issue appears only when using the gemma3:27b model. It does not occur with gemma3:12b, gpt-oss:20b, or deepseek-r1:32b.

PS C:\WINDOWS\system32> ollama run gemma3:27b --verbose
>>> how's it going?
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details

PS C:\WINDOWS\system32> ollama run gemma3:12b --verbose
>>> how's it going?
It's going well, thanks for asking! As an AI, I don't *feel* things, but my systems are running smoothly and I'm
ready to help. 😊

How about you? How's it going with you?

total duration:       7.2137829s
load duration:        170.6806ms
prompt eval count:    15 token(s)
prompt eval duration: 876.738ms
prompt eval rate:     17.11 tokens/s
eval count:           52 token(s)
eval duration:        6.1142824s
eval rate:            8.50 tokens/s
>>> /bye

PS C:\WINDOWS\system32> ollama run gpt-oss:20b --verbose
>>> how's it going?
Thinking...
The user asks "how's it going?" This is a casual greeting. We should respond friendly, maybe mention we are here
to help. Probably brief.
...done thinking.

Hey! I'm doing great—just ready to help you out. How can I assist you today?

total duration:       8.8000835s
load duration:        137.3134ms
prompt eval count:    72 token(s)
prompt eval duration: 905.842ms
prompt eval rate:     79.48 tokens/s
eval count:           61 token(s)
eval duration:        7.6629961s
eval rate:            7.96 tokens/s
>>> /bye

PS C:\WINDOWS\system32> ollama run deepseek-r1:32b --verbose
>>> how's it going?
Hello! I'm just a virtual assistant, so I don't have feelings, but I'm here and ready to help you with whatever
you need. How are *you* doing? 😊

total duration:       20.7502084s
load duration:        89.9947ms
prompt eval count:    8 token(s)
prompt eval duration: 6.2129431s
prompt eval rate:     1.29 tokens/s
eval count:           44 token(s)
eval duration:        14.3651764s
eval rate:            3.06 tokens/s
>>> /bye

Hardware and OS:

OS: Windows 11 Pro 25H2 26200.7171
Host: GMKtec EVO-T1
Shell: Windows PowerShell
CPU: Intel(R) Core(TM) Ultra 9 285H (2.90 GHz)
GPU: Intel(R) Arc(TM) 140T GPU (32GB)
Memory: 64.0GB

Relevant log output

This server.log shows the output immediately after startup when I run ollama run gemma3:27b --verbose and then enter a prompt.

time=2025-11-17T19:38:38.531+09:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\htty\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:]"
time=2025-11-17T19:38:38.558+09:00 level=INFO source=images.go:522 msg="total blobs: 33"
time=2025-11-17T19:38:38.560+09:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-17T19:38:38.562+09:00 level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.12.11)"
time=2025-11-17T19:38:38.563+09:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-17T19:38:38.575+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61045"
time=2025-11-17T19:38:39.191+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61056"
time=2025-11-17T19:38:46.360+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61069"
time=2025-11-17T19:38:46.600+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61075"
time=2025-11-17T19:38:46.868+09:00 level=INFO source=types.go:42 msg="inference compute" id=8680517d-0300-0000-0002-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(TM) 140T GPU (32GB)" libdirs=ollama,vulkan driver=0.0 pci_id="" type=iGPU total="36.3 GiB" available="18.2 GiB"
[GIN] 2025/11/17 - 19:39:05 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/11/17 - 19:39:06 | 200 |     66.5065ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/17 - 19:39:06 | 200 |     63.7005ms |       127.0.0.1 | POST     "/api/show"
time=2025-11-17T19:39:06.290+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61094"
time=2025-11-17T19:39:06.659+09:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-17T19:39:06.660+09:00 level=INFO source=cpu_windows.go:164 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-11-17T19:39:06.660+09:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=16 efficiency=10 threads=16
time=2025-11-17T19:39:06.760+09:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-11-17T19:39:06.760+09:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\htty\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model C:\\Users\\htty\\.ollama\\models\\blobs\\sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 --port 61100"
time=2025-11-17T19:39:06.762+09:00 level=INFO source=sched.go:443 msg="system memory" total="63.5 GiB" free="36.0 GiB" free_swap="37.0 GiB"
time=2025-11-17T19:39:06.762+09:00 level=INFO source=sched.go:450 msg="gpu memory" id=8680517d-0300-0000-0002-000000000000 library=Vulkan available="17.8 GiB" free="18.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-17T19:39:06.762+09:00 level=INFO source=server.go:702 msg="loading model" "model layers"=63 requested=-1
time=2025-11-17T19:39:06.805+09:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-17T19:39:06.813+09:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:61100"
time=2025-11-17T19:39:06.819+09:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:63[ID:8680517d-0300-0000-0002-000000000000 Layers:63(0..62)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-17T19:39:06.859+09:00 level=INFO source=ggml.go:136 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
load_backend: loaded CPU backend from C:\Users\htty\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(TM) 140T GPU (32GB) (Intel Corporation) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from C:\Users\htty\AppData\Local\Programs\Ollama\lib\ollama\vulkan\ggml-vulkan.dll
time=2025-11-17T19:39:06.924+09:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(clang)
ggml_backend_vk_get_device_memory called: uuid 8680517d-0300-0000-0002-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000aae2
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) 140T GPU (32GB), LUID: 0x000000000000AAE2, Dedicated: 0.12 GB, Shared: 36.21 GB
[DXGI] Adapter Description: Intel(R) Arc(TM) 140T GPU (32GB), LUID: 0x00000000307180E4, Dedicated: 0.12 GB, Shared: 36.21 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000AE86, Dedicated: 0.00 GB, Shared: 36.21 GB
Integrated GPU (Intel(R) Arc(TM) 140T GPU (32GB)) with LUID 0x000000000000aae2 detected. Shared Total: 38878611947.00 bytes (36.21 GB), Shared Usage: 19454943232.00 bytes (18.12 GB), Dedicated Total: 134217728.00 bytes (0.12 GB), Dedicated Usage: 0.00 bytes (0.00 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 19557886443 total: 39012829675
time=2025-11-17T19:39:07.204+09:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:63[ID:8680517d-0300-0000-0002-000000000000 Layers:63(0..62)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680517d-0300-0000-0002-000000000000
ggml_backend_vk_get_device_memory called: luid 0x000000000000aae2
ggml_dxgi_pdh_init called
DXGI + PDH Initialized. Getting GPU free memory info
[DXGI] Adapter Description: Intel(R) Arc(TM) 140T GPU (32GB), LUID: 0x000000000000AAE2, Dedicated: 0.12 GB, Shared: 36.21 GB
[DXGI] Adapter Description: Intel(R) Arc(TM) 140T GPU (32GB), LUID: 0x00000000307180E4, Dedicated: 0.12 GB, Shared: 36.21 GB
[DXGI] Adapter Description: Microsoft Basic Render Driver, LUID: 0x000000000000AE86, Dedicated: 0.00 GB, Shared: 36.21 GB
Integrated GPU (Intel(R) Arc(TM) 140T GPU (32GB)) with LUID 0x000000000000aae2 detected. Shared Total: 38878611947.00 bytes (36.21 GB), Shared Usage: 19454943232.00 bytes (18.12 GB), Dedicated Total: 134217728.00 bytes (0.12 GB), Dedicated Usage: 0.00 bytes (0.00 GB)
ggml_backend_vk_get_device_memory utilizing DXGI + PDH memory reporting free: 19557886443 total: 39012829675
time=2025-11-17T19:39:10.622+09:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:63[ID:8680517d-0300-0000-0002-000000000000 Layers:63(0..62)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-17T19:39:10.622+09:00 level=INFO source=ggml.go:482 msg="offloading 62 repeating layers to GPU"
time=2025-11-17T19:39:10.622+09:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-11-17T19:39:10.622+09:00 level=INFO source=ggml.go:494 msg="offloaded 63/63 layers to GPU"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="16.2 GiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="944.0 MiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="181.8 MiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.5 MiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=device.go:272 msg="total memory" size="18.4 GiB"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-11-17T19:39:10.623+09:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-17T19:39:10.623+09:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-11-17T19:39:30.658+09:00 level=INFO source=server.go:1332 msg="llama runner started in 23.90 seconds"
[GIN] 2025/11/17 - 19:39:30 | 200 |   24.5489256s |       127.0.0.1 | POST     "/api/generate"
panic: failed to sample token

goroutine 1115 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc0002170e0, {0x0, {0x7ff6a3e1d680, 0xc001382180}, {0x7ff6a3e29ee8, 0xc001d682e8}, {0xc0000a0380, 0xf, 0x10}, {{0x7ff6a3e29ee8, ...}, ...}, ...})
	github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 7
	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd
time=2025-11-17T19:39:37.801+09:00 level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:61100/completion\": read tcp 127.0.0.1:61106->127.0.0.1:61100: wsarecv: An existing connection was forcibly closed by the remote host."
[GIN] 2025/11/17 - 19:39:37 | 500 |    3.0343527s |       127.0.0.1 | POST     "/api/chat"
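
As a sanity check on the DXGI + PDH numbers in this log: the raw byte counts reported by ggml_backend_vk_get_device_memory line up with the "total 36.3 GiB / available 18.2 GiB" figures in the inference-compute line once converted to GiB, so the memory accounting itself looks consistent. A quick conversion, using only the values printed above:

```go
// Converts the raw byte counts from the DXGI + PDH log lines above into GiB.
package main

import "fmt"

func main() {
	const gib = 1 << 30 // bytes per GiB
	free, total := 19557886443.0, 39012829675.0
	fmt.Printf("free:  %.1f GiB\n", free/gib)  // ~18.2 GiB, matches "available"
	fmt.Printf("total: %.1f GiB\n", total/gib) // ~36.3 GiB, matches "total"
}
```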

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.12.11

Author
Owner

@zodiac715 commented on GitHub (Dec 9, 2025):

Same here, running Ollama :latest in Docker; tested with mistral-3:3b and mistral-3:8b.
GPU: Intel Arc A770

ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2025-12-09T06:04:59.125Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:04:59.334Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:8680a056-0800-0000-0300-000000000000 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:04:59.667Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:8680a056-0800-0000-0300-000000000000 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:494 msg="offloaded 35/35 layers to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="5.3 GiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:245 msg="model weights" device=CPU size="288.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="544.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="736.2 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:272 msg="total memory" size="6.9 GiB"
time=2025-12-09T06:04:59.667Z level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-09T06:04:59.668Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-09T06:04:59.672Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-09T06:05:00.927Z level=INFO source=server.go:1332 msg="llama runner started in 1.86 seconds"
panic: failed to sample token

goroutine 643 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc000226d20, {0x0, {0x562ae3261ef0, 0xc000be8240}, {0x562ae326c360, 0xc0006d6990}, {0xc000bd8908, 0xbf, 0x11f}, {{0x562ae326c360, ...}, ...}, ...})
	github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 25
	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd
time=2025-12-09T06:05:01.652Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:42677/completion\": EOF"
time=2025-12-09T06:05:49.966Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:vulkan OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-09T06:05:49.969Z level=INFO source=images.go:522 msg="total blobs: 8"
time=2025-12-09T06:05:49.970Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-09T06:05:49.971Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.2)"
time=2025-12-09T06:05:49.971Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-09T06:05:49.972Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 46815"
time=2025-12-09T06:05:50.006Z level=INFO source=types.go:42 msg="inference compute" id=8680a056-0800-0000-0300-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(tm) A770 Graphics (DG2)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:03:00.0 type=discrete total="15.9 GiB" available="14.3 GiB"
time=2025-12-09T06:05:50.006Z level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="15.9 GiB" threshold="20.0 GiB"
time=2025-12-09T06:06:00.764Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44207"
time=2025-12-09T06:06:00.822Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 40819"
time=2025-12-09T06:06:00.918Z level=INFO source=sched.go:443 msg="system memory" total="31.1 GiB" free="31.0 GiB" free_swap="0 B"
time=2025-12-09T06:06:00.918Z level=INFO source=sched.go:450 msg="gpu memory" id=8680a056-0800-0000-0300-000000000000 library=Vulkan available="13.9 GiB" free="14.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1
time=2025-12-09T06:06:00.927Z level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-09T06:06:00.929Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:40819"
time=2025-12-09T06:06:00.940Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:06:00.987Z level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2025-12-09T06:06:01.008Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:06:01.228Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:06:01.552Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:482 msg="offloading 26 repeating layers to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:494 msg="offloaded 27/27 layers to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="2.7 GiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:245 msg="model weights" device=CPU size="315.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="416.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="736.2 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="6.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:272 msg="total memory" size="4.2 GiB"
time=2025-12-09T06:06:01.552Z level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-09T06:06:01.552Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-09T06:06:01.556Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-09T06:06:02.309Z level=INFO source=server.go:1332 msg="llama runner started in 1.39 seconds"
panic: failed to sample token

(don't know how to insert the code block for the logs, sorry)
Thank you!

<!-- gh-comment-id:3630514136 --> @zodiac715 commented on GitHub (Dec 9, 2025):

Same here, running Ollama :latest in Docker, tested with mistral-3:3b and mistral-3:8b.
GPU: Intel Arc A770

ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2025-12-09T06:04:59.125Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:04:59.334Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:8680a056-0800-0000-0300-000000000000 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:04:59.667Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:35[ID:8680a056-0800-0000-0300-000000000000 Layers:35(0..34)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:482 msg="offloading 34 repeating layers to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=ggml.go:494 msg="offloaded 35/35 layers to GPU"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="5.3 GiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:245 msg="model weights" device=CPU size="288.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="544.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="736.2 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-12-09T06:04:59.667Z level=INFO source=device.go:272 msg="total memory" size="6.9 GiB"
time=2025-12-09T06:04:59.667Z level=INFO source=sched.go:517 msg="loaded runners" count=2
time=2025-12-09T06:04:59.668Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-09T06:04:59.672Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-09T06:05:00.927Z level=INFO source=server.go:1332 msg="llama runner started in 1.86 seconds"
panic: failed to sample token

goroutine 643 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc000226d20, {0x0, {0x562ae3261ef0, 0xc000be8240}, {0x562ae326c360, 0xc0006d6990}, {0xc000bd8908, 0xbf, 0x11f}, {{0x562ae326c360, ...}, ...}, ...})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 25
        github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd
time=2025-12-09T06:05:01.652Z level=ERROR source=server.go:1539 msg="post predict" error="Post \"http://127.0.0.1:42677/completion\": EOF"
time=2025-12-09T06:05:49.966Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY:vulkan OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-12-09T06:05:49.969Z level=INFO source=images.go:522 msg="total blobs: 8"
time=2025-12-09T06:05:49.970Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-12-09T06:05:49.971Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.2)"
time=2025-12-09T06:05:49.971Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-12-09T06:05:49.972Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 46815"
time=2025-12-09T06:05:50.006Z level=INFO source=types.go:42 msg="inference compute" id=8680a056-0800-0000-0300-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="Intel(R) Arc(tm) A770 Graphics (DG2)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:03:00.0 type=discrete total="15.9 GiB" available="14.3 GiB"
time=2025-12-09T06:05:50.006Z level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="15.9 GiB" threshold="20.0 GiB"
time=2025-12-09T06:06:00.764Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44207"
time=2025-12-09T06:06:00.822Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-094eb0a75095db5a9f83e51323879750023d7050d008a9b2899bf9f47c4926e5 --port 40819"
time=2025-12-09T06:06:00.918Z level=INFO source=sched.go:443 msg="system memory" total="31.1 GiB" free="31.0 GiB" free_swap="0 B"
time=2025-12-09T06:06:00.918Z level=INFO source=sched.go:450 msg="gpu memory" id=8680a056-0800-0000-0300-000000000000 library=Vulkan available="13.9 GiB" free="14.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-12-09T06:06:00.918Z level=INFO source=server.go:702 msg="loading model" "model layers"=27 requested=-1
time=2025-12-09T06:06:00.927Z level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-12-09T06:06:00.929Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:40819"
time=2025-12-09T06:06:00.940Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:06:00.987Z level=INFO source=ggml.go:136 msg="" architecture=mistral3 file_type=Q4_K_M name="" description="" num_tensors=458 num_key_values=45
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /usr/lib/ollama/vulkan/libggml-vulkan.so
time=2025-12-09T06:06:01.008Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:06:01.228Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 8680a056-0800-0000-0300-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
time=2025-12-09T06:06:01.552Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:27[ID:8680a056-0800-0000-0300-000000000000 Layers:27(0..26)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:482 msg="offloading 26 repeating layers to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=ggml.go:494 msg="offloaded 27/27 layers to GPU"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:240 msg="model weights" device=Vulkan0 size="2.7 GiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:245 msg="model weights" device=CPU size="315.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:251 msg="kv cache" device=Vulkan0 size="416.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="736.2 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="6.0 MiB"
time=2025-12-09T06:06:01.552Z level=INFO source=device.go:272 msg="total memory" size="4.2 GiB"
time=2025-12-09T06:06:01.552Z level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-12-09T06:06:01.552Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-12-09T06:06:01.556Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-12-09T06:06:02.309Z level=INFO source=server.go:1332 msg="llama runner started in 1.39 seconds"
panic: failed to sample token
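For triage context: the stack traces in both reports point at (*Server).computeBatch in runner/ollamarunner/runner.go, which panics when token sampling fails. The Go sketch below only illustrates the shape of that failure under one common hypothesis for ggml-based runners, namely that a misbehaving GPU backend hands back NaN logits so no token can be selected. The function sampleToken, its signature, and the all-NaN scenario are assumptions for illustration, not ollama's actual code.

package main

import (
	"fmt"
	"math"
)

// sampleToken picks the highest-scoring token; it fails when no logit is
// usable, e.g. when a buggy backend returns only NaN/Inf values.
func sampleToken(logits []float32) (int, error) {
	best, bestVal := -1, float32(math.Inf(-1))
	for i, l := range logits {
		if math.IsNaN(float64(l)) || math.IsInf(float64(l), 0) {
			continue // skip unusable values
		}
		if l > bestVal {
			best, bestVal = i, l
		}
	}
	if best < 0 {
		return 0, fmt.Errorf("no finite logit to sample from")
	}
	return best, nil
}

func main() {
	// Simulate a backend returning corrupted logits for every vocab entry.
	logits := make([]float32, 8)
	for i := range logits {
		logits[i] = float32(math.NaN())
	}

	if _, err := sampleToken(logits); err != nil {
		// A runner that cannot produce a token mid-batch has nothing sane
		// to return, so it aborts, mirroring the
		// "panic: failed to sample token" seen in the logs above.
		panic(fmt.Sprintf("failed to sample token: %v", err))
	}
}

If that hypothesis holds, running the same models with the Vulkan backend disabled (both server configs above show OLLAMA_VULKAN:true; assuming that setting toggles the backend) should avoid the panic, which would narrow the fault to the Vulkan path rather than to the models themselves.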