[GH-ISSUE #13634] panic: failed to sample token #34729

opened 2026-04-22 18:33:06 -05:00 by GiteaMirror · 0 comments

Originally created by @Chicob13 on GitHub (Jan 6, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13634

What is the issue?

When using gpt-oss, my two NVIDIA GPUs get dropped with a panic error message.
Llama3 works fine on the same setup.
Below is the log output, captured with OLLAMA_DEBUG=1, from an attempt to run gpt-oss:20b.
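For reference, this is how the debug log was captured. A minimal sketch (it assumes the `ollama` binary and the gpt-oss model are already installed, and that the server is started manually in a shell; under systemd the variable would instead be set via a unit override):

```shell
# Assumption: server started manually in the foreground shell.
# OLLAMA_DEBUG=1 switches the server to DEBUG-level logging.
export OLLAMA_DEBUG=1

# Start the server, then trigger the panic by running the affected model.
ollama serve &
ollama run gpt-oss
```

With OLLAMA_DEBUG=1 set, the server emits the DEBUG-level lines shown in the log below in addition to the usual INFO output.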

Relevant log output

Command typed in the console:

ollama run gpt-oss


time=2026-01-06T11:30:47.472Z level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-01-06T11:30:47.539Z level=INFO source=images.go:493 msg="total blobs: 49"
time=2026-01-06T11:30:47.567Z level=INFO source=images.go:500 msg="total unused blobs removed: 0"
time=2026-01-06T11:30:47.594Z level=INFO source=routes.go:1607 msg="Listening on [::]:11434 (version 0.13.5)"
time=2026-01-06T11:30:47.594Z level=DEBUG source=sched.go:120 msg="starting llm scheduler"
time=2026-01-06T11:30:47.594Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-06T11:30:47.601Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39525"
time=2026-01-06T11:30:47.601Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-01-06T11:30:49.958Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=2.363296776s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-01-06T11:30:49.958Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38465"
time=2026-01-06T11:30:49.958Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13
time=2026-01-06T11:30:51.364Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=1.406906099s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[]
time=2026-01-06T11:30:51.365Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=2
time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA P102-100" compute=6.1 id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 pci_id=0000:03:00.0
time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA P102-100" compute=6.1 id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 pci_id=0000:42:00.0
time=2026-01-06T11:30:51.365Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34881"
time=2026-01-06T11:30:51.365Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 GGML_CUDA_INIT=1
time=2026-01-06T11:30:51.365Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43173"
time=2026-01-06T11:30:51.365Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 GGML_CUDA_INIT=1
time=2026-01-06T11:30:52.026Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=661.77533ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 GGML_CUDA_INIT:1]"
time=2026-01-06T11:30:52.031Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=666.002331ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 GGML_CUDA_INIT:1]"
time=2026-01-06T11:30:52.031Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=4.437009255s
time=2026-01-06T11:30:52.031Z level=INFO source=types.go:42 msg="inference compute" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA P102-100" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:03:00.0 type=discrete total="10.0 GiB" available="9.9 GiB"
time=2026-01-06T11:30:52.031Z level=INFO source=types.go:42 msg="inference compute" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 filter_id="" library=CUDA compute=6.1 name=CUDA1 description="NVIDIA P102-100" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:42:00.0 type=discrete total="10.0 GiB" available="9.8 GiB"
[GIN] 2026/01/06 - 11:23:47 | 200 | 36.393232766s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/01/06 - 11:32:15 | 200 |   63.502849ms |    192.168.0.86 | GET      "/api/tags"
[GIN] 2026/01/06 - 11:53:51 | 200 |      44.109µs |       127.0.0.1 | HEAD     "/"
time=2026-01-06T11:53:51.814Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/01/06 - 11:53:51 | 200 |  312.846209ms |       127.0.0.1 | POST     "/api/show"
time=2026-01-06T11:53:52.122Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/01/06 - 11:53:52 | 200 |  304.120561ms |       127.0.0.1 | POST     "/api/show"
time=2026-01-06T11:53:52.665Z level=DEBUG source=runner.go:264 msg="refreshing free memory"
time=2026-01-06T11:53:52.665Z level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
time=2026-01-06T11:53:52.665Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40033"
time=2026-01-06T11:53:52.665Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-01-06T11:53:53.161Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=496.298657ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[]
time=2026-01-06T11:53:53.161Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=496.473188ms
time=2026-01-06T11:53:53.164Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-01-06T11:53:53.164Z level=DEBUG source=sched.go:194 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=6 gpu_count=2
time=2026-01-06T11:53:53.248Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2026-01-06T11:53:53.249Z level=DEBUG source=sched.go:211 msg="loading first model" model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
time=2026-01-06T11:53:53.535Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2026-01-06T11:53:53.536Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:53:53.536Z level=INFO source=server.go:245 msg="enabling flash attention"
time=2026-01-06T11:53:53.537Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 42057"
time=2026-01-06T11:53:53.537Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12
time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:443 msg="system memory" total="62.9 GiB" free="61.6 GiB" free_swap="0 B"
time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA available="9.5 GiB" free="9.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA available="9.3 GiB" free="9.8 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-06T11:53:53.537Z level=INFO source=server.go:746 msg="loading model" "model layers"=25 requested=-1
time=2026-01-06T11:53:53.555Z level=INFO source=runner.go:1405 msg="starting ollama engine"
time=2026-01-06T11:53:53.560Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:42057"
time=2026-01-06T11:53:53.569Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-06T11:53:53.702Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default=""
time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.description default=""
time=2026-01-06T11:53:53.703Z level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so
time=2026-01-06T11:53:53.713Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA P102-100, compute capability 6.1, VMM: yes, ID: GPU-d47e036d-17f8-d41b-f481-b576fd01fb68
  Device 1: NVIDIA P102-100, compute capability 6.1, VMM: yes, ID: GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
time=2026-01-06T11:53:54.126Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-01-06T11:53:54.130Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:53:54.961Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=2
time=2026-01-06T11:53:54.966Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=2
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="300.0 MiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:272 msg="total memory" size="13.3 GiB"
time=2026-01-06T11:53:54.967Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA0.Graph=134879360
time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB"
time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B"
time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]"
time=2026-01-06T11:53:54.968Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-06T11:53:55.091Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2026-01-06T11:53:55.094Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:53:55.355Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3
time=2026-01-06T11:53:55.360Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:272 msg="total memory" size="13.4 GiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=134879232 required.CUDA1.ID=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA1.Cache="[0 0 0 0 0 0 0 0 0 0 0 0 0 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA1.Graph=114694272
time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="109.4 MiB"
time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]"
time=2026-01-06T11:53:55.361Z level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-06T11:53:55.483Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32
time=2026-01-06T11:53:55.489Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:53:55.549Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3
time=2026-01-06T11:53:55.554Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:272 msg="total memory" size="13.4 GiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=134879232 required.CUDA1.ID=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA1.Cache="[0 0 0 0 0 0 0 0 0 0 0 0 0 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA1.Graph=114694272
time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="109.4 MiB"
time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]"
time=2026-01-06T11:53:55.555Z level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-01-06T11:53:55.555Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2026-01-06T11:53:55.556Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-01-06T11:53:55.556Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:272 msg="total memory" size="13.4 GiB"
time=2026-01-06T11:53:55.556Z level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2026-01-06T11:53:55.556Z level=INFO source=server.go:1338 msg="waiting for llama runner to start responding"
time=2026-01-06T11:53:55.556Z level=INFO source=server.go:1372 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-06T11:53:55.807Z level=DEBUG source=server.go:1382 msg="model load progress 0.01"
time=2026-01-06T11:53:56.058Z level=DEBUG source=server.go:1382 msg="model load progress 0.01"
time=2026-01-06T11:53:56.309Z level=DEBUG source=server.go:1382 msg="model load progress 0.02"
time=2026-01-06T11:53:56.560Z level=DEBUG source=server.go:1382 msg="model load progress 0.03"
time=2026-01-06T11:53:56.811Z level=DEBUG source=server.go:1382 msg="model load progress 0.04"
time=2026-01-06T11:53:57.062Z level=DEBUG source=server.go:1382 msg="model load progress 0.05"
time=2026-01-06T11:53:57.313Z level=DEBUG source=server.go:1382 msg="model load progress 0.05"
time=2026-01-06T11:53:57.564Z level=DEBUG source=server.go:1382 msg="model load progress 0.07"
time=2026-01-06T11:53:57.815Z level=DEBUG source=server.go:1382 msg="model load progress 0.07"
time=2026-01-06T11:53:58.066Z level=DEBUG source=server.go:1382 msg="model load progress 0.08"
time=2026-01-06T11:53:58.317Z level=DEBUG source=server.go:1382 msg="model load progress 0.08"
time=2026-01-06T11:53:58.568Z level=DEBUG source=server.go:1382 msg="model load progress 0.10"
time=2026-01-06T11:53:58.819Z level=DEBUG source=server.go:1382 msg="model load progress 0.10"
time=2026-01-06T11:53:59.070Z level=DEBUG source=server.go:1382 msg="model load progress 0.11"
time=2026-01-06T11:53:59.321Z level=DEBUG source=server.go:1382 msg="model load progress 0.12"
time=2026-01-06T11:53:59.572Z level=DEBUG source=server.go:1382 msg="model load progress 0.13"
time=2026-01-06T11:53:59.823Z level=DEBUG source=server.go:1382 msg="model load progress 0.13"
time=2026-01-06T11:54:00.074Z level=DEBUG source=server.go:1382 msg="model load progress 0.15"
time=2026-01-06T11:54:00.325Z level=DEBUG source=server.go:1382 msg="model load progress 0.15"
time=2026-01-06T11:54:00.576Z level=DEBUG source=server.go:1382 msg="model load progress 0.16"
time=2026-01-06T11:54:00.827Z level=DEBUG source=server.go:1382 msg="model load progress 0.17"
time=2026-01-06T11:54:01.078Z level=DEBUG source=server.go:1382 msg="model load progress 0.18"
time=2026-01-06T11:54:01.329Z level=DEBUG source=server.go:1382 msg="model load progress 0.19"
time=2026-01-06T11:54:01.580Z level=DEBUG source=server.go:1382 msg="model load progress 0.19"
time=2026-01-06T11:54:01.831Z level=DEBUG source=server.go:1382 msg="model load progress 0.20"
time=2026-01-06T11:54:02.082Z level=DEBUG source=server.go:1382 msg="model load progress 0.21"
time=2026-01-06T11:54:02.333Z level=DEBUG source=server.go:1382 msg="model load progress 0.22"
time=2026-01-06T11:54:02.584Z level=DEBUG source=server.go:1382 msg="model load progress 0.23"
time=2026-01-06T11:54:02.835Z level=DEBUG source=server.go:1382 msg="model load progress 0.24"
time=2026-01-06T11:54:03.086Z level=DEBUG source=server.go:1382 msg="model load progress 0.24"
time=2026-01-06T11:54:03.337Z level=DEBUG source=server.go:1382 msg="model load progress 0.26"
time=2026-01-06T11:54:03.588Z level=DEBUG source=server.go:1382 msg="model load progress 0.26"
time=2026-01-06T11:54:03.839Z level=DEBUG source=server.go:1382 msg="model load progress 0.28"
time=2026-01-06T11:54:04.090Z level=DEBUG source=server.go:1382 msg="model load progress 0.28"
time=2026-01-06T11:54:04.341Z level=DEBUG source=server.go:1382 msg="model load progress 0.29"
time=2026-01-06T11:54:04.592Z level=DEBUG source=server.go:1382 msg="model load progress 0.30"
time=2026-01-06T11:54:04.844Z level=DEBUG source=server.go:1382 msg="model load progress 0.31"
time=2026-01-06T11:54:05.095Z level=DEBUG source=server.go:1382 msg="model load progress 0.32"
time=2026-01-06T11:54:05.346Z level=DEBUG source=server.go:1382 msg="model load progress 0.33"
time=2026-01-06T11:54:05.597Z level=DEBUG source=server.go:1382 msg="model load progress 0.34"
time=2026-01-06T11:54:05.848Z level=DEBUG source=server.go:1382 msg="model load progress 0.34"
time=2026-01-06T11:54:06.100Z level=DEBUG source=server.go:1382 msg="model load progress 0.35"
time=2026-01-06T11:54:06.351Z level=DEBUG source=server.go:1382 msg="model load progress 0.37"
time=2026-01-06T11:54:06.602Z level=DEBUG source=server.go:1382 msg="model load progress 0.37"
time=2026-01-06T11:54:06.853Z level=DEBUG source=server.go:1382 msg="model load progress 0.38"
time=2026-01-06T11:54:07.104Z level=DEBUG source=server.go:1382 msg="model load progress 0.39"
time=2026-01-06T11:54:07.355Z level=DEBUG source=server.go:1382 msg="model load progress 0.40"
time=2026-01-06T11:54:07.607Z level=DEBUG source=server.go:1382 msg="model load progress 0.40"
time=2026-01-06T11:54:07.858Z level=DEBUG source=server.go:1382 msg="model load progress 0.42"
time=2026-01-06T11:54:08.109Z level=DEBUG source=server.go:1382 msg="model load progress 0.43"
time=2026-01-06T11:54:08.360Z level=DEBUG source=server.go:1382 msg="model load progress 0.44"
time=2026-01-06T11:54:08.611Z level=DEBUG source=server.go:1382 msg="model load progress 0.44"
time=2026-01-06T11:54:08.862Z level=DEBUG source=server.go:1382 msg="model load progress 0.45"
time=2026-01-06T11:54:09.113Z level=DEBUG source=server.go:1382 msg="model load progress 0.46"
time=2026-01-06T11:54:09.364Z level=DEBUG source=server.go:1382 msg="model load progress 0.48"
time=2026-01-06T11:54:09.615Z level=DEBUG source=server.go:1382 msg="model load progress 0.48"
time=2026-01-06T11:54:09.866Z level=DEBUG source=server.go:1382 msg="model load progress 0.49"
time=2026-01-06T11:54:10.117Z level=DEBUG source=server.go:1382 msg="model load progress 0.50"
time=2026-01-06T11:54:10.368Z level=DEBUG source=server.go:1382 msg="model load progress 0.51"
time=2026-01-06T11:54:10.619Z level=DEBUG source=server.go:1382 msg="model load progress 0.52"
time=2026-01-06T11:54:10.870Z level=DEBUG source=server.go:1382 msg="model load progress 0.53"
time=2026-01-06T11:54:11.122Z level=DEBUG source=server.go:1382 msg="model load progress 0.54"
time=2026-01-06T11:54:11.373Z level=DEBUG source=server.go:1382 msg="model load progress 0.54"
time=2026-01-06T11:54:11.623Z level=DEBUG source=server.go:1382 msg="model load progress 0.55"
time=2026-01-06T11:54:11.875Z level=DEBUG source=server.go:1382 msg="model load progress 0.56"
time=2026-01-06T11:54:12.125Z level=DEBUG source=server.go:1382 msg="model load progress 0.57"
time=2026-01-06T11:54:12.376Z level=DEBUG source=server.go:1382 msg="model load progress 0.58"
time=2026-01-06T11:54:12.628Z level=DEBUG source=server.go:1382 msg="model load progress 0.58"
time=2026-01-06T11:54:12.879Z level=DEBUG source=server.go:1382 msg="model load progress 0.59"
time=2026-01-06T11:54:13.130Z level=DEBUG source=server.go:1382 msg="model load progress 0.60"
time=2026-01-06T11:54:13.381Z level=DEBUG source=server.go:1382 msg="model load progress 0.60"
time=2026-01-06T11:54:13.631Z level=DEBUG source=server.go:1382 msg="model load progress 0.62"
time=2026-01-06T11:54:13.882Z level=DEBUG source=server.go:1382 msg="model load progress 0.63"
time=2026-01-06T11:54:14.133Z level=DEBUG source=server.go:1382 msg="model load progress 0.63"
time=2026-01-06T11:54:14.385Z level=DEBUG source=server.go:1382 msg="model load progress 0.64"
time=2026-01-06T11:54:14.636Z level=DEBUG source=server.go:1382 msg="model load progress 0.65"
time=2026-01-06T11:54:14.887Z level=DEBUG source=server.go:1382 msg="model load progress 0.65"
time=2026-01-06T11:54:15.138Z level=DEBUG source=server.go:1382 msg="model load progress 0.66"
time=2026-01-06T11:54:15.389Z level=DEBUG source=server.go:1382 msg="model load progress 0.67"
time=2026-01-06T11:54:15.640Z level=DEBUG source=server.go:1382 msg="model load progress 0.67"
time=2026-01-06T11:54:15.891Z level=DEBUG source=server.go:1382 msg="model load progress 0.68"
time=2026-01-06T11:54:16.142Z level=DEBUG source=server.go:1382 msg="model load progress 0.69"
time=2026-01-06T11:54:16.393Z level=DEBUG source=server.go:1382 msg="model load progress 0.70"
time=2026-01-06T11:54:16.644Z level=DEBUG source=server.go:1382 msg="model load progress 0.70"
time=2026-01-06T11:54:16.895Z level=DEBUG source=server.go:1382 msg="model load progress 0.71"
time=2026-01-06T11:54:17.146Z level=DEBUG source=server.go:1382 msg="model load progress 0.71"
time=2026-01-06T11:54:17.397Z level=DEBUG source=server.go:1382 msg="model load progress 0.72"
time=2026-01-06T11:54:17.648Z level=DEBUG source=server.go:1382 msg="model load progress 0.72"
time=2026-01-06T11:54:17.899Z level=DEBUG source=server.go:1382 msg="model load progress 0.73"
time=2026-01-06T11:54:18.150Z level=DEBUG source=server.go:1382 msg="model load progress 0.74"
time=2026-01-06T11:54:18.401Z level=DEBUG source=server.go:1382 msg="model load progress 0.75"
time=2026-01-06T11:54:18.652Z level=DEBUG source=server.go:1382 msg="model load progress 0.76"
time=2026-01-06T11:54:18.903Z level=DEBUG source=server.go:1382 msg="model load progress 0.77"
time=2026-01-06T11:54:19.154Z level=DEBUG source=server.go:1382 msg="model load progress 0.77"
time=2026-01-06T11:54:19.405Z level=DEBUG source=server.go:1382 msg="model load progress 0.78"
time=2026-01-06T11:54:19.656Z level=DEBUG source=server.go:1382 msg="model load progress 0.79"
time=2026-01-06T11:54:19.906Z level=DEBUG source=server.go:1382 msg="model load progress 0.79"
time=2026-01-06T11:54:20.157Z level=DEBUG source=server.go:1382 msg="model load progress 0.80"
time=2026-01-06T11:54:20.408Z level=DEBUG source=server.go:1382 msg="model load progress 0.81"
time=2026-01-06T11:54:20.659Z level=DEBUG source=server.go:1382 msg="model load progress 0.81"
time=2026-01-06T11:54:20.910Z level=DEBUG source=server.go:1382 msg="model load progress 0.82"
time=2026-01-06T11:54:21.162Z level=DEBUG source=server.go:1382 msg="model load progress 0.83"
time=2026-01-06T11:54:21.413Z level=DEBUG source=server.go:1382 msg="model load progress 0.84"
time=2026-01-06T11:54:21.664Z level=DEBUG source=server.go:1382 msg="model load progress 0.85"
time=2026-01-06T11:54:21.915Z level=DEBUG source=server.go:1382 msg="model load progress 0.86"
time=2026-01-06T11:54:22.166Z level=DEBUG source=server.go:1382 msg="model load progress 0.86"
time=2026-01-06T11:54:22.418Z level=DEBUG source=server.go:1382 msg="model load progress 0.87"
time=2026-01-06T11:54:22.669Z level=DEBUG source=server.go:1382 msg="model load progress 0.88"
time=2026-01-06T11:54:22.920Z level=DEBUG source=server.go:1382 msg="model load progress 0.89"
time=2026-01-06T11:54:23.172Z level=DEBUG source=server.go:1382 msg="model load progress 0.90"
time=2026-01-06T11:54:23.423Z level=DEBUG source=server.go:1382 msg="model load progress 0.91"
time=2026-01-06T11:54:23.674Z level=DEBUG source=server.go:1382 msg="model load progress 0.91"
time=2026-01-06T11:54:23.925Z level=DEBUG source=server.go:1382 msg="model load progress 0.92"
time=2026-01-06T11:54:24.176Z level=DEBUG source=server.go:1382 msg="model load progress 0.93"
time=2026-01-06T11:54:24.427Z level=DEBUG source=server.go:1382 msg="model load progress 0.94"
time=2026-01-06T11:54:24.678Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:24.929Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.180Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.431Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.682Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.934Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:26.185Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.436Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.687Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.938Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:27.189Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:27.440Z level=DEBUG source=server.go:1382 msg="model load progress 0.97"
time=2026-01-06T11:54:27.692Z level=DEBUG source=server.go:1382 msg="model load progress 0.98"
time=2026-01-06T11:54:27.943Z level=DEBUG source=server.go:1382 msg="model load progress 0.98"
time=2026-01-06T11:54:28.194Z level=DEBUG source=server.go:1382 msg="model load progress 0.99"
time=2026-01-06T11:54:28.445Z level=DEBUG source=server.go:1382 msg="model load progress 1.00"
time=2026-01-06T11:54:28.492Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:54:28.697Z level=INFO source=server.go:1376 msg="llama runner started in 35.16 seconds"
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:529 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:537 msg="context for request finished"
[GIN] 2026/01/06 - 11:54:28 | 200 | 36.570596338s |       127.0.0.1 | POST     "/api/generate"
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 duration=5m0s
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 refCount=0
time=2026-01-06T11:55:51.648Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
time=2026-01-06T11:55:51.649Z level=DEBUG source=server.go:1509 msg="completion request" images=0 prompt=307 format=""
time=2026-01-06T11:55:51.726Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=68 used=0 remaining=68

####################################
Typed question in console:

>>> Hello
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details
####################################

panic: failed to sample token

goroutine 90 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc0002790e0, {0x0, {0x5630acd8e250, 0xc0006b0040}, {0x5630acd98b20, 0xc002461830}, {0xc0002c3b08, 0x44, 0x8f}, {{0x5630acd98b20, ...}, ...}, ...})
        github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 62
        github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd
time=2026-01-06T11:55:52.094Z level=ERROR source=server.go:1583 msg="post predict" error="Post \"http://127.0.0.1:42057/completion\": EOF"
[GIN] 2026/01/06 - 11:55:52 | 500 |  1.699267303s |       127.0.0.1 | POST     "/api/chat"
time=2026-01-06T11:55:52.094Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192
time=2026-01-06T11:55:52.094Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 duration=5m0s
time=2026-01-06T11:55:52.095Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 refCount=0

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.13.5

Originally created by @Chicob13 on GitHub (Jan 6, 2026). Original GitHub issue: https://github.com/ollama/ollama/issues/13634 ### What is the issue? When using gpt-oss my two nvidia gpu's are getting dropped with a panic error message. I can use Llama3 Below is the output log at OLLAMA_DEBUG=1 from trying to use gpt-oss:20b ### Relevant log output ```shell #################################### Typed in console: ollama run gpt-oss #################################### time=2026-01-06T11:30:47.472Z level=INFO source=routes.go:1554 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2026-01-06T11:30:47.539Z level=INFO source=images.go:493 msg="total blobs: 49" time=2026-01-06T11:30:47.567Z level=INFO source=images.go:500 msg="total unused blobs removed: 0" time=2026-01-06T11:30:47.594Z level=INFO source=routes.go:1607 msg="Listening on [::]:11434 (version 0.13.5)" time=2026-01-06T11:30:47.594Z level=DEBUG source=sched.go:120 msg="starting llm scheduler" time=2026-01-06T11:30:47.594Z level=INFO 
source=runner.go:67 msg="discovering available GPUs..." time=2026-01-06T11:30:47.601Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39525" time=2026-01-06T11:30:47.601Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-01-06T11:30:49.958Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=2.363296776s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[] time=2026-01-06T11:30:49.958Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38465" time=2026-01-06T11:30:49.958Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v13 time=2026-01-06T11:30:51.364Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=1.406906099s OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v13]" extra_envs=map[] time=2026-01-06T11:30:51.365Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. 
To enable, set OLLAMA_VULKAN=1" time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=2 time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA P102-100" compute=6.1 id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 pci_id=0000:03:00.0 time=2026-01-06T11:30:51.365Z level=DEBUG source=runner.go:146 msg="verifying if device is supported" library=/usr/lib/ollama/cuda_v12 description="NVIDIA P102-100" compute=6.1 id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 pci_id=0000:42:00.0 time=2026-01-06T11:30:51.365Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34881" time=2026-01-06T11:30:51.365Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 GGML_CUDA_INIT=1 time=2026-01-06T11:30:51.365Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 43173" time=2026-01-06T11:30:51.365Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 CUDA_VISIBLE_DEVICES=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 GGML_CUDA_INIT=1 time=2026-01-06T11:30:52.026Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=661.77533ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama 
/usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 GGML_CUDA_INIT:1]" time=2026-01-06T11:30:52.031Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=666.002331ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs="map[CUDA_VISIBLE_DEVICES:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 GGML_CUDA_INIT:1]" time=2026-01-06T11:30:52.031Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=4.437009255s time=2026-01-06T11:30:52.031Z level=INFO source=types.go:42 msg="inference compute" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 filter_id="" library=CUDA compute=6.1 name=CUDA0 description="NVIDIA P102-100" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:03:00.0 type=discrete total="10.0 GiB" available="9.9 GiB" time=2026-01-06T11:30:52.031Z level=INFO source=types.go:42 msg="inference compute" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 filter_id="" library=CUDA compute=6.1 name=CUDA1 description="NVIDIA P102-100" libdirs=ollama,cuda_v12 driver=12.9 pci_id=0000:42:00.0 type=discrete total="10.0 GiB" available="9.8 GiB" [GIN] 2026/01/06 - 11:23:47 | 200 | 36.393232766s | 127.0.0.1 | POST "/api/generate" [GIN] 2026/01/06 - 11:32:15 | 200 | 63.502849ms | 192.168.0.86 | GET "/api/tags" [GIN] 2026/01/06 - 11:53:51 | 200 | 44.109µs | 127.0.0.1 | HEAD "/" time=2026-01-06T11:53:51.814Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/01/06 - 11:53:51 | 200 | 312.846209ms | 127.0.0.1 | POST "/api/show" time=2026-01-06T11:53:52.122Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 [GIN] 2026/01/06 - 11:53:52 | 200 | 304.120561ms | 127.0.0.1 | POST "/api/show" time=2026-01-06T11:53:52.665Z level=DEBUG source=runner.go:264 msg="refreshing free memory" time=2026-01-06T11:53:52.665Z level=DEBUG source=runner.go:328 msg="unable to refresh all GPUs with existing 
runners, performing bootstrap discovery" time=2026-01-06T11:53:52.665Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 40033" time=2026-01-06T11:53:52.665Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-01-06T11:53:53.161Z level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=496.298657ms OLLAMA_LIBRARY_PATH="[/usr/lib/ollama /usr/lib/ollama/cuda_v12]" extra_envs=map[] time=2026-01-06T11:53:53.161Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=496.473188ms time=2026-01-06T11:53:53.164Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-01-06T11:53:53.164Z level=DEBUG source=sched.go:194 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=6 gpu_count=2 time=2026-01-06T11:53:53.248Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 time=2026-01-06T11:53:53.249Z level=DEBUG source=sched.go:211 msg="loading first model" model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb time=2026-01-06T11:53:53.535Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 time=2026-01-06T11:53:53.536Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0 time=2026-01-06T11:53:53.536Z level=INFO source=server.go:245 msg="enabling flash attention" time=2026-01-06T11:53:53.537Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model 
/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 42057" time=2026-01-06T11:53:53.537Z level=DEBUG source=server.go:430 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_ORIGINS=* OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 OLLAMA_HOST=0.0.0.0:11434 OLLAMA_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama/cuda_v12 time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:443 msg="system memory" total="62.9 GiB" free="61.6 GiB" free_swap="0 B" time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA available="9.5 GiB" free="9.9 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-01-06T11:53:53.537Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA available="9.3 GiB" free="9.8 GiB" minimum="457.0 MiB" overhead="0 B" time=2026-01-06T11:53:53.537Z level=INFO source=server.go:746 msg="loading model" "model layers"=25 requested=-1 time=2026-01-06T11:53:53.555Z level=INFO source=runner.go:1405 msg="starting ollama engine" time=2026-01-06T11:53:53.560Z level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:42057" time=2026-01-06T11:53:53.569Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-01-06T11:53:53.702Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.name default="" time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:282 msg="key with type not found" 
key=general.description default="" time=2026-01-06T11:53:53.703Z level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32 time=2026-01-06T11:53:53.703Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-sandybridge.so time=2026-01-06T11:53:53.713Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama/cuda_v12 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 2 CUDA devices: Device 0: NVIDIA P102-100, compute capability 6.1, VMM: yes, ID: GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Device 1: NVIDIA P102-100, compute capability 6.1, VMM: yes, ID: GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so time=2026-01-06T11:53:54.126Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2026-01-06T11:53:54.130Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0 time=2026-01-06T11:53:54.961Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=2 time=2026-01-06T11:53:54.966Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=2 time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB" time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="300.0 MiB" 
time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB" time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB" time=2026-01-06T11:53:54.967Z level=DEBUG source=device.go:272 msg="total memory" size="13.3 GiB" time=2026-01-06T11:53:54.967Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA0.Graph=134879360 time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB" time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="0 B" time=2026-01-06T11:53:54.968Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]" time=2026-01-06T11:53:54.968Z level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 
GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-01-06T11:53:55.091Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 time=2026-01-06T11:53:55.094Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0 time=2026-01-06T11:53:55.355Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3 time=2026-01-06T11:53:55.360Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3 time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=device.go:272 msg="total memory" size="13.4 GiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 0 0 0 0 
0 0 0 0 0 0 0 0]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=134879232 required.CUDA1.ID=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA1.Cache="[0 0 0 0 0 0 0 0 0 0 0 0 0 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA1.Graph=114694272 time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="109.4 MiB" time=2026-01-06T11:53:55.361Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]" time=2026-01-06T11:53:55.361Z level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-01-06T11:53:55.483Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=general.alignment default=32 time=2026-01-06T11:53:55.489Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0 time=2026-01-06T11:53:55.549Z 
level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3 time=2026-01-06T11:53:55.554Z level=DEBUG source=ggml.go:852 msg="compute graph" nodes=1399 splits=3 time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=device.go:272 msg="total memory" size="13.4 GiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:771 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=5898240 required.CUDA0.ID=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Cache="[9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 0 0 0 0 0 0 0 0 0 0 0 0]" required.CUDA0.Graph=134879232 required.CUDA1.ID=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 required.CUDA1.Weights="[0 0 0 0 0 0 0 0 0 0 0 0 0 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA1.Cache="[0 0 0 0 0 0 0 
0 0 0 0 0 0 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 9437184 16777216 0]" required.CUDA1.Graph=114694272 time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 library=CUDA "available layer vram"="9.3 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="128.6 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:965 msg="available gpu" id=GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 library=CUDA "available layer vram"="9.2 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="109.4 MiB" time=2026-01-06T11:53:55.555Z level=DEBUG source=server.go:782 msg="new layout created" layers="25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)]" time=2026-01-06T11:53:55.555Z level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Layers:13(0..12) ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Layers:12(13..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2026-01-06T11:53:55.555Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU" time=2026-01-06T11:53:55.556Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU" time=2026-01-06T11:53:55.556Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU" time=2026-01-06T11:53:55.556Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="5.8 GiB" time=2026-01-06T11:53:55.556Z level=INFO source=device.go:240 msg="model weights" device=CUDA1 size="6.0 GiB" time=2026-01-06T11:53:55.556Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" time=2026-01-06T11:53:55.556Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="159.0 MiB" 
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:251 msg="kv cache" device=CUDA1 size="141.0 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="128.6 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:262 msg="compute graph" device=CUDA1 size="109.4 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="5.6 MiB"
time=2026-01-06T11:53:55.556Z level=INFO source=device.go:272 msg="total memory" size="13.4 GiB"
time=2026-01-06T11:53:55.556Z level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2026-01-06T11:53:55.556Z level=INFO source=server.go:1338 msg="waiting for llama runner to start responding"
time=2026-01-06T11:53:55.556Z level=INFO source=server.go:1372 msg="waiting for server to become available" status="llm server loading model"
time=2026-01-06T11:53:55.807Z level=DEBUG source=server.go:1382 msg="model load progress 0.01"
time=2026-01-06T11:53:56.058Z level=DEBUG source=server.go:1382 msg="model load progress 0.01"
time=2026-01-06T11:53:56.309Z level=DEBUG source=server.go:1382 msg="model load progress 0.02"
time=2026-01-06T11:53:56.560Z level=DEBUG source=server.go:1382 msg="model load progress 0.03"
time=2026-01-06T11:53:56.811Z level=DEBUG source=server.go:1382 msg="model load progress 0.04"
time=2026-01-06T11:53:57.062Z level=DEBUG source=server.go:1382 msg="model load progress 0.05"
time=2026-01-06T11:53:57.313Z level=DEBUG source=server.go:1382 msg="model load progress 0.05"
time=2026-01-06T11:53:57.564Z level=DEBUG source=server.go:1382 msg="model load progress 0.07"
time=2026-01-06T11:53:57.815Z level=DEBUG source=server.go:1382 msg="model load progress 0.07"
time=2026-01-06T11:53:58.066Z level=DEBUG source=server.go:1382 msg="model load progress 0.08"
time=2026-01-06T11:53:58.317Z level=DEBUG source=server.go:1382 msg="model load progress 0.08"
time=2026-01-06T11:53:58.568Z level=DEBUG source=server.go:1382 msg="model load progress 0.10"
time=2026-01-06T11:53:58.819Z level=DEBUG source=server.go:1382 msg="model load progress 0.10"
time=2026-01-06T11:53:59.070Z level=DEBUG source=server.go:1382 msg="model load progress 0.11"
time=2026-01-06T11:53:59.321Z level=DEBUG source=server.go:1382 msg="model load progress 0.12"
time=2026-01-06T11:53:59.572Z level=DEBUG source=server.go:1382 msg="model load progress 0.13"
time=2026-01-06T11:53:59.823Z level=DEBUG source=server.go:1382 msg="model load progress 0.13"
time=2026-01-06T11:54:00.074Z level=DEBUG source=server.go:1382 msg="model load progress 0.15"
time=2026-01-06T11:54:00.325Z level=DEBUG source=server.go:1382 msg="model load progress 0.15"
time=2026-01-06T11:54:00.576Z level=DEBUG source=server.go:1382 msg="model load progress 0.16"
time=2026-01-06T11:54:00.827Z level=DEBUG source=server.go:1382 msg="model load progress 0.17"
time=2026-01-06T11:54:01.078Z level=DEBUG source=server.go:1382 msg="model load progress 0.18"
time=2026-01-06T11:54:01.329Z level=DEBUG source=server.go:1382 msg="model load progress 0.19"
time=2026-01-06T11:54:01.580Z level=DEBUG source=server.go:1382 msg="model load progress 0.19"
time=2026-01-06T11:54:01.831Z level=DEBUG source=server.go:1382 msg="model load progress 0.20"
time=2026-01-06T11:54:02.082Z level=DEBUG source=server.go:1382 msg="model load progress 0.21"
time=2026-01-06T11:54:02.333Z level=DEBUG source=server.go:1382 msg="model load progress 0.22"
time=2026-01-06T11:54:02.584Z level=DEBUG source=server.go:1382 msg="model load progress 0.23"
time=2026-01-06T11:54:02.835Z level=DEBUG source=server.go:1382 msg="model load progress 0.24"
time=2026-01-06T11:54:03.086Z level=DEBUG source=server.go:1382 msg="model load progress 0.24"
time=2026-01-06T11:54:03.337Z level=DEBUG source=server.go:1382 msg="model load progress 0.26"
time=2026-01-06T11:54:03.588Z level=DEBUG source=server.go:1382 msg="model load progress 0.26"
time=2026-01-06T11:54:03.839Z level=DEBUG source=server.go:1382 msg="model load progress 0.28"
time=2026-01-06T11:54:04.090Z level=DEBUG source=server.go:1382 msg="model load progress 0.28"
time=2026-01-06T11:54:04.341Z level=DEBUG source=server.go:1382 msg="model load progress 0.29"
time=2026-01-06T11:54:04.592Z level=DEBUG source=server.go:1382 msg="model load progress 0.30"
time=2026-01-06T11:54:04.844Z level=DEBUG source=server.go:1382 msg="model load progress 0.31"
time=2026-01-06T11:54:05.095Z level=DEBUG source=server.go:1382 msg="model load progress 0.32"
time=2026-01-06T11:54:05.346Z level=DEBUG source=server.go:1382 msg="model load progress 0.33"
time=2026-01-06T11:54:05.597Z level=DEBUG source=server.go:1382 msg="model load progress 0.34"
time=2026-01-06T11:54:05.848Z level=DEBUG source=server.go:1382 msg="model load progress 0.34"
time=2026-01-06T11:54:06.100Z level=DEBUG source=server.go:1382 msg="model load progress 0.35"
time=2026-01-06T11:54:06.351Z level=DEBUG source=server.go:1382 msg="model load progress 0.37"
time=2026-01-06T11:54:06.602Z level=DEBUG source=server.go:1382 msg="model load progress 0.37"
time=2026-01-06T11:54:06.853Z level=DEBUG source=server.go:1382 msg="model load progress 0.38"
time=2026-01-06T11:54:07.104Z level=DEBUG source=server.go:1382 msg="model load progress 0.39"
time=2026-01-06T11:54:07.355Z level=DEBUG source=server.go:1382 msg="model load progress 0.40"
time=2026-01-06T11:54:07.607Z level=DEBUG source=server.go:1382 msg="model load progress 0.40"
time=2026-01-06T11:54:07.858Z level=DEBUG source=server.go:1382 msg="model load progress 0.42"
time=2026-01-06T11:54:08.109Z level=DEBUG source=server.go:1382 msg="model load progress 0.43"
time=2026-01-06T11:54:08.360Z level=DEBUG source=server.go:1382 msg="model load progress 0.44"
time=2026-01-06T11:54:08.611Z level=DEBUG source=server.go:1382 msg="model load progress 0.44"
time=2026-01-06T11:54:08.862Z level=DEBUG source=server.go:1382 msg="model load progress 0.45"
time=2026-01-06T11:54:09.113Z level=DEBUG source=server.go:1382 msg="model load progress 0.46"
time=2026-01-06T11:54:09.364Z level=DEBUG source=server.go:1382 msg="model load progress 0.48"
time=2026-01-06T11:54:09.615Z level=DEBUG source=server.go:1382 msg="model load progress 0.48"
time=2026-01-06T11:54:09.866Z level=DEBUG source=server.go:1382 msg="model load progress 0.49"
time=2026-01-06T11:54:10.117Z level=DEBUG source=server.go:1382 msg="model load progress 0.50"
time=2026-01-06T11:54:10.368Z level=DEBUG source=server.go:1382 msg="model load progress 0.51"
time=2026-01-06T11:54:10.619Z level=DEBUG source=server.go:1382 msg="model load progress 0.52"
time=2026-01-06T11:54:10.870Z level=DEBUG source=server.go:1382 msg="model load progress 0.53"
time=2026-01-06T11:54:11.122Z level=DEBUG source=server.go:1382 msg="model load progress 0.54"
time=2026-01-06T11:54:11.373Z level=DEBUG source=server.go:1382 msg="model load progress 0.54"
time=2026-01-06T11:54:11.623Z level=DEBUG source=server.go:1382 msg="model load progress 0.55"
time=2026-01-06T11:54:11.875Z level=DEBUG source=server.go:1382 msg="model load progress 0.56"
time=2026-01-06T11:54:12.125Z level=DEBUG source=server.go:1382 msg="model load progress 0.57"
time=2026-01-06T11:54:12.376Z level=DEBUG source=server.go:1382 msg="model load progress 0.58"
time=2026-01-06T11:54:12.628Z level=DEBUG source=server.go:1382 msg="model load progress 0.58"
time=2026-01-06T11:54:12.879Z level=DEBUG source=server.go:1382 msg="model load progress 0.59"
time=2026-01-06T11:54:13.130Z level=DEBUG source=server.go:1382 msg="model load progress 0.60"
time=2026-01-06T11:54:13.381Z level=DEBUG source=server.go:1382 msg="model load progress 0.60"
time=2026-01-06T11:54:13.631Z level=DEBUG source=server.go:1382 msg="model load progress 0.62"
time=2026-01-06T11:54:13.882Z level=DEBUG source=server.go:1382 msg="model load progress 0.63"
time=2026-01-06T11:54:14.133Z level=DEBUG source=server.go:1382 msg="model load progress 0.63"
time=2026-01-06T11:54:14.385Z level=DEBUG source=server.go:1382 msg="model load progress 0.64"
time=2026-01-06T11:54:14.636Z level=DEBUG source=server.go:1382 msg="model load progress 0.65"
time=2026-01-06T11:54:14.887Z level=DEBUG source=server.go:1382 msg="model load progress 0.65"
time=2026-01-06T11:54:15.138Z level=DEBUG source=server.go:1382 msg="model load progress 0.66"
time=2026-01-06T11:54:15.389Z level=DEBUG source=server.go:1382 msg="model load progress 0.67"
time=2026-01-06T11:54:15.640Z level=DEBUG source=server.go:1382 msg="model load progress 0.67"
time=2026-01-06T11:54:15.891Z level=DEBUG source=server.go:1382 msg="model load progress 0.68"
time=2026-01-06T11:54:16.142Z level=DEBUG source=server.go:1382 msg="model load progress 0.69"
time=2026-01-06T11:54:16.393Z level=DEBUG source=server.go:1382 msg="model load progress 0.70"
time=2026-01-06T11:54:16.644Z level=DEBUG source=server.go:1382 msg="model load progress 0.70"
time=2026-01-06T11:54:16.895Z level=DEBUG source=server.go:1382 msg="model load progress 0.71"
time=2026-01-06T11:54:17.146Z level=DEBUG source=server.go:1382 msg="model load progress 0.71"
time=2026-01-06T11:54:17.397Z level=DEBUG source=server.go:1382 msg="model load progress 0.72"
time=2026-01-06T11:54:17.648Z level=DEBUG source=server.go:1382 msg="model load progress 0.72"
time=2026-01-06T11:54:17.899Z level=DEBUG source=server.go:1382 msg="model load progress 0.73"
time=2026-01-06T11:54:18.150Z level=DEBUG source=server.go:1382 msg="model load progress 0.74"
time=2026-01-06T11:54:18.401Z level=DEBUG source=server.go:1382 msg="model load progress 0.75"
time=2026-01-06T11:54:18.652Z level=DEBUG source=server.go:1382 msg="model load progress 0.76"
time=2026-01-06T11:54:18.903Z level=DEBUG source=server.go:1382 msg="model load progress 0.77"
time=2026-01-06T11:54:19.154Z level=DEBUG source=server.go:1382 msg="model load progress 0.77"
time=2026-01-06T11:54:19.405Z level=DEBUG source=server.go:1382 msg="model load progress 0.78"
time=2026-01-06T11:54:19.656Z level=DEBUG source=server.go:1382 msg="model load progress 0.79"
time=2026-01-06T11:54:19.906Z level=DEBUG source=server.go:1382 msg="model load progress 0.79"
time=2026-01-06T11:54:20.157Z level=DEBUG source=server.go:1382 msg="model load progress 0.80"
time=2026-01-06T11:54:20.408Z level=DEBUG source=server.go:1382 msg="model load progress 0.81"
time=2026-01-06T11:54:20.659Z level=DEBUG source=server.go:1382 msg="model load progress 0.81"
time=2026-01-06T11:54:20.910Z level=DEBUG source=server.go:1382 msg="model load progress 0.82"
time=2026-01-06T11:54:21.162Z level=DEBUG source=server.go:1382 msg="model load progress 0.83"
time=2026-01-06T11:54:21.413Z level=DEBUG source=server.go:1382 msg="model load progress 0.84"
time=2026-01-06T11:54:21.664Z level=DEBUG source=server.go:1382 msg="model load progress 0.85"
time=2026-01-06T11:54:21.915Z level=DEBUG source=server.go:1382 msg="model load progress 0.86"
time=2026-01-06T11:54:22.166Z level=DEBUG source=server.go:1382 msg="model load progress 0.86"
time=2026-01-06T11:54:22.418Z level=DEBUG source=server.go:1382 msg="model load progress 0.87"
time=2026-01-06T11:54:22.669Z level=DEBUG source=server.go:1382 msg="model load progress 0.88"
time=2026-01-06T11:54:22.920Z level=DEBUG source=server.go:1382 msg="model load progress 0.89"
time=2026-01-06T11:54:23.172Z level=DEBUG source=server.go:1382 msg="model load progress 0.90"
time=2026-01-06T11:54:23.423Z level=DEBUG source=server.go:1382 msg="model load progress 0.91"
time=2026-01-06T11:54:23.674Z level=DEBUG source=server.go:1382 msg="model load progress 0.91"
time=2026-01-06T11:54:23.925Z level=DEBUG source=server.go:1382 msg="model load progress 0.92"
time=2026-01-06T11:54:24.176Z level=DEBUG source=server.go:1382 msg="model load progress 0.93"
time=2026-01-06T11:54:24.427Z level=DEBUG source=server.go:1382 msg="model load progress 0.94"
time=2026-01-06T11:54:24.678Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:24.929Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.180Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.431Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.682Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:25.934Z level=DEBUG source=server.go:1382 msg="model load progress 0.95"
time=2026-01-06T11:54:26.185Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.436Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.687Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:26.938Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:27.189Z level=DEBUG source=server.go:1382 msg="model load progress 0.96"
time=2026-01-06T11:54:27.440Z level=DEBUG source=server.go:1382 msg="model load progress 0.97"
time=2026-01-06T11:54:27.692Z level=DEBUG source=server.go:1382 msg="model load progress 0.98"
time=2026-01-06T11:54:27.943Z level=DEBUG source=server.go:1382 msg="model load progress 0.98"
time=2026-01-06T11:54:28.194Z level=DEBUG source=server.go:1382 msg="model load progress 0.99"
time=2026-01-06T11:54:28.445Z level=DEBUG source=server.go:1382 msg="model load progress 1.00"
time=2026-01-06T11:54:28.492Z level=DEBUG source=ggml.go:282 msg="key with type not found" key=gptoss.pooling_type default=0
time=2026-01-06T11:54:28.697Z level=INFO source=server.go:1376 msg="llama runner started in 35.16 seconds"
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:529 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:537 msg="context for request finished"
[GIN] 2026/01/06 - 11:54:28 | 200 | 36.570596338s | 127.0.0.1 | POST "/api/generate"
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 duration=5m0s
time=2026-01-06T11:54:28.697Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 refCount=0
time=2026-01-06T11:55:51.648Z level=DEBUG source=sched.go:626 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
time=2026-01-06T11:55:51.649Z level=DEBUG source=server.go:1509 msg="completion request" images=0 prompt=307 format=""
time=2026-01-06T11:55:51.726Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=68 used=0 remaining=68

####################################
Typed in question in console:

>>> Hello
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details
####################################

panic: failed to sample token

goroutine 90 [running]:
github.com/ollama/ollama/runner/ollamarunner.(*Server).computeBatch(0xc0002790e0, {0x0, {0x5630acd8e250, 0xc0006b0040}, {0x5630acd98b20, 0xc002461830}, {0xc0002c3b08, 0x44, 0x8f}, {{0x5630acd98b20, ...}, ...}, ...})
	github.com/ollama/ollama/runner/ollamarunner/runner.go:763 +0x1a85
created by github.com/ollama/ollama/runner/ollamarunner.(*Server).run in goroutine 62
	github.com/ollama/ollama/runner/ollamarunner/runner.go:458 +0x2cd

time=2026-01-06T11:55:52.094Z level=ERROR source=server.go:1583 msg="post predict" error="Post \"http://127.0.0.1:42057/completion\": EOF"
[GIN] 2026/01/06 - 11:55:52 | 500 | 1.699267303s | 127.0.0.1 | POST "/api/chat"
time=2026-01-06T11:55:52.094Z level=DEBUG source=sched.go:385 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192
time=2026-01-06T11:55:52.094Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 duration=5m0s
time=2026-01-06T11:55:52.095Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:latest runner.inference="[{ID:GPU-d47e036d-17f8-d41b-f481-b576fd01fb68 Library:CUDA} {ID:GPU-eb7288f0-cd96-74d8-78e1-a8b70c334d95 Library:CUDA}]" runner.size="13.4 GiB" runner.vram="13.4 GiB" runner.parallel=1 runner.pid=118 runner.model=/root/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=8192 refCount=0
```

### OS

Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.13.5
GiteaMirror added the bug label 2026-04-22 18:33:06 -05:00
Reference: github-starred/ollama#34729