[GH-ISSUE #13083] GPU utilization issue #55172

Closed
opened 2026-04-29 08:26:52 -05:00 by GiteaMirror · 5 comments

Originally created by @qihouji on GitHub (Nov 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13083

After updating to version 0.12.11, GPU utilization dropped to 50%–60%.
System environment: Windows 11, CUDA 13 (RTX 5090 GPU).
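For anyone reproducing the measurement, GPU utilization in a report like this is typically sampled with nvidia-smi (a sketch, not part of the original report; works the same on Windows and Linux):

nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1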


@rick-github commented on GitHub (Nov 14, 2025):

[Server log](https://docs.ollama.com/troubleshooting) will help in debugging.
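For reference, the usual way to collect that log (a minimal sketch, per the troubleshooting guide linked above): on a systemd-managed Linux install,

journalctl -e -u ollama        # recent server log entries
journalctl -u ollama -f        # follow the log live

and on Windows the server log is written to %LOCALAPPDATA%\Ollama\server.log. Setting OLLAMA_DEBUG=1 in the server environment enables the DEBUG-level output seen in the logs below.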


@moritzknecht commented on GitHub (Nov 15, 2025):

I also experience a performance regression on my Linux system after upgrading from 0.12.10 to 0.12.11:

ollama version is 0.12.10
ollama run gpt-oss:20b-q4-128k --verbose hi
Thinking...
User says "hi". We need a friendly greeting. Probably ask how can I help.
...done thinking.

Hello! 👋 How can I help you today?

total duration: 2.842455436s
load duration: 2.628336154s
prompt eval count: 70 token(s)
prompt eval duration: 23.686579ms
prompt eval rate: 2955.26 tokens/s
eval count: 40 token(s)
eval duration: 161.480478ms
eval rate: 247.71 tokens/s

ollama version is 0.12.11
ollama run gpt-oss:20b-q4-128k --verbose hi
Thinking...
User says "hi". It's a greeting. Should respond politely. Probably ask how can help.
...done thinking.

Hello! 👋 How can I help you today?

total duration: 2.87789525s
load duration: 2.632629161s
prompt eval count: 70 token(s)
prompt eval duration: 23.924083ms
prompt eval rate: 2925.92 tokens/s
eval count: 40 token(s)
eval duration: 191.243912ms
eval rate: 209.16 tokens/s
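The prompt eval rate is essentially unchanged between the two builds (2955.26 vs 2925.92 tokens/s, about 1%), but generation slows from 247.71 to 209.16 tokens/s: 209.16 / 247.71 ≈ 0.844, i.e. a roughly 15.6% regression in token generation for the same model, prompt, and hardware.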

---------------- LOG 0.12.10 ----------------

Nov 15 09:52:56 ai ollama[28485]: [GIN] 2025/11/15 - 09:52:56 | 200 | 18.907µs | 127.0.0.1 | HEAD "/"
Nov 15 09:52:56 ai ollama[28485]: time=2025-11-15T09:52:56.876Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:52:56 ai ollama[28485]: [GIN] 2025/11/15 - 09:52:56 | 200 | 58.293681ms | 127.0.0.1 | POST "/api/show"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.022Z level=DEBUG source=runner.go:243 msg="refreshing free memory"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.022Z level=DEBUG source=runner.go:307 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.022Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 14257"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.022Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.178Z level=DEBUG source=runner.go:415 msg="bootstrap discovery took" duration=156.625375ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.178Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=156.700899ms
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.197Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.198Z level=DEBUG source=sched.go:211 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=INFO source=server.go:215 msg="enabling flash attention"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 37393"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=INFO source=server.go:653 msg="loading model" "model layers"=25 requested=-1
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=INFO source=server.go:658 msg="system memory" total="46.8 GiB" free="43.0 GiB" free_swap="8.0 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.271Z level=INFO source=server.go:665 msg="gpu memory" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.277Z level=INFO source=runner.go:1349 msg="starting ollama engine"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.277Z level=INFO source=runner.go:1384 msg="Server listening on 127.0.0.1:37393"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.283Z level=INFO source=runner.go:1222 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.317Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.317Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.317Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.317Z level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.317Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Nov 15 09:52:57 ai ollama[28485]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.320Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Nov 15 09:52:57 ai ollama[28485]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 15 09:52:57 ai ollama[28485]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 15 09:52:57 ai ollama[28485]: ggml_cuda_init: found 1 CUDA devices:
Nov 15 09:52:57 ai ollama[28485]: Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9
Nov 15 09:52:57 ai ollama[28485]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.451Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.452Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.906Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:212 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:217 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:223 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:234 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:239 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=device.go:244 msg="total memory" size="15.6 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=server.go:695 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=server.go:892 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=DEBUG source=server.go:706 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.915Z level=INFO source=runner.go:1222 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.944Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:52:57 ai ollama[28485]: time=2025-11-15T09:52:57.946Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.473Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.502Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:212 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:217 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:223 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:234 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:239 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=device.go:244 msg="total memory" size="15.6 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=server.go:695 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=server.go:892 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=DEBUG source=server.go:706 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=runner.go:1222 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:212 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:217 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:223 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:234 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:239 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=device.go:244 msg="total memory" size="15.6 GiB"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=sched.go:500 msg="loaded runners" count=1
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.503Z level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.504Z level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.504Z level=DEBUG source=server.go:1295 msg="model load progress 0.00"
Nov 15 09:52:58 ai ollama[28485]: time=2025-11-15T09:52:58.754Z level=DEBUG source=server.go:1295 msg="model load progress 0.27"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.005Z level=DEBUG source=server.go:1295 msg="model load progress 0.54"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.256Z level=DEBUG source=server.go:1295 msg="model load progress 0.83"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.496Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.507Z level=INFO source=server.go:1289 msg="llama runner started in 2.24 seconds"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.507Z level=DEBUG source=sched.go:512 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=28786 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.507Z level=DEBUG source=server.go:1401 msg="completion request" images=0 prompt=306 format=""
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.527Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=70 used=0 remaining=70
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.652Z level=WARN source=harmonyparser.go:122 msg="harmony parser: found message start tag in the middle of the content" content="\n<|start|>"
Nov 15 09:52:59 ai ollama[28485]: [GIN] 2025/11/15 - 09:52:59 | 200 | 2.870151156s | 127.0.0.1 | POST "/api/generate"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.748Z level=DEBUG source=sched.go:520 msg="context for request finished"
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.748Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=28786 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 duration=1h0m0s
Nov 15 09:52:59 ai ollama[28485]: time=2025-11-15T09:52:59.748Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=28786 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 refCount=0

---------------- LOG 0.12.11 ----------------

Nov 15 09:51:22 ai ollama[27541]: [GIN] 2025/11/15 - 09:51:22 | 200 | 19.488µs | 127.0.0.1 | HEAD "/"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.080Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:51:22 ai ollama[27541]: [GIN] 2025/11/15 - 09:51:22 | 200 | 61.358602ms | 127.0.0.1 | POST "/api/show"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.205Z level=DEBUG source=runner.go:246 msg="refreshing free memory"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.205Z level=DEBUG source=runner.go:310 msg="unable to refresh all GPUs with existing runners, performing bootstrap discovery"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.205Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 10781"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.205Z level=DEBUG source=server.go:393 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.363Z level=DEBUG source=runner.go:418 msg="bootstrap discovery took" duration=158.218073ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.363Z level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=158.293037ms
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.381Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.381Z level=DEBUG source=sched.go:211 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=INFO source=server.go:209 msg="enabling flash attention"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 11935"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=server.go:393 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=sched.go:443 msg="system memory" total="46.8 GiB" free="43.0 GiB" free_swap="8.0 GiB"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.467Z level=INFO source=runner.go:1398 msg="starting ollama engine"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.468Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:11935"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.473Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.506Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default=""
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default=""
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=459 num_key_values=32
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama
Nov 15 09:51:22 ai ollama[27541]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.509Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12
Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: found 1 CUDA devices:
Nov 15 09:51:22 ai ollama[27541]: Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9
Nov 15 09:51:22 ai ollama[27541]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.642Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.643Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.099Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.108Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:272 msg="total memory" size="15.6 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=server.go:727 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=server.go:921 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=server.go:738 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.139Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.140Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.683Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.711Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:272 msg="total memory" size="15.6 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:727 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:921 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:738 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:272 msg="total memory" size="15.6 GiB"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=sched.go:517 msg="loaded runners" count=1
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.963Z level=DEBUG source=server.go:1338 msg="model load progress 0.27"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.214Z level=DEBUG source=server.go:1338 msg="model load progress 0.55"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.465Z level=DEBUG source=server.go:1338 msg="model load progress 0.83"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=INFO source=server.go:1332 msg="llama runner started in 2.25 seconds"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=DEBUG source=sched.go:529 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=DEBUG source=server.go:1465 msg="completion request" images=0 prompt=306 format=""
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.736Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=70 used=0 remaining=70
Nov 15 09:51:24 ai ollama[27541]: [GIN] 2025/11/15 - 09:51:24 | 200 | 2.904959143s | 127.0.0.1 | POST "/api/generate"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:537 msg="context for request finished"
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 duration=1h0m0s
Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 refCount=0
Nov 15 09:52:35 ai systemd[1]: /etc/systemd/system/ollama.service.d/override.conf:9: Invalid syntax, ignoring: "OLLAMA_GPU_LAYERS=999""
Nov 15 09:52:35 ai systemd[1]: /etc/systemd/system/ollama.service.d/override.conf:9: Invalid syntax, ignoring: "OLLAMA_GPU_LAYERS=999""
Nov 15 09:52:35 ai systemd[1]: Stopping ollama.service - Ollama Service...
Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:269 msg="shutting down scheduler completed loop"
Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:844 msg="shutting down runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb
Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:136 msg="shutting down scheduler pending loop"
Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=server.go:1755 msg="stopping llama server" pid=27874
Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=server.go:1761 msg="waiting for llama server to exit" pid=27874
Nov 15 09:52:36 ai ollama[27541]: time=2025-11-15T09:52:36.270Z level=DEBUG source=server.go:1765 msg="llama server stopped" pid=27874
Nov 15 09:52:36 ai systemd[1]: ollama.service: Deactivated successfully.
Nov 15 09:52:36 ai systemd[1]: Stopped ollama.service - Ollama Service.
Nov 15 09:52:36 ai systemd[1]: ollama.service: Consumed 6.048s CPU time, 2.7G memory peak, 0B memory swap peak.
Nov 15 09:52:36 ai systemd[1]: Started ollama.service - Ollama Service.
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.305Z level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.311Z level=INFO source=images.go:522 msg="total blobs: 234"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.312Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10)"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=DEBUG source=sched.go:120 msg="starting llm scheduler"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=runner.go:67 msg="discovering available GPUs..."
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 13649"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=DEBUG source=runner.go:415 msg="bootstrap discovery took" duration=102.96082ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[]
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 31709"
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:415 msg="bootstrap discovery took" duration=99.668277ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[]
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:113 msg="evluating which if any devices to filter out" initial_count=2
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=202.830124ms
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=INFO source=types.go:42 msg="inference compute" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:01:00.0 type=discrete total="31.8 GiB" available="31.3 GiB"
Nov 15 09:52:36 ai ollama[28485]: [GIN] 2025/11/15 - 09:52:36 | 200 | 51.009µs | 127.0.0.1 | GET "/api/version"
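A note on the two systemd "Invalid syntax, ignoring" warnings above: the extra trailing double quote in the ignored value points at mismatched quoting on line 9 of /etc/systemd/system/ollama.service.d/override.conf. Assuming that line is meant to set the variable (OLLAMA_GPU_LAYERS=999 is the reporter's own setting, unrelated to the regression), the standard drop-in syntax would be:

[Service]
Environment="OLLAMA_GPU_LAYERS=999"

After editing, systemctl daemon-reload and systemctl restart ollama pick up the change.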

Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.381Z level=DEBUG source=sched.go:211 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=INFO source=server.go:209 msg="enabling flash attention" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb --port 11935" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.461Z level=DEBUG source=server.go:393 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=sched.go:443 msg="system memory" total="46.8 GiB" free="43.0 GiB" free_swap="8.0 GiB" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA available="30.9 GiB" free="31.3 GiB" minimum="457.0 MiB" overhead="0 B" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.462Z level=INFO source=server.go:702 msg="loading model" "model layers"=25 requested=-1 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.467Z level=INFO source=runner.go:1398 msg="starting ollama engine" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.468Z level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:11935" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.473Z level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.506Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.name default="" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.description default="" Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=INFO source=ggml.go:136 msg="" architecture=gptoss file_type=MXFP4 name="" 
description="" num_tensors=459 num_key_values=32 Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.507Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama Nov 15 09:51:22 ai ollama[27541]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.509Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/local/lib/ollama/cuda_v12 Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no Nov 15 09:51:22 ai ollama[27541]: ggml_cuda_init: found 1 CUDA devices: Nov 15 09:51:22 ai ollama[27541]: Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Nov 15 09:51:22 ai ollama[27541]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.642Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) Nov 15 09:51:22 ai ollama[27541]: time=2025-11-15T09:51:22.643Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.099Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.108Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=device.go:272 msg="total memory" size="15.6 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=server.go:727 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG 
source=server.go:921 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=DEBUG source=server.go:738 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.109Z level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.139Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=general.alignment default=32 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.140Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.683Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.711Z level=DEBUG source=ggml.go:853 msg="compute graph" nodes=1351 splits=2 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=device.go:272 msg="total memory" size="15.6 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:727 msg=memory success=true required.InputWeights=1158266880 required.CPU.Graph=11796480 required.CUDA0.ID=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 required.CUDA0.Weights="[477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 477628928 1158278400]" required.CUDA0.Cache="[5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 5571584 142607360 0]" required.CUDA0.Graph=1198788736 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:921 msg="available gpu" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 library=CUDA "available layer vram"="29.8 GiB" backoff=0.00 minimum="457.0 MiB" overhead="0 B" graph="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=DEBUG source=server.go:738 msg="new layout created" layers="25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)]" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO 
source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:1024 FlashAttention:true KvSize:131072 KvCacheType:q8_0 NumThreads:32 GPULayers:25[ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="11.8 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:245 msg="model weights" device=CPU size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.7 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="1.1 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="11.2 MiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=device.go:272 msg="total memory" size="15.6 GiB" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=sched.go:517 msg="loaded runners" count=1 Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.712Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model" Nov 15 09:51:23 ai ollama[27541]: time=2025-11-15T09:51:23.963Z level=DEBUG source=server.go:1338 msg="model load progress 0.27" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.214Z level=DEBUG source=server.go:1338 msg="model load progress 0.55" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.465Z level=DEBUG source=server.go:1338 msg="model load progress 0.83" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.697Z level=DEBUG source=ggml.go:276 msg="key with type not found" key=gptoss.pooling_type default=0 Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=INFO source=server.go:1332 msg="llama runner started in 2.25 seconds" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=DEBUG source=sched.go:529 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.715Z level=DEBUG source=server.go:1465 msg="completion request" images=0 prompt=306 format="" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.736Z level=DEBUG source=cache.go:142 msg="loading cache slot" id=0 cache=0 prompt=70 used=0 remaining=70 Nov 15 09:51:24 ai ollama[27541]: [GIN] 2025/11/15 - 09:51:24 | 200 | 
2.904959143s | 127.0.0.1 | POST "/api/generate" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:537 msg="context for request finished" Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:290 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 duration=1h0m0s Nov 15 09:51:24 ai ollama[27541]: time=2025-11-15T09:51:24.987Z level=DEBUG source=sched.go:308 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b-q4-128k runner.inference="[{ID:GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 Library:CUDA}]" runner.size="15.6 GiB" runner.vram="15.6 GiB" runner.parallel=1 runner.pid=27874 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb runner.num_ctx=131072 refCount=0 Nov 15 09:52:35 ai systemd[1]: /etc/systemd/system/ollama.service.d/override.conf:9: Invalid syntax, ignoring: "OLLAMA_GPU_LAYERS=999"" Nov 15 09:52:35 ai systemd[1]: /etc/systemd/system/ollama.service.d/override.conf:9: Invalid syntax, ignoring: "OLLAMA_GPU_LAYERS=999"" Nov 15 09:52:35 ai systemd[1]: Stopping ollama.service - Ollama Service... Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:269 msg="shutting down scheduler completed loop" Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:844 msg="shutting down runner" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7b273f9636059a689e3ddcab3716e4f65abe0143ac978e46673ad0e52d09efb Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=sched.go:136 msg="shutting down scheduler pending loop" Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=server.go:1755 msg="stopping llama server" pid=27874 Nov 15 09:52:35 ai ollama[27541]: time=2025-11-15T09:52:35.753Z level=DEBUG source=server.go:1761 msg="waiting for llama server to exit" pid=27874 Nov 15 09:52:36 ai ollama[27541]: time=2025-11-15T09:52:36.270Z level=DEBUG source=server.go:1765 msg="llama server stopped" pid=27874 Nov 15 09:52:36 ai systemd[1]: ollama.service: Deactivated successfully. Nov 15 09:52:36 ai systemd[1]: Stopped ollama.service - Ollama Service. Nov 15 09:52:36 ai systemd[1]: ollama.service: Consumed 6.048s CPU time, 2.7G memory peak, 0B memory swap peak. Nov 15 09:52:36 ai systemd[1]: Started ollama.service - Ollama Service. 
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.305Z level=INFO source=routes.go:1525 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.311Z level=INFO source=images.go:522 msg="total blobs: 234" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.312Z level=INFO source=images.go:529 msg="total unused blobs removed: 0" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=routes.go:1578 msg="Listening on [::]:11434 (version 0.12.10)" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=DEBUG source=sched.go:120 msg="starting llm scheduler" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=runner.go:67 msg="discovering available GPUs..." 
Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 13649" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.313Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12 Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=DEBUG source=runner.go:415 msg="bootstrap discovery took" duration=102.96082ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v12]" extra_envs=map[] Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 31709" Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.416Z level=DEBUG source=server.go:401 msg=subprocess PATH=/root/.opencode/bin:/usr/local/cuda-12.6/bin:/root/.nvm/versions/node/v22.18.0/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/.local/bin:/usr/local/go/bin:/root/go/bin OLLAMA_HOST=0.0.0.0:11434 OLLAMA_NUM_THREADS=32 OLLAMA_NUM_PARALLEL=1 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_KEEP_ALIVE=60m OLLAMA_DEBUG=1 OLLAMA_USE_MLOCK=1 OLLAMA_NEW_ENGINE=1 OLLAMA_GPU_OVERHEAD=0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 OLLAMA_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v13 Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:415 msg="bootstrap discovery took" duration=99.668277ms OLLAMA_LIBRARY_PATH="[/usr/local/lib/ollama /usr/local/lib/ollama/cuda_v13]" extra_envs=map[] Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:113 msg="evluating which if any devices to filter out" initial_count=2 Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=202.830124ms Nov 15 09:52:36 ai ollama[28485]: time=2025-11-15T09:52:36.516Z level=INFO source=types.go:42 msg="inference compute" id=GPU-2794afa5-4f22-8cf6-43b9-279a053d98e9 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:01:00.0 type=discrete total="31.8 GiB" available="31.3 GiB" Nov 15 09:52:36 ai ollama[28485]: [GIN] 2025/11/15 - 09:52:36 | 200 | 51.009µs | 127.0.0.1 | GET "/api/version"
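
One detail worth flagging in the tail of that log: systemd is discarding line 9 of /etc/systemd/system/ollama.service.d/override.conf because of a stray trailing quote ("OLLAMA_GPU_LAYERS=999""). In a systemd drop-in, the entire NAME=value assignment goes inside a single pair of quotes after Environment=. A minimal sketch of a well-formed drop-in follows; whether Ollama actually reads OLLAMA_GPU_LAYERS is a separate question the log does not answer:

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
# One pair of quotes around the whole assignment; the logged line had an extra trailing quote.
Environment="OLLAMA_GPU_LAYERS=999"

After editing, reload and restart so systemd re-reads the drop-in: sudo systemctl daemon-reload && sudo systemctl restart ollama.
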
@qihouji commented on GitHub (Nov 19, 2025):

Startup log:
time=2025-11-19T20:26:35.460+08:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434d OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:2 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:Q:\ollama_models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-11-19T20:26:35.477+08:00 level=INFO source=images.go:522 msg="total blobs: 21"
time=2025-11-19T20:26:35.478+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-19T20:26:35.479+08:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.12.11)"
time=2025-11-19T20:26:35.481+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-19T20:26:35.501+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\qihou\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 3846"
time=2025-11-19T20:26:35.733+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\qihou\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 3855"
time=2025-11-19T20:26:35.946+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\qihou\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 3864"
time=2025-11-19T20:26:36.065+08:00 level=INFO source=runner.go:98 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2025-11-19T20:26:36.065+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:0e:00.0 type=discrete total="31.8 GiB" available="29.6 GiB"
time=2025-11-19T20:26:46.393+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\qihou\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 3896"
time=2025-11-19T20:26:46.613+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-19T20:26:46.613+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=16 efficiency=0 threads=32
time=2025-11-19T20:26:46.669+08:00 level=INFO source=server.go:209 msg="enabling flash attention"
time=2025-11-19T20:26:46.671+08:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\Users\qihou\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model Q:\ollama_models\blobs\sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e --port 3905"
time=2025-11-19T20:26:46.676+08:00 level=INFO source=sched.go:443 msg="system memory" total="63.9 GiB" free="46.0 GiB" free_swap="89.8 GiB"
time=2025-11-19T20:26:46.676+08:00 level=INFO source=sched.go:450 msg="gpu memory" id=GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339 library=CUDA available="29.2 GiB" free="29.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-11-19T20:26:46.677+08:00 level=INFO source=server.go:702 msg="loading model" "model layers"=41 requested=-1
time=2025-11-19T20:26:46.719+08:00 level=INFO source=runner.go:1398 msg="starting ollama engine"
time=2025-11-19T20:26:46.740+08:00 level=INFO source=runner.go:1433 msg="Server listening on 127.0.0.1:3905"
time=2025-11-19T20:26:46.744+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:fit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:41[ID:GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T20:26:46.767+08:00 level=INFO source=ggml.go:136 msg="" architecture=qwen3 file_type=Q4_K_M name="Qwen3 14B" description="" num_tensors=443 num_key_values=28
load_backend: loaded CPU backend from C:\Users\qihou\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339
load_backend: loaded CUDA backend from C:\Users\qihou\AppData\Local\Programs\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
time=2025-11-19T20:26:46.880+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-19T20:26:47.415+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:alloc LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:41[ID:GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T20:26:47.568+08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:commit LoraPath:[] Parallel:2 BatchSize:512 FlashAttention:true KvSize:8192 KvCacheType: NumThreads:16 GPULayers:41[ID:GPU-8b00aeb2-445d-e6d4-fdfe-b7cb31fbf339 Layers:41(0..40)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-19T20:26:47.568+08:00 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="8.2 GiB"
time=2025-11-19T20:26:47.568+08:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="1.2 GiB"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="224.0 MiB"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="10.0 MiB"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=device.go:272 msg="total memory" size="10.1 GiB"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=sched.go:517 msg="loaded runners" count=1
time=2025-11-19T20:26:47.569+08:00 level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=ggml.go:482 msg="offloading 40 repeating layers to GPU"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=ggml.go:494 msg="offloaded 41/41 layers to GPU"
time=2025-11-19T20:26:47.569+08:00 level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"
time=2025-11-19T20:26:53.577+08:00 level=INFO source=server.go:1332 msg="llama runner started in 6.90 seconds"
[GIN] 2025/11/19 - 20:26:58 | 200 | 11.9365493s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/11/19 - 20:27:04 | 200 | 18.1134217s | 127.0.0.1 | POST "/api/chat"
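
To make the utilization comparison reproducible across versions, it helps to sample the GPU while a generation is actually running rather than eyeballing Task Manager. A minimal sketch using nvidia-smi, which ships with the NVIDIA driver on both Windows and Linux; the model tag below is illustrative, not taken from this log:

# Terminal 1: sample GPU utilization and VRAM once per second during the test
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv -l 1

# Terminal 2: drive a generation and capture token rates for the before/after comparison
ollama run qwen3:14b --verbose "write a short story about a robot"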

@rick-github commented on GitHub (Nov 19, 2025):

Perhaps https://github.com/ollama/ollama/issues/13112; should be fixed in 0.13.0 (currently in pre-release).
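
For anyone who wants to verify before the release lands: per the Ollama docs, the Linux install script accepts a version override via OLLAMA_VERSION; whether it accepts pre-release tags like the rc above is an assumption here. On Windows, the pre-release installer can be downloaded directly from the GitHub releases page.

# Sketch: pin a specific Ollama version on Linux (rc tag assumed to be accepted by the script)
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.0-rc0 sh
ollama --version   # confirm the reported version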

@qihouji commented on GitHub (Nov 21, 2025):

fixed in 0.13.0, thanks

Reference: github-starred/ollama#55172