[GH-ISSUE #11723] ollama 0.11 not using GPU despite detecting them and finding the libraries #69820

Closed
opened 2026-05-04 19:28:44 -05:00 by GiteaMirror · 11 comments

Originally created by @timbmg on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11723

What is the issue?

I just upgraded from 0.9.6 to 0.11.3 (and also tried 0.11.2). Since the upgrade, none of the models can use the GPU (tested with gpt-oss:20b and gemma3:12b-it-fp16). Oddly, the logs show that Ollama detects the GPUs and finds the CUDA libraries, yet it still does not use them:

time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"

I also tried using just one GPU; the same issue remains.
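
One quick way to see this in the server log (assuming the output was saved to a file, here called ollama.log) is to check which ggml backends the runner loads:

grep load_backend ollama.log
# Seeing only a CPU line like the one below means no CUDA backend was loaded:
#   load_backend: loaded CPU backend from /mnt/beegfs/work/timbmg/ollama/lib/ollama/libggml-cpu-haswell.so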

Relevant log output

Wed Aug  6 09:31:52 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-SXM4-80GB          On  | 00000000:87:00.0 Off |                    0 |
| N/A   36C    P0              67W / 400W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A100-SXM4-80GB          On  | 00000000:90:00.0 Off |                    0 |
| N/A   47C    P0              72W / 400W |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
time=2025-08-06T09:31:52.490+02:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL:0,1 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:10m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/lab-storage-1/timbmg/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1 http_proxy: https_proxy: no_proxy:]"
time=2025-08-06T09:31:52.568+02:00 level=INFO source=images.go:477 msg="total blobs: 51"
time=2025-08-06T09:31:52.610+02:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-06T09:31:52.629+02:00 level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.11.3)"
time=2025-08-06T09:31:52.632+02:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-08-06T09:31:52.636+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-06T09:31:52.657+02:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-08-06T09:31:52.660+02:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-08-06T09:31:52.661+02:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/mnt/beegfs/work/timbmg/ollama/lib/ollama/libcuda.so* /lab-storage-1/timbmg/ollama-server-slurm/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-08-06T09:31:52.666+02:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib64/libcuda.so.535.129.03]
initializing /usr/lib64/libcuda.so.535.129.03
dlsym: cuInit - 0x7f974a656660
dlsym: cuDriverGetVersion - 0x7f974a656680
dlsym: cuDeviceGetCount - 0x7f974a6566c0
dlsym: cuDeviceGet - 0x7f974a6566a0
dlsym: cuDeviceGetAttribute - 0x7f974a6567a0
dlsym: cuDeviceGetUuid - 0x7f974a656700
dlsym: cuDeviceGetName - 0x7f974a6566e0
dlsym: cuCtxCreate_v3 - 0x7f974a65e360
dlsym: cuMemGetInfo_v2 - 0x7f974a669850
dlsym: cuCtxDestroy - 0x7f974a6b8940
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 2
time=2025-08-06T09:31:53.549+02:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=2 library=/usr/lib64/libcuda.so.535.129.03
[GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f] CUDA totalMem 81050mb
[GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f] CUDA freeMem 80623mb
[GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f] Compute Capability 8.0
[GPU-c4881392-6d2b-de2e-82ef-835f242bd71c] CUDA totalMem 81050mb
[GPU-c4881392-6d2b-de2e-82ef-835f242bd71c] CUDA freeMem 80623mb
[GPU-c4881392-6d2b-de2e-82ef-835f242bd71c] Compute Capability 8.0
time=2025-08-06T09:31:54.406+02:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-08-06T09:31:54.410+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A100-SXM4-80GB" total="79.2 GiB" available="78.7 GiB"
time=2025-08-06T09:31:54.411+02:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-c4881392-6d2b-de2e-82ef-835f242bd71c library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A100-SXM4-80GB" total="79.2 GiB" available="78.7 GiB"
[GIN] 2025/08/06 - 09:33:10 | 200 |    2.360317ms |    10.167.11.14 | HEAD     "/"
time=2025-08-06T09:33:10.953+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/08/06 - 09:33:10 | 200 |  267.061651ms |    10.167.11.14 | POST     "/api/show"
time=2025-08-06T09:33:11.543+02:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="2015.7 GiB" before.free="1928.5 GiB" before.free_swap="0 B" now.total="2015.7 GiB" now.free="1928.9 GiB" now.free_swap="0 B"
initializing /usr/lib64/libcuda.so.535.129.03
dlsym: cuInit - 0x7f974a656660
dlsym: cuDriverGetVersion - 0x7f974a656680
dlsym: cuDeviceGetCount - 0x7f974a6566c0
dlsym: cuDeviceGet - 0x7f974a6566a0
dlsym: cuDeviceGetAttribute - 0x7f974a6567a0
dlsym: cuDeviceGetUuid - 0x7f974a656700
dlsym: cuDeviceGetName - 0x7f974a6566e0
dlsym: cuCtxCreate_v3 - 0x7f974a65e360
dlsym: cuMemGetInfo_v2 - 0x7f974a669850
dlsym: cuCtxDestroy - 0x7f974a6b8940
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 2
time=2025-08-06T09:33:12.014+02:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f name="NVIDIA A100-SXM4-80GB" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="427.4 MiB"
time=2025-08-06T09:33:12.422+02:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-c4881392-6d2b-de2e-82ef-835f242bd71c name="NVIDIA A100-SXM4-80GB" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="427.4 MiB"
releasing cuda driver library
time=2025-08-06T09:33:12.492+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T09:33:12.616+02:00 level=DEBUG source=sched.go:226 msg="loading first model" model=/lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
time=2025-08-06T09:33:12.618+02:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2025-08-06T09:33:12.620+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-08-06T09:33:12.623+02:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 gpu=GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f parallel=1 available=84539604992 required="14.9 GiB"
time=2025-08-06T09:33:12.624+02:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="2015.7 GiB" before.free="1928.9 GiB" before.free_swap="0 B" now.total="2015.7 GiB" now.free="1928.3 GiB" now.free_swap="0 B"
initializing /usr/lib64/libcuda.so.535.129.03
dlsym: cuInit - 0x7f974a656660
dlsym: cuDriverGetVersion - 0x7f974a656680
dlsym: cuDeviceGetCount - 0x7f974a6566c0
dlsym: cuDeviceGet - 0x7f974a6566a0
dlsym: cuDeviceGetAttribute - 0x7f974a6567a0
dlsym: cuDeviceGetUuid - 0x7f974a656700
dlsym: cuDeviceGetName - 0x7f974a6566e0
dlsym: cuCtxCreate_v3 - 0x7f974a65e360
dlsym: cuMemGetInfo_v2 - 0x7f974a669850
dlsym: cuCtxDestroy - 0x7f974a6b8940
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 2
time=2025-08-06T09:33:13.049+02:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f name="NVIDIA A100-SXM4-80GB" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="427.4 MiB"
time=2025-08-06T09:33:13.526+02:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-c4881392-6d2b-de2e-82ef-835f242bd71c name="NVIDIA A100-SXM4-80GB" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="427.4 MiB"
releasing cuda driver library
time=2025-08-06T09:33:13.529+02:00 level=INFO source=server.go:135 msg="system memory" total="2015.7 GiB" free="1928.3 GiB" free_swap="0 B"
time=2025-08-06T09:33:13.531+02:00 level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
time=2025-08-06T09:33:13.533+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-08-06T09:33:13.534+02:00 level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="14.9 GiB" memory.required.partial="14.9 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[14.9 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.0 GiB" memory.graph.partial="2.0 GiB"
time=2025-08-06T09:33:13.539+02:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]
time=2025-08-06T09:33:13.645+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T09:33:13.647+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-06T09:33:13.652+02:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/mnt/beegfs/work/timbmg/ollama/bin/ollama runner --ollama-engine --model /lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 8192 --batch-size 512 --n-gpu-layers 25 --threads 128 --parallel 1 --port 35503"
time=2025-08-06T09:33:13.654+02:00 level=DEBUG source=server.go:439 msg=subprocess OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_LOAD_TIMEOUT=10m OLLAMA_DEBUG=1 OLLAMA_HOST=0.0.0.0:11434 ROCR_VISIBLE_DEVICES=0,1 CUDA_VISIBLE_DEVICES=GPU-d988422a-a4db-43bb-7c8c-d06cd5df7d1f PATH=/storage/lab/work/timbmg/.local/share/../bin:/lab-storage-1/timbmg/miniconda/bin:/lab-storage-1/timbmg/miniconda/condabin:/home/timbmg/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/puppetlabs/bin:/lab-storage-1/timbmg/ollama/bin:/lab-storage-1/timbmg/.local/share/kitty-ssh-kitten/kitty/bin GPU_DEVICE_ORDINAL=0,1 OLLAMA_LIBRARY_PATH=/mnt/beegfs/work/timbmg/ollama/lib/ollama LD_LIBRARY_PATH=/mnt/beegfs/work/timbmg/ollama/lib/ollama:/mnt/beegfs/work/timbmg/ollama/lib/ollama
time=2025-08-06T09:33:13.686+02:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-06T09:33:13.706+02:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-06T09:33:13.764+02:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-06T09:33:13.813+02:00 level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-08-06T09:33:13.813+02:00 level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:35503"
time=2025-08-06T09:33:13.920+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T09:33:13.922+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.name default=""
time=2025-08-06T09:33:13.922+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.description default=""
time=2025-08-06T09:33:13.923+02:00 level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
time=2025-08-06T09:33:13.923+02:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/mnt/beegfs/work/timbmg/ollama/lib/ollama
time=2025-08-06T09:33:14.025+02:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
load_backend: loaded CPU backend from /mnt/beegfs/work/timbmg/ollama/lib/ollama/libggml-cpu-haswell.so
time=2025-08-06T09:33:14.063+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:367 msg="offloading 0 repeating layers to GPU"
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:378 msg="offloaded 0/25 layers to GPU"
time=2025-08-06T09:33:14.064+02:00 level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="12.8 GiB"
time=2025-08-06T09:33:14.065+02:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-06T09:33:14.178+02:00 level=DEBUG source=ggml.go:654 msg="compute graph" nodes=1847 splits=1
time=2025-08-06T09:33:14.178+02:00 level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="2.0 GiB"
time=2025-08-06T09:33:14.178+02:00 level=DEBUG source=runner.go:883 msg=memory allocated.InputWeights=1158266880A allocated.CPU.Weights="[477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 477075584A 1158278400A]" allocated.CPU.Cache="[9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 0U]" allocated.CPU.Graph=2178285568A
time=2025-08-06T09:33:14.277+02:00 level=DEBUG source=server.go:643 msg="model load progress 0.00"
...
time=2025-08-06T09:33:39.541+02:00 level=DEBUG source=server.go:643 msg="model load progress 1.00"
time=2025-08-06T09:33:39.794+02:00 level=INFO source=server.go:637 msg="llama runner started in 26.10 seconds"
time=2025-08-06T09:33:39.795+02:00 level=DEBUG source=sched.go:493 msg="finished setting up" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="14.9 GiB" runner.vram="14.9 GiB" runner.parallel=1 runner.pid=2068391 runner.model=/lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192
[GIN] 2025/08/06 - 09:33:39 | 200 | 28.442849614s |    10.167.11.14 | POST     "/api/generate"
time=2025-08-06T09:33:39.800+02:00 level=DEBUG source=sched.go:501 msg="context for request finished"
time=2025-08-06T09:33:39.801+02:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="14.9 GiB" runner.vram="14.9 GiB" runner.parallel=1 runner.pid=2068391 runner.model=/lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 duration=5m0s
time=2025-08-06T09:33:39.803+02:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:20b runner.inference=cuda runner.devices=1 runner.size="14.9 GiB" runner.vram="14.9 GiB" runner.parallel=1 runner.pid=2068391 runner.model=/lab-storage-1/timbmg/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 refCount=0

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.11.3

GiteaMirror added the bug label 2026-05-04 19:28:44 -05:00

@rick-github commented on GitHub (Aug 6, 2025):

load_backend: loaded CPU backend from /mnt/beegfs/work/timbmg/ollama/lib/ollama/libggml-cpu-haswell.so

No GPU backends found. How did you install ollama?
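
A quick sanity check (a sketch assuming the standard tarball layout, where the GPU backends sit next to the CPU ones) is to list the runtime library directory:

ls /mnt/beegfs/work/timbmg/ollama/lib/ollama
# A CUDA-capable install should include CUDA backend libraries (for example
# libggml-cuda*.so, possibly under a cuda_v12 subdirectory) alongside the
# libggml-cpu-*.so files; if only CPU backends are listed, the GPU part of
# the archive was never extracted or copied.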


@timbmg commented on GitHub (Aug 6, 2025):

I installed it manually like this:

curl -LO https://ollama.com/download/ollama-linux-amd64.tgz
tar -C ./ollama -xzf ollama-linux-amd64.tgz

Actually, to get 0.11.3 I used https://github.com/ollama/ollama/releases/download/v0.11.3/ollama-linux-amd64.tgz


@rick-github commented on GitHub (Aug 6, 2025):

The runner is being started as /mnt/beegfs/work/timbmg/ollama/bin/ollama, not /usr/bin/ollama. So it's likely the bin directory was moved/copied but not the entirety of the lib directory.
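
One way to verify: the runner looks for its backends in the lib/ollama directory next to the binary (OLLAMA_LIBRARY_PATH in the subprocess log above), so bin/ and lib/ have to move together. A minimal check using the paths from this report:

# The binary the server launches the runner from:
readlink -f /mnt/beegfs/work/timbmg/ollama/bin/ollama
# The backend directory that runner will search:
ls /mnt/beegfs/work/timbmg/ollama/lib/ollama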


@timbmg commented on GitHub (Aug 6, 2025):

Yeah, I just fixed the command above to show what I actually ran (i.e. I installed into ./ollama, not /usr).

I deleted the ./ollama dir before installing and extracted the release into ./ollama again, so I don't think that's the issue. I actually can't install into /usr since I don't have rights there.


@rick-github commented on GitHub (Aug 6, 2025):

Is this a slurm job? Unset ROCR_VISIBLE_DEVICES in the start script.
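
Slurm's GPU support typically exports CUDA_VISIBLE_DEVICES, ROCR_VISIBLE_DEVICES and GPU_DEVICE_ORDINAL for each job, which matches the server config logged above. A minimal job-script sketch with the ROCm variable cleared (the sbatch directives here are illustrative):

#!/bin/bash
#SBATCH --gres=gpu:2
# Slurm sets ROCR_VISIBLE_DEVICES even on NVIDIA nodes; remove it before
# starting the server so only the CUDA-side variables remain.
unset ROCR_VISIBLE_DEVICES
OLLAMA_HOST=0.0.0.0:11434 ollama serve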


@timbmg commented on GitHub (Aug 6, 2025):

Yes it's on Slurm.

Thanks, I tried that but the issue remains. I started ollama like this:
nvidia-smi & ROCR_VISIBLE_DEVICES="" OLLAMA_DEBUG=1 OLLAMA_LOAD_TIMEOUT=\"10m\" OLLAMA_HOST=0.0.0.0:${PORT} OLLAMA_MAX_LOADED_MODELS=1 ollama serve

which gave the log:
time=2025-08-06T11:10:00.461+02:00 level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL:0 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:10m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/lab-storage-1/timbmg/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"


@rick-github commented on GitHub (Aug 6, 2025):

unset ROCR_VISIBLE_DEVICES
OLLAMA_DEBUG=1 OLLAMA_LOAD_TIMEOUT=10m OLLAMA_HOST=0.0.0.0:${PORT} OLLAMA_MAX_LOADED_MODELS=1 ollama serve
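
The difference matters: an inline ROCR_VISIBLE_DEVICES="" still exports the variable, just with an empty value (visible as ROCR_VISIBLE_DEVICES: in the server config you posted), whereas unset removes it from the environment entirely, which appears to be what the library-selection logic keys off. A quick way to see it:

ROCR_VISIBLE_DEVICES="" env | grep ROCR   # prints ROCR_VISIBLE_DEVICES= (still set)
unset ROCR_VISIBLE_DEVICES
env | grep ROCR                           # prints nothing once the variable is gone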

@timbmg commented on GitHub (Aug 6, 2025):

Amazing, that worked! Thank you so much for your help 🤗

Any idea why I only needed to unset that after upgrading to 0.11?


@rick-github commented on GitHub (Aug 6, 2025):

https://github.com/ollama/ollama/pull/11169, introduced in 0.9.3, added a check for mixed libraries. It's unclear why it didn't cause an issue for you earlier.
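
Given that check, inspecting the scheduler-inherited visibility variables before launch can help; in this thread unsetting ROCR_VISIBLE_DEVICES alone was enough, though HIP_VISIBLE_DEVICES and GPU_DEVICE_ORDINAL are also ROCm-side variables Slurm may set:

# Show any GPU visibility variables inherited from the job environment:
env | grep -E '^(CUDA|ROCR|HIP)_VISIBLE_DEVICES=|^GPU_DEVICE_ORDINAL='
# On an NVIDIA-only node, clear the ROCm-side ones before running ollama serve:
unset ROCR_VISIBLE_DEVICES HIP_VISIBLE_DEVICES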


@Anxo06 commented on GitHub (Aug 12, 2025):

Solved the same issue for me! Thanks


@vuminhtue commented on GitHub (Sep 9, 2025):

Worked for me too. Thanks a lot @rick-github
