[GH-ISSUE #14110] 0.15.5-rc0 GPU detection regression #34966

Closed
opened 2026-04-22 19:03:13 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @Vyryn on GitHub (Feb 6, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14110

What is the issue?

Hello. I am running the 0.15.5-rc0 prerelease on a Fedora 43 desktop (no Docker). Ollama uses my CPU instead of my GPU (`ollama ps` shows 100% CPU). This persists when I manually set CUDA_VISIBLE_DEVICES=[guid], OLLAMA_LLM_LIBRARY=cuda_v11, and/or enable Vulkan.

When I downgrade to 0.15.4 release and make no other changes, my GPU is successfully detected and used.
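For anyone reproducing this, the quickest tell is the `inference compute` line in the server log: `library=cpu` means no GPU was discovered. A minimal sketch of pulling that field out of a captured log line (field layout as in the logs below):

```shell
# Extract the discovered backend from a saved "inference compute" log line.
# A healthy install reports CUDA/ROCm/etc. here; "cpu" means discovery failed.
line='level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu name=cpu'
backend=$(printf '%s\n' "$line" | sed -n 's/.*library=\([^ ]*\).*/\1/p')
echo "$backend"
```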

Relevant log output

> nvidia-smi

Thu Feb  5 21:38:04 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.119.02             Driver Version: 580.119.02     CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070 Ti     Off |   00000000:01:00.0  On |                  N/A |
|  0%   47C    P3             48W /  310W |    1260MiB /   8192MiB |     15%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

> nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 3070 Ti (UUID: GPU-XXXXXXXX)


When I launch with none of the troubleshooting env variables set:

Feb 05 21:29:22 fedora systemd[1]: Started ollama.service - Ollama Service.
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.215-05:00 level=INFO source=routes.go:1622 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GF>
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.216-05:00 level=INFO source=images.go:473 msg="total blobs: 8"
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.216-05:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.216-05:00 level=INFO source=routes.go:1675 msg="Listening on 127.0.0.1:11434 (version 0.15.5-rc0)"
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.216-05:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.222-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.222-05:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35495"
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.222-05:00 level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/bin OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/bin OLLAMA_LIBRARY_PATH=/usr/bin
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.252-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=30.225075ms OLLAMA_LIBRARY_PATH=[/usr/bin] extra_envs=map[]
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.252-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.252-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=35.801358ms
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.252-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.1 G>
Feb 05 21:29:22 fedora ollama[23390]: time=2026-02-05T21:29:22.252-05:00 level=INFO source=routes.go:1725 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096

When I launch with CUDA_VISIBLE_DEVICES set (also tried with the GPU ordinal, 0):

Feb 05 21:26:36 fedora systemd[1]: Started ollama.service - Ollama Service.
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.247-05:00 level=INFO source=routes.go:1622 msg="server config" env="map[CUDA_VISIBLE_DEVICES:GPU-XXXXXXXX GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDIN>
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.248-05:00 level=INFO source=images.go:473 msg="total blobs: 8"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.248-05:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.248-05:00 level=INFO source=routes.go:1675 msg="Listening on 127.0.0.1:11434 (version 0.15.5-rc0)"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.248-05:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.255-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.255-05:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=GPU-XXXXXXXX
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.255-05:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.256-05:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39353"
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.256-05:00 level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/bin OLLAMA_DEBUG=1 CUDA_VISIBLE_DEVICES=GPU-XXXXXXXX LD_LIBRARY_PATH=/usr/bin O>
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.286-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=30.68834ms OLLAMA_LIBRARY_PATH=[/usr/bin] extra_envs=map[]
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.286-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.286-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=37.715295ms
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.286-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.1 G>
Feb 05 21:26:36 fedora ollama[22570]: time=2026-02-05T21:26:36.286-05:00 level=INFO source=routes.go:1725 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096


When I launch with OLLAMA_LLM_LIBRARY=cuda_v11:

Feb 05 21:34:41 fedora systemd[1]: Started ollama.service - Ollama Service.
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.443-05:00 level=INFO source=routes.go:1622 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GF>
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.444-05:00 level=INFO source=images.go:473 msg="total blobs: 8"
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.445-05:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.445-05:00 level=INFO source=routes.go:1675 msg="Listening on 127.0.0.1:11434 (version 0.15.5-rc0)"
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.445-05:00 level=DEBUG source=sched.go:121 msg="starting llm scheduler"
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.451-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.452-05:00 level=INFO source=server.go:430 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38775"
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.452-05:00 level=DEBUG source=server.go:431 msg=subprocess PATH=/usr/bin OLLAMA_DEBUG=1 OLLAMA_LLM_LIBRARY=cuda_v11 LD_LIBRARY_PATH=/usr/bin OLLAMA_LIBRARY_PATH=/usr/bin
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.481-05:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=30.148575ms OLLAMA_LIBRARY_PATH=[/usr/bin] extra_envs=map[]
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.481-05:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=0
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.481-05:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=36.495709ms
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.482-05:00 level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="31.1 G>
Feb 05 21:34:41 fedora ollama[24671]: time=2026-02-05T21:34:41.482-05:00 level=INFO source=routes.go:1725 msg="vram-based default context" total_vram="0 B" default_num_ctx=4096


I also tried CUDA_VISIBLE_DEVICES and OLLAMA_LLM_LIBRARY in combination, and Vulkan as well.

ollama.service contents:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/bin"
Environment="HOME=/usr/share/ollama"
Environment="OLLAMA_DEBUG=1"
# Environment="CUDA_VISIBLE_DEVICES=GPU-XXXXXXXX"
# Environment="OLLAMA_LLM_LIBRARY=cuda_v11"
# Environment="OLLAMA_VULKAN=1"


[Install]
WantedBy=default.target
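Rather than editing the packaged unit in place, the usual systemd workflow is a drop-in override (`sudo systemctl edit ollama`, then `daemon-reload` and `restart`). A sketch of the override contents, assuming the same placeholder GPU UUID as above:

```shell
# Compose a drop-in body; `systemctl edit ollama` writes it to
# /etc/systemd/system/ollama.service.d/override.conf for you.
override=$(cat <<'EOF'
[Service]
Environment="OLLAMA_DEBUG=2"
Environment="CUDA_VISIBLE_DEVICES=GPU-XXXXXXXX"
EOF
)
printf '%s\n' "$override"
```

Follow with `sudo systemctl daemon-reload && sudo systemctl restart ollama`, then watch `journalctl -u ollama -f`.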


Downgrading to the ollama 0.15.4 release and re-running, with no other changes, causes my GPU to be detected and used successfully:

Feb 05 21:48:26 fedora systemd[1]: Started ollama.service - Ollama Service.
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.098-05:00 level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GF>
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.099-05:00 level=INFO source=images.go:473 msg="total blobs: 8"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.099-05:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.099-05:00 level=INFO source=routes.go:1684 msg="Listening on 127.0.0.1:11434 (version 0.15.4)"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.099-05:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.099-05:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.100-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 38443"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.212-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44527"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.309-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33985"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.309-05:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43453"
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.429-05:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-XXXXXXXX filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDI>
Feb 05 21:48:26 fedora ollama[27889]: time=2026-02-05T21:48:26.429-05:00 level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.15.5-rc0

GiteaMirror added the bug label 2026-04-22 19:03:13 -05:00

@MichalRIcar commented on GitHub (Feb 6, 2026):

+1 for an AMD card with the latest ROCm drivers, running on the just-released 0.15.5: the exact same behavior as above.


@rick-github commented on GitHub (Feb 6, 2026):

Set OLLAMA_DEBUG=2 and post the log from the start through to the line that contains inference compute.
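One way to capture exactly that span after setting `OLLAMA_DEBUG=2` and restarting: dump the journal (`journalctl -u ollama --no-pager > full.log` is an assumption, adjust to your setup) and trim it at the first `inference compute` line with `sed '/pattern/q'`, which prints everything through the first match and then quits. A self-contained sketch on stand-in log lines:

```shell
# Keep everything from startup through the first "inference compute" line
# (inclusive); anything after that point is dropped.
trimmed=$(printf '%s\n' \
  'msg="starting llm scheduler"' \
  'msg="inference compute" id=cpu' \
  'trailing noise' \
  | sed '/inference compute/q')
printf '%s\n' "$trimmed"
```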


@MichalRIcar commented on GitHub (Feb 6, 2026):

**ollama 0.15.4 - speed 30t/s >>>**
Feb 06 11:59:19  ollama[219345]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.179+01:00 level=INFO source=runner.go:1278 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:32 G>
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=runner.go:1278 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:32 >
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=ggml.go:482 msg="offloading 64 repeating layers to GPU"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=ggml.go:494 msg="offloaded 65/65 layers to GPU"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="19.1 GiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="417.3 MiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="2.0 GiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="1000.6 MiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="79.1 MiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=device.go:272 msg="total memory" size="22.5 GiB"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=sched.go:526 msg="loaded runners" count=1
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
Feb 06 11:59:20  ollama[219345]: time=2026-02-06T11:59:20.605+01:00 level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
Feb 06 11:59:23  ollama[219345]: time=2026-02-06T11:59:23.119+01:00 level=INFO source=server.go:1385 msg="llama runner started in 4.65 seconds"

**ollama 0.15.5 - speed 8t/s >>>**
time=2026-02-06T11:36:10.103+01:00 level=INFO source=runner.go:1283 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:32 GP>
time=2026-02-06T11:36:10.433+01:00 level=INFO source=runner.go:1283 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:32 >
time=2026-02-06T11:36:11.376+01:00 level=INFO source=runner.go:1283 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32768 KvCacheType: NumThreads:32>
time=2026-02-06T11:36:11.376+01:00 level=INFO source=ggml.go:482 msg="offloading 55 repeating layers to GPU"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=ggml.go:494 msg="offloaded 55/65 layers to GPU"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:240 msg="model weights" device=ROCm0 size="14.8 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="4.6 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:251 msg="kv cache" device=ROCm0 size="6.9 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="1.1 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:262 msg="compute graph" device=ROCm0 size="1.2 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:267 msg="compute graph" device=CPU size="142.4 MiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=device.go:272 msg="total memory" size="28.8 GiB"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=sched.go:537 msg="loaded runners" count=1
time=2026-02-06T11:36:11.376+01:00 level=INFO source=server.go:1349 msg="waiting for llama runner to start responding"
time=2026-02-06T11:36:11.376+01:00 level=INFO source=server.go:1383 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-06T11:36:13.637+01:00 level=INFO source=server.go:1387 msg="llama runner started in 5.34 seconds"



@rick-github commented on GitHub (Feb 6, 2026):

@MichalRIcar Your problem is unrelated; it is this one: #14116.


@Vyryn commented on GitHub (Feb 6, 2026):

This issue is not present for me on the 0.15.5 release version, so it's probably not worth investigating further.


Reference: github-starred/ollama#34966