[GH-ISSUE #15148] 'ollama serve' should report what GPUs if any it found #71758

Closed
opened 2026-05-05 02:27:24 -05:00 by GiteaMirror · 6 comments

Originally created by @yurivict on GitHub (Mar 30, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15148

'ollama serve' doesn't report the list of GPUs it is going to use:

```
$ grep GPU ollama-serve.log
time=2026-03-30T12:32:42.143-07:00 level=INFO source=routes.go:1742 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:65536 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/yuri/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2026-03-30T12:32:42.186-07:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-30T12:32:42.201-07:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=14.817587ms
time=2026-03-30T12:32:44.599-07:00 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:16384 KvCacheType: NumThreads:16 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
```

It says "discovering available GPUs..." but it never says: "discovered no GPUs", or "discovered these GPUs: Vulkan/GPU0 (NVIDIA ....)"

It should state definitively, in human-readable English, which GPUs, if any, it is going to use.

GiteaMirror added the feature request label 2026-05-05 02:27:24 -05:00

@rick-github commented on GitHub (Mar 30, 2026):

```
$ docker compose logs ollama | grep inference.compute
ollama  | time=2026-03-05T12:18:25.601Z level=INFO source=types.go:42 msg="inference compute" id=GPU-b5d7e56c-4491-8eeb-cb2d-e8d8424e5bb7 filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4070" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="12.0 GiB" available="11.6 GiB"
```
<!-- gh-comment-id:4157768912 --> @rick-github commented on GitHub (Mar 30, 2026): ``` $ docker compose logs ollama | grep inference.compute ollama | time=2026-03-05T12:18:25.601Z level=INFO source=types.go:42 msg="inference compute" id=GPU-b5d7e56c-4491-8eeb-cb2d-e8d8424e5bb7 filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4070" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="12.0 GiB" available="11.6 GiB" ```

@yurivict commented on GitHub (Mar 30, 2026):

In my situation ollama probably failed to find any GPUs for some reason, but it never said so explicitly.

msg="inference compute" id=GPU-b5d... is very cryptic. It should list them in English.

<!-- gh-comment-id:4157795175 --> @yurivict commented on GitHub (Mar 30, 2026): In my situation ollama probably failed to find any GPUs for some reason, but it never said so explicitly. ```msg="inference compute" id=GPU-b5d...``` is very cryptic. It should list them in English.

@rick-github commented on GitHub (Mar 30, 2026):

There may be no GPUs; `inference compute` shows what will be used for inference computation.

```
$ docker compose logs ollama | grep inference.compute
ollama  | time=2026-03-28T21:11:15.839Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="125.2 GiB" available="125.2 GiB"
```
<!-- gh-comment-id:4157796927 --> @rick-github commented on GitHub (Mar 30, 2026): There may be no GPUs, `inference compute` shows what will be used for inference computation. ``` $ docker compose logs ollama | grep inference.compute ollama | time=2026-03-28T21:11:15.839Z level=INFO source=types.go:60 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="125.2 GiB" available="125.2 GiB" ```

@yurivict commented on GitHub (Mar 30, 2026):

It should print an unambiguous English explanation in addition to the cryptic log entry like `msg="inference compute" id=cpu library`.

<!-- gh-comment-id:4158199282 --> @yurivict commented on GitHub (Mar 30, 2026): It should print an unambiguous English explanation in addition to the cryptic log entry like ```msg="inference compute" id=cpu library```.

@rick-github commented on GitHub (Mar 30, 2026):

If less context is preferred, a simple script can be used to process the line:

```
$ what_gpus(){ docker logs ollama 2>&1 | grep inference.compute | sed -ne 's/.* description=\(.*\) libdirs.*/\1/p';}
$ what_gpus
cpu
```
<!-- gh-comment-id:4158249444 --> @rick-github commented on GitHub (Mar 30, 2026): If less context is preferred, a simple script can be used to process the line: ``` $ what_gpus(){ docker logs ollama 2>&1 | grep inference.compute | sed -ne 's/.* description=\(.*\) libdirs.*/\1/p';} $ what_gpus cpu ```

@rick-github commented on GitHub (Mar 30, 2026):

#7262

<!-- gh-comment-id:4158283690 --> @rick-github commented on GitHub (Mar 30, 2026): #7262