[GH-ISSUE #13814] Ollama stopped using the GPU #71109

Open
opened 2026-05-05 00:23:09 -05:00 by GiteaMirror · 2 comments

Originally created by @CL415 on GitHub (Jan 21, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13814

What is the issue?

I have an Ollama Docker instance running in a MIG deployment of an RTX PRO 6000 Workstation, with 48 GB of VRAM available, providing inference for a Mistral Small 3.1 model in a Q8 quant with only 32k context. Until mid-December it ran very quickly, but its performance suddenly became very slow, even on short prompts such as "hello". I am not aware of any changes to the workstation, so I assume the bug comes from the latest Ollama updates, as the container re-pulls the latest image on each restart.
I see no GPU usage by any process when running nvidia-smi during inference; it seems Ollama recognizes the GPU but silently uses the CPU instead, yet ollama ps claims that the model is fully in the GPU.
I see the CPU backend getting loaded before the CUDA devices are detected, but I do not know whether that means anything. Still, the logs say in subsequent messages that the model loads on the GPU...

Searching for solutions, I have already tried turning off flash attention and setting GGML_CUDA_NO_VMM=1, OLLAMA_MAX_LOADED_MODELS=1, OLLAMA_NUM_PARALLEL=1, and ROCR_VISIBLE_DEVICES=0. None of these worked. Does anybody have any idea what may be happening? Thank you!
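
For context, those variables are set on the container itself; a minimal sketch of the kind of invocation involved (the tag, paths, and values here are illustrative, not my exact deployment):

```shell
# Illustrative only -- not the exact deployment. Pinning a release tag
# instead of :latest stops the container from silently picking up new builds.
docker run -d --name ollama --gpus all \
  -p 11434:11434 \
  -e OLLAMA_MODELS=/models_map \
  -e OLLAMA_FLASH_ATTENTION=0 \
  -e GGML_CUDA_NO_VMM=1 \
  -e OLLAMA_MAX_LOADED_MODELS=1 \
  -e OLLAMA_NUM_PARALLEL=1 \
  -v /path/to/models:/models_map \
  ollama/ollama:0.13.1
```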

EDIT: I downgraded Ollama to version 0.13.1 and got its normal speeds back, although nvidia-smi dmon -s u shows "-" in all values, and nvidia-smi shows this:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 6000 Blac...    Off |   00000000:01:00.0 Off |                   On |
| 30%   36C    P8             27W /  300W |                  N/A   |     N/A      Default |
|                                         |                        |              Enabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| MIG devices:                                                                            |
+------------------+----------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |              Shared Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |                Shared BAR1-Usage | SM     Unc| CE ENC  DEC  OFA  JPG |
|                  |                                  |        ECC|                       |
|==================+==================================+===========+=======================|
|  0    1   0   0  |           45859MiB / 48512MiB    | 94      0 |  2   2    2    0    2 |
|                  |               0MiB / 16654MiB    |           |                       |
+------------------+----------------------------------+-----------+-----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0    1    0               88      C   /usr/bin/ollama                         400MiB |
+-----------------------------------------------------------------------------------------+
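
As a cross-check of what ollama ps reports, the same information is available from Ollama's API: GET /api/ps returns size and size_vram for each loaded model, so a size_vram far below size would mean the model is not actually resident in VRAM.

```shell
curl -s http://localhost:11434/api/ps
# Optionally, with jq installed, show just the size split per model:
curl -s http://localhost:11434/api/ps | jq '.models[] | {name, size, size_vram}'
```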

Relevant log output

time=2026-01-21T07:58:24.393Z level=INFO source=routes.go:1614 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:128000 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/models_map OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2026-01-21T07:58:24.396Z level=INFO source=images.go:499 msg="total blobs: 33"

time=2026-01-21T07:58:24.396Z level=INFO source=images.go:506 msg="total unused blobs removed: 0"

time=2026-01-21T07:58:24.397Z level=INFO source=routes.go:1667 msg="Listening on [::]:11434 (version 0.14.2)"

time=2026-01-21T07:58:24.397Z level=INFO source=runner.go:67 msg="discovering available GPUs..."

time=2026-01-21T07:58:24.397Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45045"

time=2026-01-21T07:58:24.645Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 36445"

time=2026-01-21T07:58:24.863Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"

time=2026-01-21T07:58:24.863Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35809"

time=2026-01-21T07:58:24.864Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34865"

time=2026-01-21T07:58:25.185Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6d928b77-2cb9-381e-b44b-27d95a658611 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="47.4 GiB" available="446.3 GiB"

[GIN] 2026/01/21 - 08:00:24 | 200 | 41.714µs | 127.0.0.1 | HEAD "/"

[GIN] 2026/01/21 - 08:00:24 | 200 | 112.928095ms | 127.0.0.1 | POST "/api/show"

time=2026-01-21T08:00:24.816Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 42421"

time=2026-01-21T08:00:25.072Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" 
llama_model_loader: loaded meta data with 41 key-value pairs and 363 tensors from /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 (version GGUF V3 (latest))

[...]

time=2026-01-21T08:00:25.329Z level=INFO source=server.go:429 msg="starting runner" cmd="/usr/bin/ollama runner --model /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 --port 36707"

time=2026-01-21T08:00:25.329Z level=INFO source=sched.go:452 msg="system memory" total="502.8 GiB" free="502.6 GiB" free_swap="8.0 GiB"

time=2026-01-21T08:00:25.329Z level=INFO source=sched.go:459 msg="gpu memory" id=GPU-6d928b77-2cb9-381e-b44b-27d95a658611 library=CUDA available="446.4 GiB" free="446.9 GiB" minimum="457.0 MiB" overhead="0 B"

time=2026-01-21T08:00:25.329Z level=INFO source=server.go:496 msg="loading model" "model layers"=41 requested=-1

time=2026-01-21T08:00:25.329Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="22.7 GiB"

time=2026-01-21T08:00:25.329Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="4.9 GiB"

time=2026-01-21T08:00:25.329Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="2.1 GiB"

time=2026-01-21T08:00:25.330Z level=INFO source=device.go:272 msg="total memory" size="29.6 GiB"

time=2026-01-21T08:00:25.342Z level=INFO source=runner.go:965 msg="starting go runner"

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so

ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no

ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no

ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb, compute capability 12.0, VMM: yes, ID: GPU-6d928b77-2cb9-381e-b44b-27d95a658611

load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-01-21T08:00:25.400Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)

time=2026-01-21T08:00:25.401Z level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:36707"

time=2026-01-21T08:00:25.403Z level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:32000 KvCacheType: NumThreads:64 GPULayers:41[ID:GPU-6d928b77-2cb9-381e-b44b-27d95a658611 Layers:41(0..40)] MultiUserCache:false ProjectorPath:/models_map/blobs/sha256-d6af684ae9136398eaa0b59ea9e0b0b850bb6ac5084f1e8c5cb8f85251825eaf MainGPU:0 UseMmap:true}"

time=2026-01-21T08:00:25.403Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"

time=2026-01-21T08:00:25.403Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"

ggml_backend_cuda_get_available_uma_memory: final available_memory_kb: 468464120

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb) (0000:01:00.0) - 457484 MiB free

llama_model_loader: loaded meta data with 41 key-value pairs and 363 tensors from /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 (version GGUF V3 (latest))

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

[...]

load_tensors: loading model tensors, this can take a while... (mmap = true)

time=2026-01-21T08:00:26.106Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server not responding"

load_tensors: offloading 40 repeating layers to GPU

load_tensors: offloading output layer to GPU

load_tensors: offloaded 41/41 layers to GPU

load_tensors: CPU_Mapped model buffer size = 680.00 MiB

load_tensors: CUDA0 model buffer size = 23206.58 MiB

time=2026-01-21T08:00:27.259Z level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
[...]
time=2026-01-21T08:00:33.534Z level=INFO source=server.go:1385 msg="llama runner started in 8.20 seconds"

time=2026-01-21T08:00:33.534Z level=INFO source=sched.go:526 msg="loaded runners" count=1

time=2026-01-21T08:00:33.534Z level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"

time=2026-01-21T08:00:33.534Z level=INFO source=server.go:1385 msg="llama runner started in 8.20 seconds"

[GIN] 2026/01/21 - 08:01:02 | 200 | 38.145927744s | 127.0.0.1 | POST "/api/generate"

OS

Docker

GPU

Nvidia

CPU

AMD

Ollama version

0.14.2

GiteaMirror added the bug label 2026-05-05 00:23:10 -05:00

@rick-github commented on GitHub (Jan 21, 2026):

load_tensors: offloading output layer to GPU
load_tensors: offloaded 41/41 layers to GPU

Ollama thinks it is using the GPU. Can you pinpoint which version (somewhere between 0.13.0 and 0.14.2) of ollama starts to perform slowly?
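
Something like this would walk the intermediate releases (a rough sketch; the tags and model name are illustrative, adjust to the versions actually published between 0.13.1 and 0.14.2):

```shell
# Hypothetical bisection: start each release tag, time one short generation, stop it.
for v in 0.13.2 0.13.3 0.14.0 0.14.1 0.14.2; do
  docker run -d --rm --name ollama-bisect --gpus all -p 11434:11434 \
    -v ollama:/root/.ollama ollama/ollama:"$v"
  sleep 10   # give the server time to come up
  echo "== $v =="
  time curl -s http://localhost:11434/api/generate \
    -d '{"model":"mistral-small3.1","prompt":"hello","stream":false}' > /dev/null
  docker stop ollama-bisect
done
```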


@CL415 commented on GitHub (Jan 21, 2026):

load_tensors: offloading output layer to GPU
load_tensors: offloaded 41/41 layers to GPU

Ollama thinks it is using the GPU. Can you pinpoint which version (somewhere between 0.13.0 and 0.14.2) of ollama starts to perform slowly?

Just confirmed that performance drops once updated to 0.13.2; 0.13.1 still produces the expected speeds. At first glance I am not seeing much difference in the logs, to be honest, but here they are just in case:

time=2026-01-21T12:06:35.282Z level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:128000 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/models_map OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

time=2026-01-21T12:06:35.284Z level=INFO source=images.go:522 msg="total blobs: 33"

time=2026-01-21T12:06:35.284Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"

time=2026-01-21T12:06:35.285Z level=INFO source=routes.go:1597 msg="Listening on [::]:11434 (version 0.13.1)"

time=2026-01-21T12:06:35.285Z level=INFO source=runner.go:67 msg="discovering available GPUs..."

time=2026-01-21T12:06:35.286Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 41671"

time=2026-01-21T12:06:35.510Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 44105"

time=2026-01-21T12:06:35.735Z level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"

time=2026-01-21T12:06:35.735Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39709"

time=2026-01-21T12:06:35.736Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 45223"

time=2026-01-21T12:06:36.058Z level=INFO source=types.go:42 msg="inference compute" id=GPU-6d928b77-2cb9-381e-b44b-27d95a658611 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="47.4 GiB" available="8.6 GiB"

[GIN] 2026/01/21 - 12:06:54 | 200 |      63.718µs |       127.0.0.1 | GET      "/api/version"

[GIN] 2026/01/21 - 12:07:26 | 200 |      26.301µs |       127.0.0.1 | HEAD     "/"

[GIN] 2026/01/21 - 12:07:26 | 200 |   39.773357ms |       127.0.0.1 | POST     "/api/show"

time=2026-01-21T12:07:26.812Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 38903"

time=2026-01-21T12:07:27.051Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"

llama_model_loader: loaded meta data with 41 key-value pairs and 363 tensors from /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 (version GGUF V3 (latest))

[...]

llama_model_load: vocab only - skipping tensors

time=2026-01-21T12:07:27.284Z level=INFO source=server.go:209 msg="enabling flash attention"

time=2026-01-21T12:07:27.284Z level=INFO source=server.go:392 msg="starting runner" cmd="/usr/bin/ollama runner --model /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 --port 40075"
time=2026-01-21T12:07:27.284Z level=INFO source=sched.go:443 msg="system memory" total="502.8 GiB" free="502.6 GiB" free_swap="8.0 GiB"

time=2026-01-21T12:07:27.284Z level=INFO source=sched.go:450 msg="gpu memory" id=GPU-6d928b77-2cb9-381e-b44b-27d95a658611 library=CUDA available="8.1 GiB" free="8.6 GiB" minimum="457.0 MiB" overhead="0 B"

time=2026-01-21T12:07:27.284Z level=INFO source=server.go:459 msg="loading model" "model layers"=41 requested=-1

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="3.8 GiB"

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:245 msg="model weights" device=CPU size="18.8 GiB"

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="875.0 MiB"

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:256 msg="kv cache" device=CPU size="4.0 GiB"

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="2.2 GiB"

time=2026-01-21T12:07:27.285Z level=INFO source=device.go:272 msg="total memory" size="29.7 GiB"

time=2026-01-21T12:07:27.296Z level=INFO source=runner.go:963 msg="starting go runner"

load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no

ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no

ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb, compute capability 12.0, VMM: yes, ID: GPU-6d928b77-2cb9-381e-b44b-27d95a658611

load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so

time=2026-01-21T12:07:27.354Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)

time=2026-01-21T12:07:27.355Z level=INFO source=runner.go:999 msg="Server listening on 127.0.0.1:40075"

time=2026-01-21T12:07:27.359Z level=INFO source=runner.go:893 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:32000 KvCacheType: NumThreads:64 GPULayers:7[ID:GPU-6d928b77-2cb9-381e-b44b-27d95a658611 Layers:7(33..39)] MultiUserCache:false ProjectorPath:/models_map/blobs/sha256-d6af684ae9136398eaa0b59ea9e0b0b850bb6ac5084f1e8c5cb8f85251825eaf MainGPU:0 UseMmap:true}"

time=2026-01-21T12:07:27.360Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"

time=2026-01-21T12:07:27.361Z level=INFO source=server.go:1328 msg="waiting for server to become available" status="llm server loading model"

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition MIG 2g.48gb) (0000:01:00.0) - 8759 MiB free

llama_model_loader: loaded meta data with 41 key-value pairs and 363 tensors from /models_map/blobs/sha256-b2a40e5ef4eab8837d0462c303e8147ec754e2963e41916b551107d2b0ca6527 (version GGUF V3 (latest))

llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.

[...]
alloc_compute_meta:      CUDA0 compute buffer size =     3.97 MiB

alloc_compute_meta:        CPU compute buffer size =     0.14 MiB

time=2026-01-21T12:07:51.188Z level=INFO source=server.go:1332 msg="llama runner started in 23.90 seconds"

time=2026-01-21T12:07:51.188Z level=INFO source=sched.go:517 msg="loaded runners" count=1

time=2026-01-21T12:07:51.188Z level=INFO source=server.go:1294 msg="waiting for llama runner to start responding"

time=2026-01-21T12:07:51.189Z level=INFO source=server.go:1332 msg="llama runner started in 23.90 seconds"
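
For comparing the two runs side by side, grepping the memory and offload accounting lines makes the difference easier to spot (log file names here are illustrative):

```shell
# Illustrative: extract the offload/memory lines from each version's saved log
# to compare how many layers land on the GPU.
grep -E 'gpu memory|model weights|kv cache|GPULayers|offloaded' \
  ollama-0.13.1.log ollama-0.14.2.log
```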
Reference: github-starred/ollama#71109