[GH-ISSUE #10061] Ollama hasn't used GPUs since updating to v0.6.3! #53109

Closed
opened 2026-04-29 01:59:00 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @Mozartuss on GitHub (Mar 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10061

What is the issue?

Ollama reserves a GPU slot but then uses the CPU instead of the GPU. I tested this with the Gemma3:4b model during image generation. If I use DeepseekV3:671B, Ollama uses the GPU for normal text-to-text operations, but it also falls back to the CPU when I want to generate an image.

NVIDIA-SMI

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05              Driver Version: 560.35.05      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          Off |   00000000:19:00.0 Off |                    0 |
| N/A   32C    P0            117W /  700W |    4406MiB /  81559MiB |      2%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 80GB HBM3          Off |   00000000:3B:00.0 Off |                    0 |
| N/A   27C    P0             71W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H100 80GB HBM3          Off |   00000000:4C:00.0 Off |                    0 |
| N/A   27C    P0             70W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA H100 80GB HBM3          Off |   00000000:5D:00.0 Off |                    0 |
| N/A   28C    P0             71W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   4  NVIDIA H100 80GB HBM3          Off |   00000000:9B:00.0 Off |                    0 |
| N/A   30C    P0             74W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   5  NVIDIA H100 80GB HBM3          Off |   00000000:BB:00.0 Off |                    0 |
| N/A   27C    P0             69W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   6  NVIDIA H100 80GB HBM3          Off |   00000000:CB:00.0 Off |                    0 |
| N/A   30C    P0             70W /  700W |       4MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   7  NVIDIA H100 80GB HBM3          Off |   00000000:DB:00.0 Off |                    0 |
| N/A   30C    P0            119W /  700W |   17054MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A    152617      C   /usr/bin/ollama                                 0MiB |
|    7   N/A  N/A    151596      C   python3                                     17044MiB |
+-----------------------------------------------------------------------------------------+
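Beyond nvidia-smi, Ollama itself reports where each loaded model landed: `ollama ps` (here, `docker exec ollama ollama ps`) prints a PROCESSOR column reading "100% GPU", "100% CPU", or a split such as "48%/52% CPU/GPU". A minimal bash sketch of flagging CPU fallback from such a row; the sample rows are hypothetical, not from this machine:

```shell
#!/bin/bash
# flag_cpu: inspect one `ollama ps` row and report whether any part of the
# model landed on the CPU. The PROCESSOR column contains the substring "CPU"
# for both "100% CPU" and split placements like "48%/52% CPU/GPU".
flag_cpu() {
  if grep -q 'CPU' <<<"$1"; then
    echo "cpu-fallback"
  else
    echo "gpu-only"
  fi
}

# Hypothetical sample rows (model, id, size, processor, until):
flag_cpu "gemma3:4b   a2af6cc3eb7f   6.6 GB   100% GPU   30 minutes from now"   # gpu-only
flag_cpu "gemma3:4b   a2af6cc3eb7f   6.6 GB   100% CPU   30 minutes from now"   # cpu-fallback
```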

If I start the container, Ollama recognizes the GPUs but then doesn't use them.
I also use ComfyUI to generate images, and that works fine in Docker.
Here is my partial docker-compose.yml file:

docker-compose.yml

services:
  ollama:
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 11434
    environment:
      - OLLAMA_DEBUG=1
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_LOAD_TIMEOUT=30m
      - OLLAMA_KV_CACHE_TYPE=q8_0
      - OLLAMA_NEW_ENGINE=1
      - OLLAMA_NUM_PARALLEL=4
      - OLLAMA_KEEP_ALIVE=30m
      - OLLAMA_MAX_LOADED_MODELS=7
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*,safari-web-extension://*
      - OLLAMA_CONTEXT_LENGTH=4096
      - OLLAMA_GPU_OVERHEAD=1G
      #- CUDA_VISIBLE_DEVICES="GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9,GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad,GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb,GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7,GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8,GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c,GPU-cdd09948-4b04-6af1-055d-65d6795352aa"
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    networks:
      - internet
volumes:
    ollama:
      driver: local
      driver_opts:
        type: none
        o: bind
        device: /var/ollama/ollama
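One detail worth noting when comparing this file with the log below: the server's startup line reports OLLAMA_CONTEXT_LENGTH:2048, OLLAMA_GPU_OVERHEAD:0 and OLLAMA_MAX_LOADED_MODELS:0, while the compose file sets 4096, 1G and 7, which suggests the container was not recreated after these environment values were set (`docker compose up -d --force-recreate` would rule that out). A minimal bash sketch of the comparison, with both values copied from this report:

```shell
#!/bin/bash
# Compare a value configured in docker-compose.yml against the value the
# server log reported at startup (both taken from this issue).
configured="4096"                       # OLLAMA_CONTEXT_LENGTH in docker-compose.yml
logged="OLLAMA_CONTEXT_LENGTH:2048"     # from the routes.go startup line
effective="${logged#*:}"                # strip everything up to the first ':'

if [ "$configured" != "$effective" ]; then
  echo "mismatch: configured $configured, effective $effective"
fi
```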

[logs.txt](https://github.com/user-attachments/files/19532848/logs.txt)

Relevant log output

2025/03/31 08:36:00 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:30m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
level=INFO source=images.go:432 msg="total blobs: 48"
level=INFO source=images.go:439 msg="total unused blobs removed: 0"
level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.3)"
level=DEBUG source=sched.go:106 msg="starting llm scheduler"
level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:125 msg="detected GPUs" count=8 library=/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] CUDA totalMem 81109 mb
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] CUDA freeMem 80580 mb
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] Compute Capability 9.0
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] CUDA totalMem 81109 mb
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] CUDA freeMem 80580 mb
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] Compute Capability 9.0
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] CUDA totalMem 81109 mb
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] CUDA freeMem 80580 mb
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] Compute Capability 9.0
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] CUDA totalMem 81109 mb
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] CUDA freeMem 80580 mb
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] Compute Capability 9.0
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] CUDA totalMem 81109 mb
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] CUDA freeMem 80580 mb
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] Compute Capability 9.0
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] CUDA totalMem 81109 mb
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] CUDA freeMem 80580 mb
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] Compute Capability 9.0
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] CUDA totalMem 81109 mb
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] CUDA freeMem 80580 mb
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] Compute Capability 9.0
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] CUDA totalMem 81109 mb
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] CUDA freeMem 80059 mb
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] Compute Capability 9.0
level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
level=INFO source=types.go:130 msg="inference compute" id=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-cdd09948-4b04-6af1-055d-65d6795352aa library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-54007339-25c2-ed5a-5016-cd4ea527527c library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.2 GiB"
[GIN] 2025/03/31 - 08:36:58 | 200 |   11.991315ms |      172.18.0.3 | GET      "/api/tags"
[GIN] 2025/03/31 - 08:36:58 | 200 |     165.148µs |  141.82.169.215 | GET      "/api/version"
[GIN] 2025/03/31 - 08:37:01 | 200 |      88.697µs |  141.82.169.215 | GET      "/api/version"
[GIN] 2025/03/31 - 08:37:06 | 200 |     143.729µs |  141.82.169.215 | GET      "/api/version"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="4007.5 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=24 gpu_count=8
level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada
level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 parallel=4 available=84495040512 required="5.4 GiB"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=server.go:105 msg="system memory" total="4031.3 GiB" free="3996.1 GiB" free_swap="8.0 GiB"
level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.4 GiB" memory.required.partial="5.4 GiB" memory.required.kv="341.0 MiB" memory.required.allocations="[5.4 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
level=INFO source=server.go:185 msg="enabling flash attention"
level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --verbose --threads 112 --flash-attn --kv-cache-type q8_0 --parallel 4 --port 42199"
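
The `msg=offload` line above is the key datapoint: layers.model=35 with layers.offload=35 means the scheduler planned a full GPU offload for this model, so any CPU activity afterwards happens despite the placement decision, not because of it. A bash sketch of reading those two fields out of a copy of that log line:

```shell
#!/bin/bash
# Extract layers.model and layers.offload from the msg=offload log line
# (truncated copy of the server.go:138 line from the log above).
line='level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35'

model=$(sed -n 's/.*layers\.model=\([0-9]*\).*/\1/p' <<<"$line")
offload=$(sed -n 's/.*layers\.offload=\([0-9]*\).*/\1/p' <<<"$line")

if [ "$offload" = "$model" ]; then
  echo "full GPU offload planned ($offload/$model layers)"
else
  echo "partial offload: $offload of $model layers"
fi
```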

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.6.3

Originally created by @Mozartuss on GitHub (Mar 31, 2025). Original GitHub issue: https://github.com/ollama/ollama/issues/10061 ### What is the issue? Ollama reserve a GPU place but then using the CPU instead of the GPU. The test was with the Gemma3:4b model during Image generation. If I use DeepseekV3:671B ollama use the GPU during normal text to text operations but also use the CPU if I want to gernerate a Image. ### NVIDIA-SMI ``` +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA H100 80GB HBM3 Off | 00000000:19:00.0 Off | 0 | | N/A 32C P0 117W / 700W | 4406MiB / 81559MiB | 2% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA H100 80GB HBM3 Off | 00000000:3B:00.0 Off | 0 | | N/A 27C P0 71W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 2 NVIDIA H100 80GB HBM3 Off | 00000000:4C:00.0 Off | 0 | | N/A 27C P0 70W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 3 NVIDIA H100 80GB HBM3 Off | 00000000:5D:00.0 Off | 0 | | N/A 28C P0 71W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 4 NVIDIA H100 80GB HBM3 Off | 00000000:9B:00.0 Off | 0 | | N/A 30C P0 74W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | 
+-----------------------------------------+------------------------+----------------------+ | 5 NVIDIA H100 80GB HBM3 Off | 00000000:BB:00.0 Off | 0 | | N/A 27C P0 69W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 6 NVIDIA H100 80GB HBM3 Off | 00000000:CB:00.0 Off | 0 | | N/A 30C P0 70W / 700W | 4MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 7 NVIDIA H100 80GB HBM3 Off | 00000000:DB:00.0 Off | 0 | | N/A 30C P0 119W / 700W | 17054MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | 0 N/A N/A 152617 C /usr/bin/ollama 0MiB | | 7 N/A N/A 151596 C python3 17044MiB | +-----------------------------------------------------------------------------------------+ ``` If i start the container, ollama recognize the GPUs but then didnt use it. I also use ComfyUI to generate images, this works godd in docker. 
Here is my partial `docker-compose.yml` file:

### docker-compose.yml

```yaml
services:
  ollama:
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 11434
    environment:
      - OLLAMA_DEBUG=1
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_LOAD_TIMEOUT=30m
      - OLLAMA_KV_CACHE_TYPE=q8_0
      - OLLAMA_NEW_ENGINE=1
      - OLLAMA_NUM_PARALLEL=4
      - OLLAMA_KEEP_ALIVE=30m
      - OLLAMA_MAX_LOADED_MODELS=7
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*,safari-web-extension://*
      - OLLAMA_CONTEXT_LENGTH=4096
      - OLLAMA_GPU_OVERHEAD=1G
      #- CUDA_VISIBLE_DEVICES="GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9,GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad,GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb,GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7,GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8,GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c,GPU-cdd09948-4b04-6af1-055d-65d6795352aa"
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "2"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    networks:
      - internet

volumes:
  ollama:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/ollama/ollama
```

[logs.txt](https://github.com/user-attachments/files/19532848/logs.txt)

### Relevant log output

```shell
2025/03/31 08:36:00 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:30m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
level=INFO source=images.go:432 msg="total blobs: 48"
level=INFO source=images.go:439 msg="total unused blobs removed: 0"
level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.3)"
level=DEBUG source=sched.go:106 msg="starting llm scheduler"
level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:125 msg="detected GPUs" count=8 library=/usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] CUDA totalMem 81109 mb
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] CUDA freeMem 80580 mb
[GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9] Compute Capability 9.0
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] CUDA totalMem 81109 mb
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] CUDA freeMem 80580 mb
[GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad] Compute Capability 9.0
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] CUDA totalMem 81109 mb
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] CUDA freeMem 80580 mb
[GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb] Compute Capability 9.0
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] CUDA totalMem 81109 mb
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] CUDA freeMem 80580 mb
[GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7] Compute Capability 9.0
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] CUDA totalMem 81109 mb
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] CUDA freeMem 80580 mb
[GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8] Compute Capability 9.0
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] CUDA totalMem 81109 mb
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] CUDA freeMem 80580 mb
[GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c] Compute Capability 9.0
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] CUDA totalMem 81109 mb
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] CUDA freeMem 80580 mb
[GPU-cdd09948-4b04-6af1-055d-65d6795352aa] Compute Capability 9.0
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] CUDA totalMem 81109 mb
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] CUDA freeMem 80059 mb
[GPU-54007339-25c2-ed5a-5016-cd4ea527527c] Compute Capability 9.0
level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
level=INFO source=types.go:130 msg="inference compute" id=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-cdd09948-4b04-6af1-055d-65d6795352aa library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.7 GiB"
level=INFO source=types.go:130 msg="inference compute" id=GPU-54007339-25c2-ed5a-5016-cd4ea527527c library=cuda variant=v12 compute=9.0 driver=12.6 name="NVIDIA H100 80GB HBM3" total="79.2 GiB" available="78.2 GiB"
[GIN] 2025/03/31 - 08:36:58 | 200 | 11.991315ms | 172.18.0.3 | GET "/api/tags"
[GIN] 2025/03/31 - 08:36:58 | 200 | 165.148µs | 141.82.169.215 | GET "/api/version"
[GIN] 2025/03/31 - 08:37:01 | 200 | 88.697µs | 141.82.169.215 | GET "/api/version"
[GIN] 2025/03/31 - 08:37:06 | 200 | 143.729µs | 141.82.169.215 | GET "/api/version"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="4007.5 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=DEBUG source=sched.go:182 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=24 gpu_count=8
level=DEBUG source=sched.go:225 msg="loading first model" model=/root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada
level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 parallel=4 available=84495040512 required="5.4 GiB"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=server.go:105 msg="system memory" total="4031.3 GiB" free="3996.1 GiB" free_swap="8.0 GiB"
level=DEBUG source=memory.go:108 msg=evaluating library=cuda gpu_count=1 available="[78.7 GiB]"
level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="4031.3 GiB" before.free="3996.1 GiB" before.free_swap="8.0 GiB" now.total="4031.3 GiB" now.free="3996.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.560.35.05
dlsym: cuInit - 0x78b722060800
dlsym: cuDriverGetVersion - 0x78b722060820
dlsym: cuDeviceGetCount - 0x78b722060860
dlsym: cuDeviceGet - 0x78b722060840
dlsym: cuDeviceGetAttribute - 0x78b722060940
dlsym: cuDeviceGetUuid - 0x78b7220608a0
dlsym: cuDeviceGetName - 0x78b722060880
dlsym: cuCtxCreate_v3 - 0x78b72206b020
dlsym: cuMemGetInfo_v2 - 0x78b7220764e0
dlsym: cuCtxDestroy - 0x78b7220d11b0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f1c
CUDA driver version: 12.6
calling cuDeviceGetCount
device count 8
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-325756ac-0b66-8d63-dcdd-4e50b69df7a9 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-312f787c-7b2d-a8a9-3bc2-2c715edcdfad name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-5695c0a3-7ad8-87d4-a576-0c35923189eb name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-bb2d14be-730b-4d06-a992-1ae3d9ecc0c7 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-ba3add55-ef1f-4f31-e411-8cd0834fcce8 name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-de1ac0ce-f7f4-5489-158d-486a3c8ded1c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-cdd09948-4b04-6af1-055d-65d6795352aa name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.7 GiB" now.total="79.2 GiB" now.free="78.7 GiB" now.used="529.1 MiB"
level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-54007339-25c2-ed5a-5016-cd4ea527527c name="NVIDIA H100 80GB HBM3" overhead="0 B" before.total="79.2 GiB" before.free="78.2 GiB" now.total="79.2 GiB" now.free="78.2 GiB" now.used="1.0 GiB"
releasing cuda driver library
level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.4 GiB" memory.required.partial="5.4 GiB" memory.required.kv="341.0 MiB" memory.required.allocations="[5.4 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
level=INFO source=server.go:185 msg="enabling flash attention"
level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible="[cuda_v12 cuda_v11]"
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
level=DEBUG source=process_text_spm.go:27 msg=Tokens "num tokens"=262145 vals="[<pad> <eos> <bos> <unk> <mask>]" scores="[0 0 0 0 0]" types="[3 3 3 2 1]"
level=DEBUG source=process_text_spm.go:41 msg="Token counts" normal=261882 unknown=1 control=5 "user defined"=1 unused=0 byte=256 "max token len"=93
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
level=DEBUG source=server.go:335 msg="adding gpu library" path=/usr/lib/ollama/cuda_v12
level=DEBUG source=server.go:343 msg="adding gpu dependency paths" paths=[/usr/lib/ollama/cuda_v12]
level=INFO source=server.go:405 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada --ctx-size 8192 --batch-size 512 --n-gpu-layers 35 --verbose --threads 112 --flash-attn --kv-cache-type q8_0 --parallel 4 --port 42199"
```

### OS

Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.6.3
GiteaMirror added the bug label 2026-04-29 01:59:00 -05:00

@rick-github commented on GitHub (Mar 31, 2025):

You left off the bit of the log that shows what the runner is doing.


@Mozartuss commented on GitHub (Mar 31, 2025):

I uploaded the complete logs as a .txt file: [logs.txt](https://github.com/user-attachments/files/19532848/logs.txt)


@rick-github commented on GitHub (Mar 31, 2025):

```
level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=35 layers.offload=35 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.4 GiB" memory.required.partial="5.4 GiB" memory.required.kv="341.0 MiB" memory.required.allocations="[5.4 GiB]" memory.weights.total="2.3 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
```

gemma3:4b is a small model; Ollama estimates that it fits entirely on one GPU.
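As a rough cross-check (my arithmetic, not from the original comment), the individual components in the offload log line account for most of the reported 5.4 GiB estimate; the remainder is allocation overhead and padding added by the estimator:

```python
# Components from the offload log line, converted to GiB where logged in MiB.
weights_total     = 2.3               # memory.weights.total
kv_cache          = 341.0 / 1024      # memory.required.kv
graph_full        = 517.0 / 1024      # memory.graph.full
projector_weights = 795.9 / 1024      # projector.weights
projector_graph   = 1.0               # projector.graph

accounted = weights_total + kv_cache + graph_full + projector_weights + projector_graph
print(f"{accounted:.2f} GiB of the 5.4 GiB estimate accounted for")
```

The sum comes to roughly 4.9 GiB, comfortably within the single-GPU allocation of 5.4 GiB, which is why the scheduler picks one GPU rather than splitting layers.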

```
time=2025-03-31T08:37:27.525Z level=DEBUG source=ggml.go:222 msg="created tensor" name=mm.mm_input_projection.weight shape="[2560 1152]" dtype=1 buffer_type=CUDA0
```

The runner starts assigning parts of the model to CUDA0.

```
time=2025-03-31T08:37:27.525Z level=DEBUG source=ggml.go:222 msg="created tensor" name=token_embd.weight shape="[2560 262144]" dtype=14 buffer_type=CPU
```

Part of the model is assigned to the CPU.

```
time=2025-03-31T08:37:27.538Z level=INFO source=ggml.go:291 msg="model weights" buffer=CUDA0 size="3.1 GiB"
time=2025-03-31T08:37:27.538Z level=INFO source=ggml.go:291 msg="model weights" buffer=CPU size="525.0 MiB"
```

The bulk of the model is assigned to the GPU.

```
|=========================================+========================+======================|
|   0  NVIDIA H100 80GB HBM3          Off |   00000000:19:00.0 Off |                    0 |
| N/A   32C    P0            117W /  700W |    4406MiB /  81559MiB |      2%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
```

This looks normal. The model is running on the GPU, with one tensor kept on the CPU because that tensor type is not supported by the CUDA backend.


@jessegross commented on GitHub (Mar 31, 2025):

`OLLAMA_KV_CACHE_TYPE=q8_0`

See https://github.com/ollama/ollama/issues/9683

I would recommend trying to run with an unquantized KV cache.
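Not part of the original comment: concretely, in a compose setup like the one above, this is a one-line change to the `environment` block (f16 is Ollama's default KV cache type, so removing the variable entirely has the same effect):

```yaml
environment:
  # The quantized q8_0 KV cache pushed the graph onto the CPU (see #9683);
  # use the unquantized default instead:
  - OLLAMA_KV_CACHE_TYPE=f16
```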


@Mozartuss commented on GitHub (Apr 1, 2025):

> `OLLAMA_KV_CACHE_TYPE=q8_0`
>
> See #9683
>
> I would recommend trying to run with an unquantized KV cache.

Oh nice, thanks, that was the problem for me.


Reference: github-starred/ollama#53109