[GH-ISSUE #8884] 0.5.8-rc7: ROCm Not Loading onto GPU #5759

Closed
opened 2026-04-12 17:04:48 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @ProjectMoon on GitHub (Feb 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8884

Originally assigned to: @mxyng on GitHub.

What is the issue?

I tried out 0.5.8-rc7, and ollama reports that the model will be offloaded to the GPU, but it appears to run only on the CPU.

I found https://github.com/ollama/ollama/issues/8828, but this seems to happen even with the libraries in the right place.

I tried two scenarios:

  1. Download ollama-linux and ollama-linux-rocm .tar.gz files, and untar to /opt/ollama. Run with the dist/linux/amd64 directory structure.
  2. Copy the libraries into the old directory structure (/opt/ollama/bin/ollama and /opt/ollama/lib/), with all of the new per-backend lib directories merged together.

In either case, ollama reports that it is offloading to ROCm, but the model runs on the CPU.
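One quick way to confirm the CPU fallback is to check llama.cpp's buffer lines in the server log rather than the scheduler's "offload to rocm" message: a "CPU_Mapped model buffer" allocation means the weights landed in host memory regardless of what the scheduler reported. A minimal sketch (`check_backend` is a hypothetical helper for illustration, not an ollama command):

```shell
# Hypothetical helper: inspect a saved ollama server log and report
# whether the model weights were actually CPU-mapped.
check_backend() {
  if grep -q "CPU_Mapped model buffer" "$1"; then
    echo "weights are CPU-mapped: no real GPU offload happened"
  else
    echo "no CPU_Mapped buffer line found"
  fi
}

# Demo against the symptom line that appears in the logs below:
printf 'llm_load_tensors:   CPU_Mapped model buffer size = 10016.35 MiB\n' > /tmp/ollama-demo.log
check_backend /tmp/ollama-demo.log
# prints: weights are CPU-mapped: no real GPU offload happened
```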

Relevant log output

Here are the logs:

2025/02/06 12:30:37 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q8_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-06T12:30:37.908+01:00 level=INFO source=images.go:432 msg="total blobs: 118"
time=2025-02-06T12:30:37.909+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-06T12:30:37.910+01:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.8-rc7)"
time=2025-02-06T12:30:37.911+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2025-02-06T12:30:37.911+01:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-06T12:30:37.912+01:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-02-06T12:30:37.912+01:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-02-06T12:30:37.912+01:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/opt/ollama/bin/libcuda.so* /libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-02-06T12:30:37.924+01:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.565.77 /usr/lib64/libcuda.so.565.77]"
initializing /usr/lib/libcuda.so.565.77
library /usr/lib/libcuda.so.565.77 load err: /usr/lib/libcuda.so.565.77: wrong ELF class: ELFCLASS32
time=2025-02-06T12:30:37.924+01:00 level=DEBUG source=gpu.go:609 msg="skipping 32bit library" library=/usr/lib/libcuda.so.565.77
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:37.931+01:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib64/libcuda.so.565.77
[GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7] CUDA totalMem 4030 mb
[GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7] CUDA freeMem 3963 mb
[GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7] Compute Capability 5.2
time=2025-02-06T12:30:38.019+01:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-02-06T12:30:38.019+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-02-06T12:30:38.019+01:00 level=DEBUG source=amd_linux.go:121 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_linux.go:101 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_linux.go:206 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties vendor=4098 device=29631 unique_id=10870137312548343375
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_linux.go:240 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/1/properties drm=/sys/class/drm/card0/device
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_linux.go:318 msg="amdgpu memory" gpu=0 total="16.0 GiB"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_linux.go:319 msg="amdgpu memory" gpu=0 available="15.4 GiB"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/ollama/bin/rocm"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
time=2025-02-06T12:30:38.020+01:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/lib64"
time=2025-02-06T12:30:38.025+01:00 level=DEBUG source=amd_linux.go:371 msg="rocm supported GPUs" types="[gfx1030 gfx1100 gfx906 gfx908 gfx90a gfx942]"
time=2025-02-06T12:30:38.025+01:00 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-96da7c4b1629ce4f gpu_type=gfx1030
releasing cuda driver library
time=2025-02-06T12:30:38.025+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 library=cuda variant=v11 compute=5.2 driver=12.7 name="NVIDIA GeForce GTX 970" total="3.9 GiB" available="3.9 GiB"
time=2025-02-06T12:30:38.025+01:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-96da7c4b1629ce4f library=rocm variant="" compute=gfx1030 driver=0.0 name=1002:73bf total="16.0 GiB" available="15.4 GiB"
[GIN] 2025/02/06 - 12:30:44 | 200 |      41.835µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/02/06 - 12:30:44 | 200 |   21.973698ms |       127.0.0.1 | POST     "/api/show"
time=2025-02-06T12:30:44.263+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.2 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.344+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.344+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.344+01:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x55e3b630e9c0 gpu_count=2
time=2025-02-06T12:30:44.395+01:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862
time=2025-02-06T12:30:44.395+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[3.9 GiB]"
time=2025-02-06T12:30:44.396+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.470+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.470+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.470+01:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 library=cuda variant=v11 compute=5.2 driver=12.7 name="NVIDIA GeForce GTX 970" total="3.9 GiB" available="3.9 GiB" minimum_memory=479199232 layer_size="307.4 MiB" gpu_zer_overhead="0 B" partial_offload="5.9 GiB" full_offload="4.7 GiB"
time=2025-02-06T12:30:44.470+01:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-06T12:30:44.470+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[3.9 GiB]"
time=2025-02-06T12:30:44.471+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.0 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.543+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.543+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.544+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[3.9 GiB]"
time=2025-02-06T12:30:44.544+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.0 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.611+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.611+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.612+01:00 level=DEBUG source=memory.go:186 msg="gpu has too little memory to allocate any layers" id=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 library=cuda variant=v11 compute=5.2 driver=12.7 name="NVIDIA GeForce GTX 970" total="3.9 GiB" available="3.9 GiB" minimum_memory=479199232 layer_size="307.4 MiB" gpu_zer_overhead="0 B" partial_offload="5.9 GiB" full_offload="4.7 GiB"
time=2025-02-06T12:30:44.612+01:00 level=DEBUG source=memory.go:330 msg="insufficient VRAM to load any model layers"
time=2025-02-06T12:30:44.612+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[3.9 GiB]"
time=2025-02-06T12:30:44.612+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.681+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.681+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.681+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
time=2025-02-06T12:30:44.682+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.750+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.750+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.751+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
time=2025-02-06T12:30:44.751+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.820+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.820+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.820+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 gpu=GPU-96da7c4b1629ce4f parallel=1 available=16494198784 required="12.5 GiB"
time=2025-02-06T12:30:44.820+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.888+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.888+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.888+01:00 level=INFO source=server.go:100 msg="system memory" total="62.7 GiB" free="36.1 GiB" free_swap="1.0 GiB"
time=2025-02-06T12:30:44.888+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=rocm gpu_count=1 available="[15.4 GiB]"
time=2025-02-06T12:30:44.889+01:00 level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.7 GiB" before.free="36.1 GiB" before.free_swap="1.0 GiB" now.total="62.7 GiB" now.free="36.1 GiB" now.free_swap="1.0 GiB"
initializing /usr/lib64/libcuda.so.565.77
dlsym: cuInit - 0x7f758d2d4cc0
dlsym: cuDriverGetVersion - 0x7f758d2d4ce0
dlsym: cuDeviceGetCount - 0x7f758d2d4d20
dlsym: cuDeviceGet - 0x7f758d2d4d00
dlsym: cuDeviceGetAttribute - 0x7f758d2d4e00
dlsym: cuDeviceGetUuid - 0x7f758d2d4d60
dlsym: cuDeviceGetName - 0x7f758d2d4d40
dlsym: cuCtxCreate_v3 - 0x7f758d2d4fe0
dlsym: cuMemGetInfo_v2 - 0x7f758d2d5760
dlsym: cuCtxDestroy - 0x7f758d3213a0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f26
CUDA driver version: 12.7
calling cuDeviceGetCount
device count 1
time=2025-02-06T12:30:44.959+01:00 level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-64fa45ff-fe00-d712-1796-ed74da57bfa7 name="NVIDIA GeForce GTX 970" overhead="0 B" before.total="3.9 GiB" before.free="3.9 GiB" now.total="3.9 GiB" now.free="3.9 GiB" now.used="66.5 MiB"
time=2025-02-06T12:30:44.959+01:00 level=DEBUG source=amd_linux.go:488 msg="updating rocm free memory" gpu=GPU-96da7c4b1629ce4f name=1002:73bf before="15.4 GiB" now="15.4 GiB"
releasing cuda driver library
time=2025-02-06T12:30:44.959+01:00 level=INFO source=memory.go:356 msg="offload to rocm" layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[15.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="12.5 GiB" memory.required.partial="12.5 GiB" memory.required.kv="1.4 GiB" memory.required.allocations="[12.5 GiB]" memory.weights.total="10.1 GiB" memory.weights.repeating="9.5 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="1.2 GiB" memory.graph.partial="1.5 GiB"
time=2025-02-06T12:30:44.959+01:00 level=INFO source=server.go:185 msg="enabling flash attention"
time=2025-02-06T12:30:44.959+01:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
time=2025-02-06T12:30:44.960+01:00 level=DEBUG source=server.go:310 msg="adding gpu dependency paths" paths=[/usr/lib64]
time=2025-02-06T12:30:44.960+01:00 level=INFO source=server.go:381 msg="starting llama server" cmd="/opt/ollama/bin/ollama runner --model /ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 --ctx-size 15000 --batch-size 512 --n-gpu-layers 49 --verbose --threads 6 --flash-attn --kv-cache-type q8_0 --parallel 1 --port 46243"
time=2025-02-06T12:30:44.960+01:00 level=DEBUG source=server.go:399 msg=subprocess environment="[PATH=/opt/ollama/bin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin:/usr/lib/llvm/19/bin:/usr/lib/llvm/18/bin:/etc/eselect/wine/bin:/opt/cuda/bin LD_LIBRARY_PATH=/usr/lib64:/opt/ollama/bin ROCR_VISIBLE_DEVICES=GPU-96da7c4b1629ce4f]"
time=2025-02-06T12:30:44.960+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-06T12:30:44.961+01:00 level=INFO source=server.go:558 msg="waiting for llama runner to start responding"
time=2025-02-06T12:30:44.961+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server error"
time=2025-02-06T12:30:44.982+01:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-02-06T12:30:44.982+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=6
time=2025-02-06T12:30:44.982+01:00 level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/usr/lib64
time=2025-02-06T12:30:44.982+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:46243"
time=2025-02-06T12:30:45.119+01:00 level=DEBUG source=ggml.go:84 msg="ggml backend load all from path" path=/opt/ollama/bin
llama_model_loader: loaded meta data with 43 key-value pairs and 579 tensors from /ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 14B
llama_model_loader: - kv   3:                       general.organization str              = Qwen
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 14B
llama_model_loader: - kv   6:                   general.base_model.count u32              = 3
llama_model_loader: - kv   7:                  general.base_model.0.name str              = Qwamma 14b Merge v1
llama_model_loader: - kv   8:               general.base_model.0.version str              = v1
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Chargoddard
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/chargoddard/qw...
llama_model_loader: - kv  11:                  general.base_model.1.name str              = Qwen2.5 14B Instruct_arcee Qwen2 14B ...
llama_model_loader: - kv  12:               general.base_model.1.version str              = v0.2
llama_model_loader: - kv  13:          general.base_model.1.organization str              = Arcee Train
llama_model_loader: - kv  14:              general.base_model.1.repo_url str              = https://huggingface.co/arcee-train/Qw...
llama_model_loader: - kv  15:                  general.base_model.2.name str              = Qwen2.5 14B
llama_model_loader: - kv  16:          general.base_model.2.organization str              = Qwen
llama_model_loader: - kv  17:              general.base_model.2.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-14B
llama_model_loader: - kv  18:                               general.tags arr[str,2]       = ["mergekit", "merge"]
llama_model_loader: - kv  19:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  20:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv  21:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  22:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  23:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  24:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  25:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  26:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  27:                          general.file_type u32              = 17
llama_model_loader: - kv  28:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  29:                         tokenizer.ggml.pre str              = qwen2
time=2025-02-06T12:30:45.213+01:00 level=INFO source=server.go:592 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  32:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  33:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  34:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  35:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  36:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  37:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = /models_out/SuperNova-14B-GGUF/SuperN...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav3.txt
llama_model_loader: - kv  41:             quantize.imatrix.entries_count i32              = 336
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count i32              = 128
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q5_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG
llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG
llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG
llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG
llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG
llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_layer          = 48
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 5
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 13824
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 14B
llm_load_print_meta: model ftype      = Q5_K - Medium
llm_load_print_meta: model params     = 14.77 B
llm_load_print_meta: model size       = 9.78 GiB (5.69 BPW) 
llm_load_print_meta: general.name     = Qwen2.5 14B
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors:   CPU_Mapped model buffer size = 10016.35 MiB
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 15104
llama_new_context_with_model: n_ctx_per_seq = 15104
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 1
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (15104) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 15104, offload = 1, type_k = 'q8_0', type_v = 'q8_0', n_layer = 48, can_shift = 1
llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 11: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024
time=2025-02-06T12:30:46.971+01:00 level=DEBUG source=server.go:603 msg="model load progress 1.00"
time=2025-02-06T12:30:47.221+01:00 level=DEBUG source=server.go:606 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:        CPU KV buffer size =  1504.50 MiB
llama_new_context_with_model: KV self size  = 1504.50 MiB, K (q8_0):  752.25 MiB, V (q8_0):  752.25 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.60 MiB
llama_new_context_with_model:        CPU compute buffer size =   317.00 MiB
llama_new_context_with_model: graph nodes  = 1495
llama_new_context_with_model: graph splits = 1
time=2025-02-06T12:30:47.974+01:00 level=INFO source=server.go:597 msg="llama runner started in 3.01 seconds"
time=2025-02-06T12:30:47.974+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862
[GIN] 2025/02/06 - 12:30:47 | 200 |  3.733069514s |       127.0.0.1 | POST     "/api/generate"
time=2025-02-06T12:30:47.974+01:00 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2025-02-06T12:30:47.975+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 duration=5m0s
time=2025-02-06T12:30:47.975+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 refCount=0

OS

Linux

GPU

Nvidia, AMD

CPU

AMD

Ollama version

0.5.8-rc7

vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 5120 llm_load_print_meta: n_layer = 48 llm_load_print_meta: n_head = 40 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 5 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 13824 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 2 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 14B llm_load_print_meta: model ftype = Q5_K - Medium llm_load_print_meta: model params = 14.77 B llm_load_print_meta: model size = 9.78 GiB (5.69 BPW) llm_load_print_meta: general.name = Qwen2.5 14B llm_load_print_meta: BOS token = 151643 '<|endoftext|>' llm_load_print_meta: EOS token = 151645 '<|im_end|>' llm_load_print_meta: EOT token = 151645 '<|im_end|>' llm_load_print_meta: PAD token = 151643 '<|endoftext|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' 
llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' llm_load_print_meta: EOG token = 151643 '<|endoftext|>' llm_load_print_meta: EOG token = 151645 '<|im_end|>' llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' llm_load_print_meta: EOG token = 151663 '<|repo_name|>' llm_load_print_meta: EOG token = 151664 '<|file_sep|>' llm_load_print_meta: max token length = 256 llm_load_tensors: CPU_Mapped model buffer size = 10016.35 MiB llama_new_context_with_model: n_seq_max = 1 llama_new_context_with_model: n_ctx = 15104 llama_new_context_with_model: n_ctx_per_seq = 15104 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 1 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (15104) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 15104, offload = 1, type_k = 'q8_0', type_v = 'q8_0', n_layer = 48, can_shift = 1 llama_kv_cache_init: layer 0: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 1: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 2: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 3: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 4: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 5: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 6: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 7: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 8: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 9: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 10: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 11: n_embd_k_gqa = 
1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 12: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 13: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 14: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 15: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 16: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 17: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 18: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 19: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 20: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 21: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 22: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 23: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 24: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 25: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 26: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 27: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 28: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 29: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 30: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 31: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 32: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 33: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 34: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 35: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 36: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 37: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 38: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 
39: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 40: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 41: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 42: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 43: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 44: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 45: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 46: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 llama_kv_cache_init: layer 47: n_embd_k_gqa = 1024, n_embd_v_gqa = 1024 time=2025-02-06T12:30:46.971+01:00 level=DEBUG source=server.go:603 msg="model load progress 1.00" time=2025-02-06T12:30:47.221+01:00 level=DEBUG source=server.go:606 msg="model load completed, waiting for server to become available" status="llm server loading model" llama_kv_cache_init: CPU KV buffer size = 1504.50 MiB llama_new_context_with_model: KV self size = 1504.50 MiB, K (q8_0): 752.25 MiB, V (q8_0): 752.25 MiB llama_new_context_with_model: CPU output buffer size = 0.60 MiB llama_new_context_with_model: CPU compute buffer size = 317.00 MiB llama_new_context_with_model: graph nodes = 1495 llama_new_context_with_model: graph splits = 1 time=2025-02-06T12:30:47.974+01:00 level=INFO source=server.go:597 msg="llama runner started in 3.01 seconds" time=2025-02-06T12:30:47.974+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 [GIN] 2025/02/06 - 12:30:47 | 200 | 3.733069514s | 127.0.0.1 | POST "/api/generate" time=2025-02-06T12:30:47.974+01:00 level=DEBUG source=sched.go:466 msg="context for request finished" time=2025-02-06T12:30:47.975+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 duration=5m0s 
time=2025-02-06T12:30:47.975+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/ollama/blobs/sha256-bbfb685133c274407d565c65b1ca806eb1593482b1c9d8524596797b24123862 refCount=0 ``` </details> ### OS Linux ### GPU Nvidia, AMD ### CPU AMD ### Ollama version 0.5.8-rc7
GiteaMirror added the build, amd, bug labels 2026-04-12 17:04:48 -05:00

@rick-github commented on GitHub (Feb 6, 2025):

```
time=2025-02-06T12:30:44.959+01:00 level=DEBUG source=server.go:262 msg="compatible gpu libraries" compatible=[]
```

No runners found. What's the result of `find /opt/ollama/`?


@ProjectMoon commented on GitHub (Feb 6, 2025):

In both scenarios, ollama uses CPU instead of GPU. untar.txt is the directory structure after just un-tarring the linux and linux-rocm .tgz files; merged.txt is after `cp -r`ing all of the separate library directories into one place.

[untar.txt](https://github.com/user-attachments/files/18690454/untar.txt)
[merged.txt](https://github.com/user-attachments/files/18690453/merged.txt)

Edit:

My installation process has always been to download the linux and linux-rocm .tgz files from GitHub, untar them into /opt/ollama, and restart the service.
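After untarring, a quick sanity check is to confirm the ROCm/HIP libraries actually landed on disk, since an install with no compatible GPU libraries silently falls back to CPU. This is a hypothetical sketch (the `lib/ollama` layout and library names are stand-ins; adjust the path for the `dist/linux/amd64` layout of rc7), demonstrated against a mock directory so it runs anywhere:

```shell
# Mock install tree standing in for /opt/ollama (hypothetical contents).
dir=$(mktemp -d)
mkdir -p "$dir/lib/ollama"
touch "$dir/lib/ollama/libggml-hip.so" "$dir/lib/ollama/libhipblas.so.2"

# On a real install, point find at /opt/ollama instead of "$dir".
# Zero matches here would explain a CPU-only run.
count=$(find "$dir/lib/ollama" \( -name 'libggml-hip*' -o -name 'libhip*' -o -name 'librocm*' \) | wc -l)
echo "rocm-related libs found: $count"
```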


@rick-github commented on GitHub (Feb 6, 2025):

The bundle for 0.5.8 appears to be missing a bunch of libraries that are in the 0.5.7 bundle.

```diff
8,15c6
< ./lib/ollama/libdrm_amdgpu.so.1
< ./lib/ollama/libdrm_amdgpu.so.1.0.0
< ./lib/ollama/libdrm.so.2
< ./lib/ollama/libdrm.so.2.4.0
< ./lib/ollama/libelf-0.176.so
< ./lib/ollama/libelf.so.1
< ./lib/ollama/libhipblaslt.so.0
< ./lib/ollama/libhipblaslt.so.0.7.60102
---
> ./lib/ollama/libggml-hip.so
18,19d8
< ./lib/ollama/libhsa-runtime64.so.1
< ./lib/ollama/libhsa-runtime64.so.1.13.60102
24,31d12
< ./lib/ollama/librocprofiler-register.so.0
< ./lib/ollama/librocprofiler-register.so.0.3.0
< ./lib/ollama/librocsolver.so.0
< ./lib/ollama/librocsolver.so.0.1.60102
< ./lib/ollama/librocsparse.so.1
< ./lib/ollama/librocsparse.so.1.0.60102
< ./lib/ollama/libtinfo.so.5
< ./lib/ollama/libtinfo.so.5.9
```

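A diff like the one above can be reproduced by listing both extracted bundles relative to their roots and diffing the sorted lists. A minimal sketch, using mock directories in place of the real 0.5.7 and 0.5.8 trees (names are illustrative stand-ins) so it runs anywhere:

```shell
# Hypothetical stand-ins for the extracted 0.5.7 (old) and 0.5.8 (new) bundles.
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/lib/ollama" "$new/lib/ollama"
touch "$old/lib/ollama/libdrm.so.2" "$old/lib/ollama/librocsparse.so.1"
touch "$new/lib/ollama/libggml-hip.so"

# '<' lines exist only in the old bundle, '>' lines only in the new one.
(cd "$old" && find . -type f | sort) > old.txt
(cd "$new" && find . -type f | sort) > new.txt
diff old.txt new.txt || true   # diff exits non-zero when the lists differ
```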

@ProjectMoon commented on GitHub (Feb 6, 2025):

Well that would cause problems, I think. 😄


@rick-github commented on GitHub (Feb 6, 2025):

The libraries are not in the docker image either. There have been significant changes in the build procedure for 0.5.8. I don't use ROCm, so I don't know if the missing libraries are made up for by the addition of `libggml-hip.so`, but that's quite a few libs and several gigabytes that have been removed.

```
ollama/ollama   0.5.8-rc7-rocm     ac861eae2ac6   19 hours ago    5.38GB
ollama/ollama   0.5.7-rocm         4b1f3ce64bfb   2 weeks ago     8.28GB
```

@mattcaron commented on GitHub (Feb 6, 2025):

I can confirm this in 0.5.8-rc7 as well, on a Radeon 6900XT.

My install process is:

```
cd ~/ai/ollama
mkdir 0.5.8
cd 0.5.8
tar -xf ~/download/ollama-linux-amd64.tgz
tar -xf ~/download/ollama-linux-amd64-rocm.tgz
cd ..
rm current && ln -s 0.5.8 current
```

And then my scripts know where to find everything because they look in current.

There's a minor issue here in that the directory structure changed and now everything lives under `dist/linux/amd64`, whereas before it did not. If this is intended, then fine; I can adjust my symlink trickery so things can find it.

However, when I run `ollama` manually, jobs still get scheduled on the CPU and run painfully slowly.

Thanks in advance.
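One way to tell a real GPU load from a silent CPU fallback is to check what the runner actually logged: a ROCm load prints an "offloaded N/M layers to GPU" line, while the CPU path shows only CPU-mapped buffers, as in the log above. A minimal sketch, run against a mock log file so it is self-contained (the file name and grep pattern are illustrative, not an official interface):

```shell
# Mock server log standing in for the real one; this line is taken from the
# CPU-only load shown earlier in this issue.
cat > server.log <<'EOF'
llm_load_tensors: CPU_Mapped model buffer size = 10016.35 MiB
EOF

if grep -q 'offloaded .* layers to GPU' server.log; then
  echo "model offloaded to GPU"
else
  echo "model running on CPU"
fi
```

On a live install, `ollama ps` should also report whether a loaded model is resident on GPU or CPU.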


@mxyng commented on GitHub (Feb 6, 2025):

this should be fixed by #8896 and #8899


@ProjectMoon commented on GitHub (Feb 7, 2025):

Looks like it works now. Thanks for the quick fix.


@mattcaron commented on GitHub (Feb 10, 2025):

Can confirm fixed for me in 0.5.8-rc12 as well.

Also, the tarball hierarchy is back to matching 0.5.7, which makes me happy.

Thanks for the quick fix.

Reference: github-starred/ollama#5759