[GH-ISSUE #7729] GPU radeon not used #4933

Closed
opened 2026-04-12 15:59:41 -05:00 by GiteaMirror · 11 comments

Originally created by @alphaonex86 on GitHub (Nov 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7729

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

GPU not used:

```
2024/11/18 20:38:09 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/mnt/data/read/LLM/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-18T20:38:09.214-04:00 level=INFO source=images.go:754 msg="total blobs: 69"
time=2024-11-18T20:38:09.372-04:00 level=INFO source=images.go:761 msg="total unused blobs removed: 0"
time=2024-11-18T20:38:09.373-04:00 level=INFO source=routes.go:1205 msg="Listening on 127.0.0.1:11434 (version 0.3.14)"
time=2024-11-18T20:38:09.373-04:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1129448929/runners
time=2024-11-18T20:38:15.122-04:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx]"
time=2024-11-18T20:38:15.122-04:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-18T20:38:15.130-04:00 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-11-18T20:38:15.133-04:00 level=WARN source=amd_linux.go:440 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-11-18T20:38:15.133-04:00 level=WARN source=amd_linux.go:345 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2024-11-18T20:38:15.133-04:00 level=INFO source=gpu.go:384 msg="no compatible GPUs were discovered"
time=2024-11-18T20:38:15.133-04:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="62.0 GiB" available="51.7 GiB"
```
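Two things stand out in this log: the driver version file /sys/module/amdgpu/version is absent (a warning only), and, fatally, no compatible ROCm v6 library was found, so Ollama fell back to CPU. The module state can be checked by hand, assuming a standard sysfs layout:

```
# confirm the amdgpu kernel module is loaded at all
lsmod | grep amdgpu
```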

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.3.14

GiteaMirror added the install, needs more info, amd, bug labels 2026-04-12 15:59:42 -05:00

@dhiltgen commented on GitHub (Nov 19, 2024):

How did you install Ollama? The install script should install rocm automatically.

https://ollama.com/download/linux
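
For reference, the one-line installer shown on that page is typically run as:

```
# downloads the Linux bundle, adds the ROCm components on AMD systems,
# and sets up the systemd service
curl -fsSL https://ollama.com/install.sh | sh
```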


@alphaonex86 commented on GitHub (Nov 19, 2024):

Via https://ollama.com/download/ollama-linux-amd64-rocm.tgz or https://ollama.com/download/ollama-linux-amd64.tgz. The installer downloads the same file.
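
For comparison, a manual install along the lines of the docs would extract both bundles into the same prefix (a sketch, assuming /usr as the target):

```
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
# the ROCm bundle must land in the same prefix so lib/ollama/rocm sits next to bin/ollama
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```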


@Asherathe commented on GitHub (Nov 24, 2024):

I have the same issue on openSUSE Tumbleweed. ROCm is installed but isn't detected.


@dhiltgen commented on GitHub (Nov 24, 2024):

@alphaonex86 @Asherathe please run the following and share the results:

```
ls $(dirname $(which ollama))/../lib/ollama
```

It sounds like you didn't run the install script, but installed manually or through some other means. As long as you extracted both tar bundles in the same location, then wherever `bin/ollama` wound up, there should be a `lib/ollama/<rocm>`.


@Asherathe commented on GitHub (Nov 25, 2024):

Mine is in /home/user/.config. The binary and /lib/ollama/ are in the same folder.


@dhiltgen commented on GitHub (Dec 9, 2024):

> The binary and /lib/ollama/ are in the same folder

I think that is the problem. I believe you have:

```
./ollama
./lib/ollama/...
```

What you need is:

```
./bin/ollama
./lib/ollama/...
```

Then the relative paths will resolve.
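
A minimal sketch of the fix, assuming the binary and lib/ currently share one folder:

```
# before: ./ollama and ./lib/ollama/... side by side
mkdir -p bin
mv ollama bin/ollama
# after: ./bin/ollama can resolve ../lib/ollama relative to itself
```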


@Asherathe commented on GitHub (Dec 10, 2024):

Yes, that was it! Thanks!


@mdsadiqueinam commented on GitHub (Feb 4, 2025):

> @alphaonex86 @Asherathe please run the following and share the results:
>
> ```
> ls $(dirname $(which ollama))/../lib/ollama
> ```
>
> It sounds like you didn't run the install script, but installed manually or through some other means. As long as you extracted both tar bundles in the same location, then wherever `bin/ollama` wound up, there should be a `lib/ollama/<rocm>`

Hello 👋 @dhiltgen,

I got this output:

```
libamd_comgr.so.2          libcudart.so.12                 libnuma.so.1
libamd_comgr.so.2.7.60102  libcudart.so.12.4.127           libnuma.so.1.0.0
libamdhip64.so.6           libdrm_amdgpu.so.1              librocblas.so.4
libamdhip64.so.6.1.60102   libdrm_amdgpu.so.1.0.0          librocblas.so.4.1.60102
libcublasLt.so.11          libdrm.so.2                     librocprofiler-register.so.0
libcublasLt.so.11.5.1.109  libdrm.so.2.4.0                 librocprofiler-register.so.0.3.0
libcublasLt.so.12          libelf-0.176.so                 librocsolver.so.0
libcublasLt.so.12.4.5.8    libelf.so.1                     librocsolver.so.0.1.60102
libcublas.so.11            libhipblaslt.so.0               librocsparse.so.1
libcublas.so.11.5.1.109    libhipblaslt.so.0.7.60102       librocsparse.so.1.0.60102
libcublas.so.12            libhipblas.so.2                 libtinfo.so.5
libcublas.so.12.4.5.8      libhipblas.so.2.1.60102         libtinfo.so.5.9
libcudart.so.11.0          libhsa-runtime64.so.1           rocblas
libcudart.so.11.3.109      libhsa-runtime64.so.1.13.60102  runners
```

What is the cause of the GPU not being used?

Install output:

```
>>> Cleaning up old version at /usr/local/lib/ollama
[sudo] password for sadique: 
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> Downloading Linux ROCm amd64 bundle
######################################################################## 100.0%
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
>>> AMD GPU ready.
```

@dhiltgen commented on GitHub (Feb 22, 2025):

@mdsadiqueinam please share your server logs so we can see why it didn't run on the GPU

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
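
On a systemd-based install, the server log the troubleshooting guide refers to can usually be pulled with journalctl:

```
# follow the Ollama service log on systemd systems
journalctl -u ollama -f
```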


@mdsadiqueinam commented on GitHub (Mar 6, 2025):

@dhiltgen Sorry for the late reply; here is the log:

```
Mar 06 06:11:47 sadique-Victus systemd[1]: Stopping ollama.service - Ollama Service...
Mar 06 06:11:47 sadique-Victus systemd[1]: ollama.service: Deactivated successfully.
Mar 06 06:11:47 sadique-Victus systemd[1]: Stopped ollama.service - Ollama Service.
Mar 06 06:11:47 sadique-Victus systemd[1]: Started ollama.service - Ollama Service.
Mar 06 06:11:47 sadique-Victus ollama[44217]: 2025/03/06 06:11:47 routes.go:1215: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.762+05:30 level=INFO source=images.go:432 msg="total blobs: 15"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.763+05:30 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.764+05:30 level=INFO source=routes.go:1277 msg="Listening on [::]:11434 (version 0.5.13)"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.765+05:30 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.783+05:30 level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.787+05:30 level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=0 gpu_type=gfx1012
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.788+05:30 level=INFO source=amd_linux.go:296 msg="unsupported Radeon iGPU detected skipping" id=1 total="512.0 MiB"
Mar 06 06:11:47 sadique-Victus ollama[44217]: time=2025-03-06T06:11:47.788+05:30 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1012 driver=0.0 name=1002:7340 total="4.0 GiB" available="4.0 GiB"
Mar 06 06:13:56 sadique-Victus ollama[44217]: [GIN] 2025/03/06 - 06:13:56 | 200 |     623.202µs |       127.0.0.1 | HEAD     "/"
Mar 06 06:13:56 sadique-Victus ollama[44217]: [GIN] 2025/03/06 - 06:13:56 | 200 |    1.914992ms |       127.0.0.1 | GET      "/api/tags"
Mar 06 06:14:17 sadique-Victus ollama[44217]: [GIN] 2025/03/06 - 06:14:17 | 200 |      53.104µs |       127.0.0.1 | HEAD     "/"
Mar 06 06:14:17 sadique-Victus ollama[44217]: [GIN] 2025/03/06 - 06:14:17 | 200 |   18.826697ms |       127.0.0.1 | POST     "/api/show"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-d040cc18521592f70c199396aeaa44cdc40224079156dc09d4283d745d9dc5fd gpu=0 parallel=4 available=4266512384 required="3.0 GiB"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=INFO source=server.go:97 msg="system memory" total="15.0 GiB" free="11.8 GiB" free_swap="3.1 GiB"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.key_length default=128
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.265+05:30 level=WARN source=ggml.go:136 msg="key not found" key=llama.attention.value_length default=128
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.266+05:30 level=INFO source=server.go:130 msg=offload library=rocm layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[4.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.0 GiB" memory.required.partial="3.0 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[3.0 GiB]" memory.weights.total="2.1 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="51.7 MiB" memory.graph.full="288.0 MiB" memory.graph.partial="346.3 MiB"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.266+05:30 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-d040cc18521592f70c199396aeaa44cdc40224079156dc09d4283d745d9dc5fd --ctx-size 8192 --batch-size 512 --n-gpu-layers 25 --threads 6 --parallel 4 --port 44695"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.267+05:30 level=INFO source=sched.go:450 msg="loaded runners" count=1
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.267+05:30 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.268+05:30 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Mar 06 06:14:17 sadique-Victus ollama[44217]: time=2025-03-06T06:14:17.292+05:30 level=INFO source=runner.go:931 msg="starting go runner"
Mar 06 06:14:19 sadique-Victus ollama[44217]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Mar 06 06:14:19 sadique-Victus ollama[44217]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Mar 06 06:14:22 sadique-Victus ollama[44217]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Mar 06 06:14:22 sadique-Victus ollama[44217]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Mar 06 06:14:22 sadique-Victus ollama[44217]: ggml_cuda_init: found 1 ROCm devices:
Mar 06 06:14:22 sadique-Victus ollama[44217]:   Device 0: AMD Radeon Graphics, gfx1012:xnack- (0x1012), VMM: no, Wave Size: 32
Mar 06 06:14:22 sadique-Victus ollama[44217]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
Mar 06 06:14:22 sadique-Victus ollama[44217]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Mar 06 06:14:22 sadique-Victus ollama[44217]: time=2025-03-06T06:14:22.874+05:30 level=INFO source=runner.go:934 msg=system info="CPU : LLAMAFILE = 1 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=6
Mar 06 06:14:22 sadique-Victus ollama[44217]: time=2025-03-06T06:14:22.875+05:30 level=INFO source=runner.go:992 msg="Server listening on 127.0.0.1:44695"
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) - 4060 MiB free
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: loaded meta data with 26 key-value pairs and 219 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-d040cc18521592f70c199396aeaa44cdc40224079156dc09d4283d745d9dc5fd (version GGUF V3 (latest))
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   1:                               general.name str              = deepseek-ai
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   4:                          llama.block_count u32              = 24
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5504
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 16
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 16
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 100000.000000
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  11:                    llama.rope.scaling.type str              = linear
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  12:                  llama.rope.scaling.factor f32              = 4.000000
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  13:                          general.file_type u32              = 2
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32256]   = ["!", "\"", "#", "$", "%", "&", "'", ...
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32256]   = [0.000000, 0.000000, 0.000000, 0.0000...
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32256]   = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,31757]   = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 32013
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 32021
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 32014
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - kv  25:               general.quantization_version u32              = 2
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - type  f32:   49 tensors
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - type q4_0:  169 tensors
Mar 06 06:14:22 sadique-Victus ollama[44217]: llama_model_loader: - type q6_K:    1 tensors
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: file format = GGUF V3 (latest)
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: file type   = Q4_0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: file size   = 738.88 MiB (4.60 BPW)
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: missing or unrecognized pre-tokenizer type, using: 'default'
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: control-looking token:  32015 '<|fim▁hole|>' was not control-type; this is probably a bug in the model. its type will be overridden
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: control-looking token:  32017 '<|fim▁end|>' was not control-type; this is probably a bug in the model. its type will be overridden
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: control-looking token:  32016 '<|fim▁begin|>' was not control-type; this is probably a bug in the model. its type will be overridden
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: special tokens cache size = 256
Mar 06 06:14:22 sadique-Victus ollama[44217]: load: token to piece cache size = 0.1792 MB
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: arch             = llama
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: vocab_only       = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_ctx_train      = 16384
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_embd           = 2048
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_layer          = 24
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_head           = 16
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_head_kv        = 16
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_rot            = 128
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_swa            = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_embd_head_k    = 128
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_embd_head_v    = 128
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_gqa            = 1
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_embd_k_gqa     = 2048
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_embd_v_gqa     = 2048
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: f_norm_eps       = 0.0e+00
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: f_norm_rms_eps   = 1.0e-06
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: f_clamp_kqv      = 0.0e+00
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: f_max_alibi_bias = 0.0e+00
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: f_logit_scale    = 0.0e+00
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_ff             = 5504
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_expert         = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_expert_used    = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: causal attn      = 1
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: pooling type     = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: rope type        = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: rope scaling     = linear
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: freq_base_train  = 100000.0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: freq_scale_train = 0.25
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_ctx_orig_yarn  = 16384
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: rope_finetuned   = unknown
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: ssm_d_conv       = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: ssm_d_inner      = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: ssm_d_state      = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: ssm_dt_rank      = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: ssm_dt_b_c_rms   = 0
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: model type       = ?B
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: model params     = 1.35 B
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: general.name     = deepseek-ai
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: vocab type       = BPE
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_vocab          = 32256
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: n_merges         = 31757
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: BOS token        = 32013 '<|begin▁of▁sentence|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: EOS token        = 32021 '<|EOT|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: EOT token        = 32014 '<|end▁of▁sentence|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: PAD token        = 32014 '<|end▁of▁sentence|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: LF token         = 185 'Ċ'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: FIM PRE token    = 32016 '<|fim▁begin|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: FIM SUF token    = 32015 '<|fim▁hole|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: FIM MID token    = 32017 '<|fim▁end|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: EOG token        = 32014 '<|end▁of▁sentence|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: EOG token        = 32021 '<|EOT|>'
Mar 06 06:14:22 sadique-Victus ollama[44217]: print_info: max token length = 128
Mar 06 06:14:22 sadique-Victus ollama[44217]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Mar 06 06:14:23 sadique-Victus ollama[44217]: time=2025-03-06T06:14:23.050+05:30 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Mar 06 06:14:23 sadique-Victus ollama[44217]: load_tensors: offloading 24 repeating layers to GPU
Mar 06 06:14:23 sadique-Victus ollama[44217]: load_tensors: offloading output layer to GPU
Mar 06 06:14:23 sadique-Victus ollama[44217]: load_tensors: offloaded 25/25 layers to GPU
Mar 06 06:14:23 sadique-Victus ollama[44217]: load_tensors:        ROCm0 model buffer size =   703.44 MiB
Mar 06 06:14:23 sadique-Victus ollama[44217]: load_tensors:   CPU_Mapped model buffer size =    35.44 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_seq_max     = 4
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_ctx         = 8192
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_ctx_per_seq = 2048
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_batch       = 2048
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_ubatch      = 512
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: flash_attn    = 0
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: freq_base     = 100000.0
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: freq_scale    = 0.25
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (16384) -- the full capacity of the model will not be utilized
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_kv_cache_init:      ROCm0 KV buffer size =  1536.00 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model:  ROCm_Host  output buffer size =     0.52 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model:      ROCm0 compute buffer size =   288.00 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model:  ROCm_Host compute buffer size =    20.01 MiB
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: graph nodes  = 774
Mar 06 06:14:24 sadique-Victus ollama[44217]: llama_init_from_model: graph splits = 2
Mar 06 06:14:24 sadique-Victus ollama[44217]: time=2025-03-06T06:14:24.307+05:30 level=INFO source=server.go:596 msg="llama runner started in 7.04 seconds"
Mar 06 06:14:24 sadique-Victus ollama[44217]: [GIN] 2025/03/06 - 06:14:24 | 200 |  7.067325163s |       127.0.0.1 | POST     "/api/generate"
```
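
Worth noting: this log shows the model fully offloaded ("load_tensors: offloaded 25/25 layers to GPU"), so the GPU was in fact used for this run. A quick runtime check, assuming a reasonably recent Ollama:

```
# while a model is loaded, the PROCESSOR column should report GPU usage
ollama ps
```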


@mdsadiqueinam commented on GitHub (Mar 6, 2025):

@dhiltgen Thank you, that solved the problem: the driver was missing.
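
For anyone hitting the same errors, the two files the logs complained about come from the amdgpu driver stack; a quick sanity check after installing the distribution's driver packages:

```
# both were reported missing in the logs above; they should exist once the driver is in place
cat /sys/module/amdgpu/version
ls -l /opt/amdgpu/share/libdrm/amdgpu.ids
```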
