[GH-ISSUE #8735] ollama does not utilize HBM3 memory on MI300A #52178

Open
opened 2026-04-28 22:25:37 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @garrettbyrd on GitHub (Jan 31, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8735

What is the issue?

System Specs (GPUs are gfx942):

          __wgliliiligw_,              garrett@nicholson
       _williiiiiiliilililw,           -----------------
     _%iiiiiilililiiiiiiiiiii_         OS: Rocky Linux 9.5 (Blue Onyx) x86_64
   .Qliiiililiiiiiiililililiilm.       Host: Super Server (0123456789)
  _iiiiiliiiiiililiiiiiiiiiiliil,      Kernel: Linux 5.14.0-503.19.1.el9_5.x86_64
 .lililiiilililiiiilililililiiiii,     Uptime: 19 days, 22 hours, 6 mins
_liiiiiiliiiiiiiliiiiiF{iiiiiilili,    Packages: 2117 (rpm)
jliililiiilililiiili@`  ~ililiiiiiL    Shell: bash 5.1.8
iiiliiiiliiiiiiili>`      ~liililii    Display (Virtual-2): 1600x1200 @ 60 Hz
liliiiliiilililii`         -9liiiil    Cursor: Adwaita
iiiiiliiliiiiii~             "4lili    Terminal: slurmstepd: [339.interactive]
4ililiiiiilil~|      -w,       )4lf    CPU: 4 x AMD Instinct MI300A Accelerator (192) @ 3.70 GHz
-liiiiililiF'       _liig,       )'    GPU: ASPEED Technology, Inc. ASPEED Graphics Family
 )iiiliii@`       _QIililig,           Memory: 7.86 GiB / 501.75 GiB (2%)
  )iiii>`       .Qliliiiililw          Swap: 40.00 KiB / 4.00 GiB (0%)
   )<>~       .mliiiiiliiiiiil,        Disk (/): 96.09 GiB / 3.49 TiB (3%) - xfs
            _gllilililiililii~         
           giliiiiiiiiiiiiT`           
          -^~$ililili@~~'

Versions

gcc   12.4.0
cmake 3.31.2
go    1.23.4
rocm  6.1.2

This issue is up-to-date as of commit 2ef3c803a151a0a9b1776c9ebe6a7e86b3971660. I am not using the latest main (see https://github.com/ollama/ollama/issues/8730). At the time of posting, the latest release (https://github.com/ollama/ollama/releases/tag/v0.5.7) does not reflect the new build system (cmake) introduced in newer commits.

This (https://www.supermicro.com/en/products/system/gpu/4u/as%20-4145gh-tnmr) is the exact model of the server I am using.

This system is equipped with four AMD MI300A APUs. Each APU has 128GB of HBM3 memory.

Steps to reproduce

The error occurs whether I install manually or build from source.

Manual install:

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
sudo tar -xzf ollama-linux-amd64.tgz
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
sudo tar -xzf ollama-linux-amd64-rocm.tgz

Build from source

git clone https://github.com/ollama/ollama.git
cd ollama
make

Installing through either of these methods successfully produces the ollama binary with ROCm support.

Issue

Although the binary builds, ollama does not correctly detect my GPUs. I assume this is because of the unique APU architecture, and that they are considered integrated GPUs that use system memory.
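
As a quick sanity check that the devices are at least visible to the kernel, here is a minimal standalone Go sketch (my own code, not part of ollama) that walks the same /sys/class/kfd/kfd/topology/nodes/*/properties files the discovery code reads, per the logs below, and prints each node's gfx_target_version; nodes reporting 0 are CPU nodes, and the GPU dies report a non-zero value:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Print the gfx_target_version reported by every KFD topology node.
// These are the same sysfs files ollama's discover/amd_linux.go parses;
// a value of 0 means the node is a CPU, non-zero means a GPU/APU die.
func main() {
	nodes, err := filepath.Glob("/sys/class/kfd/kfd/topology/nodes/*/properties")
	if err != nil || len(nodes) == 0 {
		fmt.Fprintln(os.Stderr, "no KFD topology nodes found (is the amdgpu/kfd driver loaded?)")
		os.Exit(1)
	}
	for _, p := range nodes {
		f, err := os.Open(p)
		if err != nil {
			continue
		}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) == 2 && fields[0] == "gfx_target_version" {
				fmt.Printf("%s: gfx_target_version=%s\n", p, fields[1])
			}
		}
		f.Close()
	}
}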

Here is the raw output of a fresh install running ./ollama serve:

2025/01/31 14:14:56 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/garrett/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1,2,3 http_proxy: https_proxy: no_proxy:]"
time=2025-01-31T14:14:56.414-05:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-01-31T14:14:56.415-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-31T14:14:56.417-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-31T14:14:56.419-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx]"
time=2025-01-31T14:14:56.419-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-31T14:14:56.442-05:00 level=INFO source=amd_linux.go:297 msg="unsupported Radeon iGPU detected skipping" id=0 total="0 B"
time=2025-01-31T14:14:56.444-05:00 level=INFO source=amd_linux.go:297 msg="unsupported Radeon iGPU detected skipping" id=1 total="0 B"
time=2025-01-31T14:14:56.445-05:00 level=INFO source=amd_linux.go:297 msg="unsupported Radeon iGPU detected skipping" id=2 total="0 B"
time=2025-01-31T14:14:56.446-05:00 level=INFO source=amd_linux.go:297 msg="unsupported Radeon iGPU detected skipping" id=3 total="0 B"
time=2025-01-31T14:14:56.446-05:00 level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected"
time=2025-01-31T14:14:56.446-05:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-31T14:14:56.446-05:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="501.7 GiB" available="493.9 GiB"

As you can see, it "detects" all four GPUs, but considers them integrated GPUs and therefore skips them.

I have tried modifying discover/amd_linux.go by commenting out this block (https://github.com/ollama/ollama/blob/2ef3c803a151a0a9b1776c9ebe6a7e86b3971660/discover/amd_linux.go#L295), which assumes the device is an iGPU. (Also see this line (https://github.com/ollama/ollama/blob/39fd89308c0bbe26311db583cf9729f81ffa9a94/discover/gpu.go#L76) in discover/gpu.go; I might ask that this "TODO" gain some priority.)
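
For context, the guard I commented out boils down to something like the sketch below. This is a paraphrase written by me with placeholder names, not the actual ollama source; the point is that a card whose mem_info_vram_total reads 0 is treated as an unsupported integrated GPU and skipped, which is exactly what happens to the MI300A:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// Paraphrase of the iGPU guard in discover/amd_linux.go (placeholder names,
// not the real identifiers): a card reporting 0 bytes of dedicated VRAM is
// treated as an unsupported integrated GPU and skipped.
func main() {
	cards, _ := filepath.Glob("/sys/class/drm/card*/device/mem_info_vram_total")
	for id, path := range cards {
		raw, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		totalMemory, _ := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
		if totalMemory == 0 {
			// This is the branch the MI300A hits.
			fmt.Printf("card %d: 0 B of VRAM reported -> skipped as unsupported iGPU\n", id)
			continue
		}
		fmt.Printf("card %d: %d bytes of VRAM -> treated as a discrete GPU\n", id, totalMemory)
	}
}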

Now, ./ollama serve outputs the following:

2025/01/30 16:03:01 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/garrett/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1 http_proxy: https_proxy: no_proxy:]"
time=2025-01-30T16:03:01.840-05:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-01-30T16:03:01.841-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-30T16:03:01.846-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7-6-g2ef3c80-dirty)"
time=2025-01-30T16:03:01.849-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm_avx]"
time=2025-01-30T16:03:01.853-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:52 msg=AMDGetGPUInfo
time=2025-01-30T16:03:01.862-05:00 level=WARN source=amd_linux.go:62 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:121 msg="gfx_target_version :"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=gfx_target_version
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=0
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:128 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:121 msg="gfx_target_version :"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=gfx_target_version
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=0
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:128 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/1/properties"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:121 msg="gfx_target_version :"
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=gfx_target_version
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:123 msg=90010
time=2025-01-30T16:03:01.862-05:00 level=INFO source=amd_linux.go:213 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties vendor=4098 device=29711 unique_id=4331889410177107612
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:249 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/2/properties drm=/sys/class/drm/card1/device
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:251 msg="-------- -------- --------"
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:252 msg=/sys/class/drm/card1/device/mem_info_vram_total
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:307 msg=68702699520
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:332 msg="amdgpu memory" gpu=0 total="64.0 GiB"
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_linux.go:333 msg="amdgpu memory" gpu=0 available="64.0 GiB"
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_common.go:41 msg=commonAMDValidateLibDir
time=2025-01-30T16:03:01.863-05:00 level=INFO source=amd_common.go:56 msg="---- hipPath: "
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:389 msg=gfx90a
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:402 msg="amdgpu is supported" gpu=GPU-3c1df6779be2069c gpu_type=gfx90a
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:121 msg="gfx_target_version :"
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:123 msg=gfx_target_version
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:123 msg=90010
time=2025-01-30T16:03:01.866-05:00 level=INFO source=amd_linux.go:213 msg="mapping amdgpu to drm sysfs nodes" amdgpu=/sys/class/kfd/kfd/topology/nodes/3/properties vendor=4098 device=29711 unique_id=118146762079177208
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:249 msg=matched amdgpu=/sys/class/kfd/kfd/topology/nodes/3/properties drm=/sys/class/drm/card2/device
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:251 msg="-------- -------- --------"
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:252 msg=/sys/class/drm/card2/device/mem_info_vram_total
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:307 msg=68702699520
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:332 msg="amdgpu memory" gpu=1 total="64.0 GiB"
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:333 msg="amdgpu memory" gpu=1 available="64.0 GiB"
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:389 msg=gfx90a
time=2025-01-30T16:03:01.867-05:00 level=INFO source=amd_linux.go:402 msg="amdgpu is supported" gpu=GPU-01a3bddaa91789f8 gpu_type=gfx90a
time=2025-01-30T16:03:01.867-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-3c1df6779be2069c library=rocm variant="" compute=gfx90a driver=0.0 name=1002:740f total="64.0 GiB" available="64.0 GiB"
time=2025-01-30T16:03:01.867-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-01a3bddaa91789f8 library=rocm variant="" compute=gfx90a driver=0.0 name=1002:740f total="64.0 GiB" available="64.0 GiB"
2025/01/31 15:44:39 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/garrett/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1,2,3 http_proxy: https_proxy: no_proxy:]"
time=2025-01-31T15:44:39.941-05:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-01-31T15:44:39.942-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-31T15:44:39.944-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7-6-g2ef3c80-dirty)"
time=2025-01-31T15:44:39.947-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 rocm_avx]"
time=2025-01-31T15:44:39.947-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-31T15:44:39.967-05:00 level=INFO source=amd_common.go:41 msg=commonAMDValidateLibDir
time=2025-01-31T15:44:39.967-05:00 level=INFO source=amd_common.go:56 msg="---- hipPath: /opt/rocm-6.2.4"
time=2025-01-31T15:44:39.969-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-b8e0078991ff9d4d gpu_type=gfx942
time=2025-01-31T15:44:39.972-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-eaa69826c47efb9c gpu_type=gfx942
time=2025-01-31T15:44:39.976-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-b88af96f2664a354 gpu_type=gfx942
time=2025-01-31T15:44:39.977-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-1aa50edc8434308c gpu_type=gfx942
time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b8e0078991ff9d4d library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-eaa69826c47efb9c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b88af96f2664a354 library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-1aa50edc8434308c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
2025/01/31 15:44:52 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/garrett/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1,2,3 http_proxy: https_proxy: no_proxy:]"
time=2025-01-31T15:44:52.286-05:00 level=INFO source=images.go:432 msg="total blobs: 6"
time=2025-01-31T15:44:52.287-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-31T15:44:52.289-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7-6-g2ef3c80-dirty)"
time=2025-01-31T15:44:52.290-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]"
time=2025-01-31T15:44:52.291-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-31T15:44:52.310-05:00 level=INFO source=amd_common.go:41 msg=commonAMDValidateLibDir
time=2025-01-31T15:44:52.310-05:00 level=INFO source=amd_common.go:56 msg="---- hipPath: /opt/rocm-6.2.4"
time=2025-01-31T15:44:52.312-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-b8e0078991ff9d4d gpu_type=gfx942
time=2025-01-31T15:44:52.314-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-eaa69826c47efb9c gpu_type=gfx942
time=2025-01-31T15:44:52.319-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-b88af96f2664a354 gpu_type=gfx942
time=2025-01-31T15:44:52.321-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-1aa50edc8434308c gpu_type=gfx942
time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b8e0078991ff9d4d library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-eaa69826c47efb9c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b88af96f2664a354 library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"
time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-1aa50edc8434308c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B"

As you can see, ollama now correctly identifies the GPUs as gfx942; however, it does not report the correct amount of memory. This line in discover/amd_linux.go reads /sys/class/drm/card1/device/mem_info_vram_total, which just contains "0". (Again, this system uses the same HBM3 memory for both device and host.)
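
The direction I would expect a fix to take is sketched below: when an APU reports 0 for mem_info_vram_total, fall back to the GTT / system memory pool instead. This is purely my own sketch, not ollama code; the mem_info_gtt_total file and the fallback policy are assumptions on my part.

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readBytes returns the integer contents of a sysfs file, or 0 on any error.
func readBytes(path string) uint64 {
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0
	}
	v, _ := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
	return v
}

func main() {
	// card1 is the DRM node referenced above for this device.
	const dev = "/sys/class/drm/card1/device"
	if vram := readBytes(dev + "/mem_info_vram_total"); vram > 0 {
		fmt.Printf("dedicated VRAM: %d bytes\n", vram)
		return
	}
	// APU case: no carve-out VRAM is reported, so the usable pool is the
	// GTT-mapped system memory (the shared HBM3 on the MI300A).
	gtt := readBytes(dev + "/mem_info_gtt_total")
	fmt.Printf("no VRAM reported; GTT/system pool: %d bytes\n", gtt)
}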

At this point, the server does run; however, it is not utilizing the GPUs. Here is the output when I run ./ollama run deepseek-r1 in a separate terminal. (I'll also note that throughput is << 1 token/second, i.e. very slow.)

[GIN] 2025/01/31 - 15:47:59 | 200 |     137.691µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/01/31 - 15:47:59 | 404 |    3.053047ms |       127.0.0.1 | POST     "/api/show"
time=2025-01-31T15:47:59.765-05:00 level=INFO source=download.go:175 msg="downloading 96c415656d37 in 16 292 MB part(s)"
time=2025-01-31T15:49:16.506-05:00 level=INFO source=download.go:175 msg="downloading 369ca498f347 in 1 387 B part(s)"
time=2025-01-31T15:49:17.671-05:00 level=INFO source=download.go:175 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)"
time=2025-01-31T15:49:18.883-05:00 level=INFO source=download.go:175 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-01-31T15:49:20.072-05:00 level=INFO source=download.go:175 msg="downloading 40fb844194b2 in 1 487 B part(s)"
[GIN] 2025/01/31 - 15:50:02 | 200 |          2m2s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/01/31 - 15:50:02 | 200 |   22.550986ms |       127.0.0.1 | POST     "/api/show"
time=2025-01-31T15:50:02.206-05:00 level=INFO source=server.go:104 msg="system memory" total="501.7 GiB" free="493.6 GiB" free_swap="4.0 GiB"
time=2025-01-31T15:50:02.210-05:00 level=INFO source=memory.go:356 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[0 B 0 B 0 B 0 B]" memory.gpu_overhead="0 B" memory.required.full="4.4 GiB" memory.required.partial="0 B" memory.required.kv="112.0 MiB" memory.required.allocations="[0 B 0 B 0 B 0 B]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.3 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="730.4 MiB" memory.graph.partial="730.4 MiB"
time=2025-01-31T15:50:02.211-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/garrett/projects/ai-playground/ollama-new/llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server runner --model /home/garrett/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --ctx-size 2048 --batch-size 512 --threads 96 --no-mmap --parallel 1 --port 34441"
time=2025-01-31T15:50:02.214-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-31T15:50:02.214-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-31T15:50:02.214-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-01-31T15:50:02.226-05:00 level=INFO source=runner.go:936 msg="starting go runner"
time=2025-01-31T15:50:02.228-05:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=96
time=2025-01-31T15:50:02.228-05:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:34441"
llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/garrett/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 7B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
llama_model_loader: - kv   4:                         general.size_label str              = 7B
llama_model_loader: - kv   5:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 3584
llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 18944
llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 28
llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 4
llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                          general.file_type u32              = 15
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type q4_K:  169 tensors
llama_model_loader: - type q6_K:   29 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3584
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 28
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 512
llm_load_print_meta: n_embd_v_gqa     = 512
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 18944
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 7.62 B
llm_load_print_meta: model size       = 4.36 GiB (4.91 BPW) 
llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 7B
llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors:          CPU model buffer size =  4460.45 MiB
time=2025-01-31T15:50:02.466-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1
llama_kv_cache_init:        CPU KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  112.00 MiB, K (f16):   56.00 MiB, V (f16):   56.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.59 MiB
llama_new_context_with_model:        CPU compute buffer size =   304.00 MiB
llama_new_context_with_model: graph nodes  = 986
llama_new_context_with_model: graph splits = 1
time=2025-01-31T15:50:03.974-05:00 level=INFO source=server.go:594 msg="llama runner started in 1.76 seconds"
[GIN] 2025/01/31 - 15:50:03 | 200 |  1.839687296s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/01/31 - 15:51:18 | 200 |         1m10s |       127.0.0.1 | POST     "/api/chat"

I see that MI300 is supported, so any help here would be nice.

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

ollama version is 0.5.7-6-g2ef3c80-dirty

gpu=GPU-b88af96f2664a354 gpu_type=gfx942 time=2025-01-31T15:44:39.977-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-1aa50edc8434308c gpu_type=gfx942 time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b8e0078991ff9d4d library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-eaa69826c47efb9c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b88af96f2664a354 library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:39.985-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-1aa50edc8434308c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" 2025/01/31 15:44:52 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/garrett/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:0,1,2,3 http_proxy: https_proxy: no_proxy:]" time=2025-01-31T15:44:52.286-05:00 level=INFO source=images.go:432 msg="total blobs: 6" time=2025-01-31T15:44:52.287-05:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0" [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. 
- using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers) [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers) [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers) [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers) [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers) [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers) [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers) [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers) [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers) [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers) [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers) [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers) [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers) [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers) [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers) [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers) [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers) [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers) [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers) [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers) [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) time=2025-01-31T15:44:52.289-05:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7-6-g2ef3c80-dirty)" time=2025-01-31T15:44:52.290-05:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]" time=2025-01-31T15:44:52.291-05:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-01-31T15:44:52.310-05:00 level=INFO source=amd_common.go:41 msg=commonAMDValidateLibDir time=2025-01-31T15:44:52.310-05:00 level=INFO source=amd_common.go:56 msg="---- hipPath: /opt/rocm-6.2.4" time=2025-01-31T15:44:52.312-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-b8e0078991ff9d4d gpu_type=gfx942 time=2025-01-31T15:44:52.314-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-eaa69826c47efb9c gpu_type=gfx942 time=2025-01-31T15:44:52.319-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" 
gpu=GPU-b88af96f2664a354 gpu_type=gfx942 time=2025-01-31T15:44:52.321-05:00 level=INFO source=amd_linux.go:388 msg="amdgpu is supported" gpu=GPU-1aa50edc8434308c gpu_type=gfx942 time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b8e0078991ff9d4d library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-eaa69826c47efb9c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-b88af96f2664a354 library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" time=2025-01-31T15:44:52.327-05:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-1aa50edc8434308c library=rocm variant="" compute=gfx942 driver=6.10 name=1002:74a0 total="0 B" available="0 B" ``` As you can see, now `ollama` correctly identifies the GPU as `gfx942`; however, it does not output the correct amount of memory. [This line](https://github.com/ollama/ollama/blob/2ef3c803a151a0a9b1776c9ebe6a7e86b3971660/discover/amd_linux.go#L218) in `discover/amd_linux.go` reads the file from `/sys/class/drm/card1/device/mem_info_vram_total`, which just contains "`0`". (Again, this sytem uses the same HBM3 memory on device and host.) At this point, the serve *does* run; however, it is not utilizing the GPUs. Here is the output when I run `./ollama run deepseek-r1` in a separate terminal. (I'll also note that the tokens/seconds is << 1, very slow). ``` [GIN] 2025/01/31 - 15:47:59 | 200 | 137.691µs | 127.0.0.1 | HEAD "/" [GIN] 2025/01/31 - 15:47:59 | 404 | 3.053047ms | 127.0.0.1 | POST "/api/show" time=2025-01-31T15:47:59.765-05:00 level=INFO source=download.go:175 msg="downloading 96c415656d37 in 16 292 MB part(s)" time=2025-01-31T15:49:16.506-05:00 level=INFO source=download.go:175 msg="downloading 369ca498f347 in 1 387 B part(s)" time=2025-01-31T15:49:17.671-05:00 level=INFO source=download.go:175 msg="downloading 6e4c38e1172f in 1 1.1 KB part(s)" time=2025-01-31T15:49:18.883-05:00 level=INFO source=download.go:175 msg="downloading f4d24e9138dd in 1 148 B part(s)" time=2025-01-31T15:49:20.072-05:00 level=INFO source=download.go:175 msg="downloading 40fb844194b2 in 1 487 B part(s)" [GIN] 2025/01/31 - 15:50:02 | 200 | 2m2s | 127.0.0.1 | POST "/api/pull" [GIN] 2025/01/31 - 15:50:02 | 200 | 22.550986ms | 127.0.0.1 | POST "/api/show" time=2025-01-31T15:50:02.206-05:00 level=INFO source=server.go:104 msg="system memory" total="501.7 GiB" free="493.6 GiB" free_swap="4.0 GiB" time=2025-01-31T15:50:02.210-05:00 level=INFO source=memory.go:356 msg="offload to rocm" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[0 B 0 B 0 B 0 B]" memory.gpu_overhead="0 B" memory.required.full="4.4 GiB" memory.required.partial="0 B" memory.required.kv="112.0 MiB" memory.required.allocations="[0 B 0 B 0 B 0 B]" memory.weights.total="3.8 GiB" memory.weights.repeating="3.3 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="730.4 MiB" memory.graph.partial="730.4 MiB" time=2025-01-31T15:50:02.211-05:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/home/garrett/projects/ai-playground/ollama-new/llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server runner --model 
/home/garrett/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --ctx-size 2048 --batch-size 512 --threads 96 --no-mmap --parallel 1 --port 34441" time=2025-01-31T15:50:02.214-05:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 time=2025-01-31T15:50:02.214-05:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding" time=2025-01-31T15:50:02.214-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" time=2025-01-31T15:50:02.226-05:00 level=INFO source=runner.go:936 msg="starting go runner" time=2025-01-31T15:50:02.228-05:00 level=INFO source=runner.go:937 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=96 time=2025-01-31T15:50:02.228-05:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:34441" llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/garrett/.ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen2 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen llama_model_loader: - kv 4: general.size_label str = 7B llama_model_loader: - kv 5: qwen2.block_count u32 = 28 llama_model_loader: - kv 6: qwen2.context_length u32 = 131072 llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584 llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944 llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28 llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4 llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000 llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 13: general.file_type u32 = 15 llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646 llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643 llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 25: general.quantization_version u32 = 2 llama_model_loader: - type f32: 141 tensors llama_model_loader: - type q4_K: 169 tensors llama_model_loader: - type q6_K: 29 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 22 llm_load_vocab: token to piece cache size = 0.9310 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = qwen2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 152064 llm_load_print_meta: n_merges = 151387 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 3584 llm_load_print_meta: n_layer = 28 llm_load_print_meta: n_head = 28 llm_load_print_meta: n_head_kv = 4 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 7 llm_load_print_meta: n_embd_k_gqa = 512 llm_load_print_meta: n_embd_v_gqa = 512 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-06 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 18944 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 2 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 7.62 B llm_load_print_meta: model size = 4.36 GiB (4.91 BPW) llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>' llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: EOT token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: FIM PRE token = 151659 '<|fim_prefix|>' llm_load_print_meta: FIM SUF token = 151661 '<|fim_suffix|>' llm_load_print_meta: FIM MID token = 151660 '<|fim_middle|>' llm_load_print_meta: FIM PAD token = 151662 '<|fim_pad|>' llm_load_print_meta: FIM REP token = 151663 '<|repo_name|>' llm_load_print_meta: FIM SEP token = 151664 '<|file_sep|>' llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>' llm_load_print_meta: EOG token = 151662 '<|fim_pad|>' llm_load_print_meta: EOG token = 151663 '<|repo_name|>' llm_load_print_meta: EOG token = 151664 '<|file_sep|>' llm_load_print_meta: max token length = 256 llm_load_tensors: CPU model buffer size = 4460.45 MiB time=2025-01-31T15:50:02.466-05:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llama_new_context_with_model: n_seq_max = 1 llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: 
flash_attn = 0 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1 llama_kv_cache_init: CPU KV buffer size = 112.00 MiB llama_new_context_with_model: KV self size = 112.00 MiB, K (f16): 56.00 MiB, V (f16): 56.00 MiB llama_new_context_with_model: CPU output buffer size = 0.59 MiB llama_new_context_with_model: CPU compute buffer size = 304.00 MiB llama_new_context_with_model: graph nodes = 986 llama_new_context_with_model: graph splits = 1 time=2025-01-31T15:50:03.974-05:00 level=INFO source=server.go:594 msg="llama runner started in 1.76 seconds" [GIN] 2025/01/31 - 15:50:03 | 200 | 1.839687296s | 127.0.0.1 | POST "/api/generate" [GIN] 2025/01/31 - 15:51:18 | 200 | 1m10s | 127.0.0.1 | POST "/api/chat" ```

I see that [`MI300` is supported](https://github.com/ollama/ollama/blob/39fd89308c0bbe26311db583cf9729f81ffa9a94/docs/gpu.md?plain=1#L56), so any help here would be nice.

### OS

Linux

### GPU

AMD

### CPU

AMD

### Ollama version

ollama version is 0.5.7-6-g2ef3c80-dirty
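
As a quick sanity check independent of ollama, the sysfs counter that `discover/amd_linux.go` reads can be inspected directly. This is only an illustrative command; the card index is the one reported in the logs above:

```
# On this MI300A node the per-device VRAM counter contains "0", because the HBM3
# is unified with host memory rather than exposed as dedicated VRAM.
cat /sys/class/drm/card1/device/mem_info_vram_total
```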
GiteaMirror added the bug label 2026-04-28 22:25:37 -05:00

@garrettbyrd commented on GitHub (Feb 3, 2025):

I'll also note that manually setting `totalMemory = ...` in `amd_linux.go` produces the `GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG` output [that seems to be prevalent in memory-related issues](https://github.com/ollama/ollama/issues?q=is%3Aissue%20GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG%20).

The results are the same no matter what I set `totalMemory` to, and this output occurs no matter which model is run (tested with `deepseek-r1` (7b), `qwen:1.8b`, and some others).

The output is the same whether I am using 1x MI300A or 4x.

@MaxAmende commented on GitHub (Feb 4, 2025):

Same issue here

@garrettbyrd commented on GitHub (Apr 4, 2025):

@MaxAmende did you ever find a solution for this?

@mglaubitz commented on GitHub (Apr 14, 2025):

We ran into the same problem here and would really like to be able to use our AMD MI300A GPUs.
Is there any news on this? Can we provide any additional data or test something?

@robinson96 commented on GitHub (Aug 8, 2025):

Same issue here.

@yhavinga commented on GitHub (Oct 7, 2025):

I got something similar on a 4x MI300X system (gfx942) with the latest `ollama/ollama:rocm` image; I tried a lot of settings, etc.
In the end, switching to this image worked: `ollama/ollama:0.12.4-rc6-rocm`.
The ROCm version installed on the host is 6.4.1.
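
For anyone else pinning to that image, a run command along these lines should work; the device flags are the standard ROCm passthrough from ollama's Docker instructions, and the volume/port mapping is just the usual default:

```
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.12.4-rc6-rocm
```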

@somewatson commented on GitHub (Oct 10, 2025):

I'm also experiencing this issue with the latest version of `ollama/ollama:rocm`.

I can confirm that the version @yhavinga mentions, `ollama/ollama:0.12.4-rc6-rocm`, works with the MI300A GPU 👍

So it seems there was a regression somewhere along the way.

@javicacheiro commented on GitHub (Nov 29, 2025):

The latest version, 0.13.0, still has this issue; using 0.12.4-rc6-rocm works.

@javicacheiro commented on GitHub (Dec 14, 2025):

Today I had some free time to look into this. After testing multiple versions, I found that the regression was introduced in v0.13.0, when `mem_hip` switched to using sysfs VRAM reporting.

The issue is caused by the fact that the MI300A uses unified CPU/GPU memory and therefore reports zero total VRAM in `/sys/class/drm/card0/device/mem_info_vram_total`. As a result, `mem_hip` detects zero available GPU memory and assumes the GPU cannot be used.

I've already submitted a PR to address this.

In the meantime, you can either apply the patch from the PR or use v0.12.11, the latest version that does not have this issue.
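
To make the failure mode concrete, here is a minimal sketch (not the actual PR) of the kind of fallback that handles unified-memory APUs: if `mem_info_vram_total` reads zero, fall back to the GTT counter that the amdgpu driver also exposes. The helper names are illustrative, and whether the GTT total on an MI300A reflects the full 128 GB of HBM3 is an assumption to verify on real hardware.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// readSysfsUint reads a single integer value from an amdgpu sysfs file.
func readSysfsUint(path string) (uint64, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
}

// totalDeviceMemory returns the usable memory for one card. On discrete GPUs this is
// mem_info_vram_total; on unified-memory APUs like the MI300A that file reads 0, so we
// fall back to the GTT counter, which reflects system memory the GPU can address.
func totalDeviceMemory(device string) (uint64, error) {
	vram, err := readSysfsUint(filepath.Join(device, "mem_info_vram_total"))
	if err != nil {
		return 0, err
	}
	if vram > 0 {
		return vram, nil // discrete GPU with dedicated VRAM
	}
	// Unified-memory APU: dedicated VRAM is reported as 0, use GTT instead.
	return readSysfsUint(filepath.Join(device, "mem_info_gtt_total"))
}

func main() {
	devices, _ := filepath.Glob("/sys/class/drm/card[0-9]*/device")
	for _, dev := range devices {
		total, err := totalDeviceMemory(dev)
		if err != nil {
			continue // not an amdgpu device, or the counters are not exposed
		}
		fmt.Printf("%s: %.1f GiB\n", dev, float64(total)/(1<<30))
	}
}
```

The point is only that the discovery layer needs a second source of truth for devices whose dedicated-VRAM counter is legitimately zero.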

@somewatson commented on GitHub (Jan 22, 2026):

Thank you, @javicacheiro. Great work. Do you know if this will be merged in an upcoming version?

@javicacheiro commented on GitHub (Jan 22, 2026):

I hope so!

In the meantime, to use the latest version of ollama, what I do is apply the patch and then compile with (I have ROCm 6):

```
cmake --preset "ROCm 6"
cmake --build build --parallel
go build
```

I hope the patch is merged soon so we can avoid the manual compilation.
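
For completeness, one way to fetch and apply the PR's change before that build is to pull the diff straight from GitHub; `<PR_NUMBER>` below is a placeholder, since the PR's number isn't given in this thread:

```
git clone https://github.com/ollama/ollama.git && cd ollama
# Substitute the number of the PR referenced above for <PR_NUMBER>.
curl -L https://github.com/ollama/ollama/pull/<PR_NUMBER>.diff | git apply
cmake --preset "ROCm 6" && cmake --build build --parallel && go build
```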

Reference: github-starred/ollama#52178