[GH-ISSUE #5471] Available memory calculation on AMD APU no longer takes GTT into account #29181

Closed
opened 2026-04-22 07:52:52 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @Ph0enix89 on GitHub (Jul 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5471

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

First, I have to acknowledge that I understand that running on the 780M GPU is not officially supported. However, for some scenarios it performs better than pure CPU, and in others it's perhaps more power efficient to run on the GPU.
Perhaps this could also be relevant for bigger GPUs that are supported, as the patch states:

The solution is MI300A approach, i.e., let VRAM allocations go to GTT.
Then device and host can flexibly and effectively share memory resource.

However I have to admit that the last part is just my speculation.

In any case, the 6.10 kernel release candidates include an improved GPU memory allocation which now allows computational workloads to utilize GTT in addition to VRAM, as opposed to just VRAM before. More details here: https://www.phoronix.com/news/Linux-6.10-AMDKFD-Small-APUs. I believe this to be the relevant commit: https://gitlab.freedesktop.org/drm/kernel/-/commit/89773b85599affe89dfc030aa1cb70d6ca7de4d3.

In practice, this means that on my laptop with 64 GB of memory I can play around with bigger models. Prior to 0.1.45 I saw the following in the logs:

level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1103 driver=0.0 name=1002:15bf total="27.3 GiB" available="27.3 GiB"

While this is perhaps still less than what is actually available:

kernel: [drm] amdgpu: 8192M of VRAM memory ready
kernel: [drm] amdgpu: 27940M of GTT memory ready.

it still allows working with models bigger than 8 GB that offer reasonable performance on the 780M GPU.

It would be nice to continue to be able to use that extra memory in the future. Ideally, being able to access VRAM+GTT (e.g. 36 GB) would be even better.

I suspect that this commit (https://github.com/ollama/ollama/commit/b32ebb4f2990817403484d50974077a5c52a4677) introduced some changes to how available memory is calculated. Starting from 0.1.45 I see the following in the logs:

level=INFO source=types.go:98 msg="inference compute" id=0 library=rocm compute=gfx1103 driver=0.0 name=1002:15bf total="8.0 GiB" available="6.4 GiB"

The new way of calculating the available memory does a better job of determining the actual free memory, but ideally it would run this calculation against VRAM+GTT.
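For reference, the amdgpu driver exposes both memory pools as sysfs counters, so a joint free-memory figure can be sketched. The counter file names below are the real amdgpu sysfs interface; the sample usage numbers are hypothetical stand-ins loosely based on the logs above, and this is only an illustration of the VRAM+GTT idea, not Ollama's actual code:

```python
from pathlib import Path

# amdgpu exposes per-device memory counters under sysfs, e.g.
#   /sys/class/drm/card0/device/mem_info_vram_total
#   /sys/class/drm/card0/device/mem_info_vram_used
#   /sys/class/drm/card0/device/mem_info_gtt_total
#   /sys/class/drm/card0/device/mem_info_gtt_used
SAMPLE = {  # hypothetical fallback values, in bytes, loosely matching the logs above
    "mem_info_vram_total": 8192 * 1024**2,
    "mem_info_vram_used": 1638 * 1024**2,
    "mem_info_gtt_total": 27940 * 1024**2,
    "mem_info_gtt_used": 0,
}

def read_counter(card: str, name: str) -> int:
    """Read one amdgpu sysfs memory counter, falling back to the sample data."""
    p = Path(f"/sys/class/drm/{card}/device/{name}")
    return int(p.read_text()) if p.exists() else SAMPLE[name]

def available_bytes(card: str = "card0", include_gtt: bool = True) -> int:
    """Free VRAM, optionally plus free GTT (the joint-pool calculation suggested above)."""
    free = read_counter(card, "mem_info_vram_total") - read_counter(card, "mem_info_vram_used")
    if include_gtt:
        free += read_counter(card, "mem_info_gtt_total") - read_counter(card, "mem_info_gtt_used")
    return free

gib = 1024**3
print(f"VRAM only: {available_bytes(include_gtt=False) / gib:.1f} GiB")
print(f"VRAM+GTT:  {available_bytes() / gib:.1f} GiB")
```

On an APU where VRAM and GTT come out of the same physical RAM, summing the two free counters is exactly the "run this calculation against VRAM+GTT" idea.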

OS

Arch (6.10.0-rc6-1-mainline) + docker container

GPU

AMD

CPU

AMD

Ollama version

0.1.48-rocm

GiteaMirror added the gpu, bug, amd labels 2026-04-22 07:52:52 -05:00
Author
Owner

@salah7670 commented on GitHub (Jul 27, 2024):

Can reproduce this on latest Arch with a Radeon 680M as well.

Author
Owner

@Snuupy commented on GitHub (Aug 12, 2024):

looks like this can be fixed with https://github.com/ollama/ollama/pull/6282

Author
Owner

@midddle commented on GitHub (Dec 8, 2024):

I'm having the same issue using

  • Blackview MP-100 mini-pc with AMD Ryzen7 5700U (gfx90c)
  • ollama 0.4.1 on
  • Ubuntu 24.04 with
  • 6.8.0-49-generic kernel (what came with the 24.04)

First of all, big applause for the ollama team: everything works out of the box. Ollama installed ROCm and everything works well, using the GPU type override flag HSA_OVERRIDE_GFX_VERSION=9.0.0.
Ollama even utilizes GTT memory with no problem; the only issue is that it seems to move only as many tensors as fit in VRAM, even though it uses GTT.
Checked with pytorch and could easily allocate 10 GB of memory (4 GB VRAM configuration); radeontop indeed shows it is allocated in GTT space:

import torch
# 1000 tensors of 1024*1024*10 bytes (~10 MiB) each, ~10 GB total on the "cuda" (ROCm) device
mems = [torch.zeros((1024,1024,10), dtype=torch.uint8, device="cuda") for _ in range(10*100)]

Checked Llama3.1:8b model with different VRAM settings in BIOS:

8GB VRAM

Service report

Dec 08 10:13:37 mp100 ollama[1708]: time=2024-12-08T10:13:37.645+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe gpu=0 parallel=4 available=7567347712 required="6.2 GiB"
Dec 08 10:13:37 mp100 ollama[1708]: time=2024-12-08T10:13:37.645+01:00 level=INFO source=server.go:105 msg="system memory" total="23.4 GiB" free="18.9 GiB" free_swap="8.0 GiB"
Dec 08 10:13:37 mp100 ollama[1708]: time=2024-12-08T10:13:37.646+01:00 level=INFO source=memory.go:343 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[7.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
...
Dec 08 10:13:39 mp100 ollama[1708]: llm_load_tensors: ggml ctx size = 0.27 MiB
Dec 08 10:13:45 mp100 ollama[1708]: llm_load_tensors: offloading 32 repeating layers to GPU
Dec 08 10:13:45 mp100 ollama[1708]: llm_load_tensors: offloading non-repeating layers to GPU
Dec 08 10:13:45 mp100 ollama[1708]: llm_load_tensors: offloaded 33/33 layers to GPU
Dec 08 10:13:45 mp100 ollama[1708]: llm_load_tensors: ROCm0 buffer size = 4156.00 MiB
Dec 08 10:13:45 mp100 ollama[1708]: llm_load_tensors: CPU buffer size = 281.81 MiB

Radeontop shows

1052/8167M VRAM
6209/11951M GTT
90-100% Graphics pipeline and Texture Addresser utilization

4GB VRAM

Service report

Dec 08 10:34:54 mp100 ollama[1717]: time=2024-12-08T10:34:54.047+01:00 level=INFO source=amd_linux.go:386 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=9.0.0
Dec 08 10:34:54 mp100 ollama[1717]: time=2024-12-08T10:34:54.053+01:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=rocm variant="" compute=gfx90c driver=6.8 name=1002:164c total="4.0 GiB" available="4.0 GiB"
...
Dec 08 10:42:53 mp100 ollama[1717]: time=2024-12-08T10:42:53.345+01:00 level=INFO source=server.go:105 msg="system memory" total="27.3 GiB" free="23.8 GiB" free_swap="8.0 GiB"
Dec 08 10:42:53 mp100 ollama[1717]: time=2024-12-08T10:42:53.347+01:00 level=INFO source=memory.go:343 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=15 layers.split="" memory.available="[3.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.5 GiB" memory.required.partial="3.1 GiB" memory.required.kv="256.0 MiB" memory.required.allocations="[3.1 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="258.5 MiB" memory.graph.partial="677.5 MiB"
...
Dec 08 10:42:55 mp100 ollama[1717]: llm_load_tensors: ggml ctx size = 0.27 MiB
Dec 08 10:43:00 mp100 ollama[1717]: llm_load_tensors: offloading 15 repeating layers to GPU
Dec 08 10:43:00 mp100 ollama[1717]: llm_load_tensors: offloaded 15/33 layers to GPU
Dec 08 10:43:00 mp100 ollama[1717]: llm_load_tensors: ROCm0 buffer size = 1755.47 MiB
Dec 08 10:43:00 mp100 ollama[1717]: llm_load_tensors: CPU buffer size = 4437.80 MiB

Radeontop shows

919/4041M VRAM
2859/13967M GTT
55-65% Graphics pipeline and Texture Addresser utilization

Author
Owner

@eliasmagn commented on GitHub (Jan 30, 2025):

Is this being ignored?

Author
Owner

@Binsk commented on GitHub (Feb 4, 2025):

So, just coming in as I noticed something with my iGPU. I'm running with a 780M (Linux kernel 6.13), and if I have 8 GB dedicated in the BIOS, Ollama will load the model into GTT memory and not even touch the 'dedicated' VRAM. However, if I only dedicate 1 GB in the BIOS, it will use the CPU even though GTT has more than enough space to store the model. It seems to check the capacity of my dedicated VRAM to determine whether or not to load via GPU, but when it actually does load, it uses the GTT instead.

Checking the status of ollama.service indicates it only sees my 8GB or 1GB dedicated. Confirmed that it is using GTT via radeontop.
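The effect described here can be sketched as a toy fit check (hypothetical numbers loosely based on this scenario; this is an illustration, not Ollama's actual scheduler code):

```python
def fits_on_gpu(model_gib: float, vram_free_gib: float, gtt_free_gib: float,
                count_gtt: bool) -> bool:
    """Decide whether a model 'fits' on the GPU, with or without counting GTT."""
    budget = vram_free_gib + (gtt_free_gib if count_gtt else 0.0)
    return model_gib <= budget

# Hypothetical 1 GiB dedicated VRAM, ~26 GiB free GTT, ~8 GiB model:
print(fits_on_gpu(8.0, 1.0, 26.0, count_gtt=False))  # False -> falls back to CPU
print(fits_on_gpu(8.0, 1.0, 26.0, count_gtt=True))   # True  -> could run via GTT
```

That is, a VRAM-only budget rejects a model that the joint pool could easily hold, which matches the CPU fallback seen with a small BIOS VRAM carve-out.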

Author
Owner

@LinusCDE commented on GitHub (Mar 17, 2025):

I got the exact same issue as Binsk.

Running on a Framework 13" AMD (Ryzen 7840U with Radeon 780M) (forcing detection of the GPU as a gfx1102, since gfx1103 is not supported, using systemctl edit ollama and adding the env HSA_OVERRIDE_GFX_VERSION=11.0.2).

Image

Image

Image

$ rocminfo
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 7 7840U w/ Radeon  780M Graphics
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 7 7840U w/ Radeon  780M Graphics
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  Cache Info:
    L1:                      32768(0x8000) KB
  Chip ID:                 0(0x0)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   3301
  BDFID:                   0
  Internal Node ID:        0
  Compute Unit:            16
  SIMDs per CU:            0
  Shader Engines:          0
  Shader Arrs. per Eng.:   0
  WatchPts on Addr. Ranges:1
  Memory Properties:
  Features:                None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: FINE GRAINED
      Size:                    57113852(0x3677cfc) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Recommended Granule:4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 2
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    57113852(0x3677cfc) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Recommended Granule:4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 3
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    57113852(0x3677cfc) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Recommended Granule:4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
  ISA Info:
*******
Agent 2
*******
  Name:                    gfx1103
  Uuid:                    GPU-XX
  Marketing Name:          AMD Radeon 780M
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  Cache Info:
    L1:                      32(0x20) KB
    L2:                      2048(0x800) KB
  Chip ID:                 5567(0x15bf)
  ASIC Revision:           9(0x9)
  Cacheline Size:          128(0x80)
  Max Clock Freq. (MHz):   2700
  BDFID:                   49408
  Internal Node ID:        1
  Compute Unit:            12
  SIMDs per CU:            2
  Shader Engines:          1
  Shader Arrs. per Eng.:   2
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 40
  SDMA engine uCode::      21
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    28556924(0x1b3be7c) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Recommended Granule:2048KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    28556924(0x1b3be7c) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Recommended Granule:2048KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Recommended Granule:0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1103
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*** Done ***

Image

Author
Owner

@DocMAX commented on GitHub (Apr 4, 2025):

To my knowledge, GTT has never been supported by Ollama. You need a special Ollama version for that.

Author
Owner

@Ph0enix89 commented on GitHub (Apr 4, 2025):

To my knowledge, GTT has never been supported by Ollama. You need a special Ollama version for that.

Before kernel 6.10, GTT memory was not accessible to Ollama unless some extra modifications were made. With 6.10, memory allocation happens from a joint pool of VRAM+GTT, since effectively it is the same memory anyway; that was a kernel change, not an Ollama change. However, around that time there was a patch that refined the available-memory calculation to account for memory that is already allocated, and that patch explicitly and exclusively relies on VRAM without GTT. At least, that's my understanding.

Author
Owner

@DocMAX commented on GitHub (Apr 5, 2025):

So what's the situation now? It once worked and now it doesn't, right?

Author
Owner

@godmar commented on GitHub (May 3, 2025):

I'm curious about this, too. I run ollama 0.6.6 on Ubuntu 24.04.1 with Linux 6.11.0.
It's a 780M integrated APU (rocminfo reports AMD Ryzen 7 7840HS w/ Radeon 780M Graphics) for which I have set aside 8 GB of VRAM via the BIOS. It's a gfx1103, which I have overridden to 11.0.0 to allow ollama to run. The machine has 32 GB of DDR5 (I believe) in total, leaving 24 GB for Linux, from which the amdgpu driver carves out GTT.

Linux reports:

[    2.719061] [drm] amdgpu: 8192M of VRAM memory ready
[    2.719064] [drm] amdgpu: 11867M of GTT memory ready.

When Ollama runs this model, radeontop shows that it uses mostly GTT and hardly any of the VRAM I had set aside.

Image

ollama[2463]: print_info: file format = GGUF V3 (latest)
ollama[2463]: print_info: file type   = Q4_K - Medium
ollama[2463]: print_info: file size   = 8.63 GiB (5.02 BPW)
ollama[2463]: ggml_cuda_init: found 1 ROCm devices:
ollama[2463]:   Device 0: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
ollama[2463]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
ollama[2463]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
ollama[2463]: time=2025-05-02T21:36:58.850-04:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
ollama[2463]: llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) - 11780 MiB free

ollama[2463]: load_tensors: loading model tensors, this can take a while... (mmap = true)
ollama[2463]: load_tensors: offloading 34 repeating layers to GPU
ollama[2463]: load_tensors: offloaded 34/41 layers to GPU
ollama[2463]: load_tensors:        ROCm0 model buffer size =  6598.82 MiB
ollama[2463]: load_tensors:   CPU_Mapped model buffer size =  2241.95 MiB
$ ollama ps
NAME         ID              SIZE     PROCESSOR          UNTIL              
qwen3:14b    7d7da67570e2    10 GB    20%/80% CPU/GPU    4 minutes from now    

My question is whether that's correct. If so, should I set aside 8 GB of VRAM only to let it go to waste?
If not, how do I tell ollama and/or the Linux kernel to use the available VRAM?

Also, it seems the entire model could be handled by the GPU: why isn't it?

This could be the same issue as described in this comment on issue 5471: https://github.com/ollama/ollama/issues/5471#issuecomment-2633000039.

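The 34/41 split in the log above can be reasoned about with a back-of-the-envelope layer-fit estimate. The sketch below is illustrative only, not ollama's actual scheduler: the `reserve_mib` figure is a made-up placeholder for the KV cache and compute-graph overhead that ollama derives from model metadata and context length.

```python
def estimate_offload_layers(free_mib, model_mib, total_layers, reserve_mib=4096):
    """Rough layer-offload estimate: fill free GPU memory with whole
    layers after holding back a reserve for KV cache / compute buffers.

    Illustrative sketch only -- assumes uniform layer sizes and a fixed
    reserve, which real schedulers do not.
    """
    per_layer = model_mib / total_layers        # uniform-layer assumption
    usable = max(free_mib - reserve_mib, 0)
    return min(total_layers, int(usable // per_layer))

# Numbers from the log above: 11780 MiB free, 8.63 GiB (~8837 MiB) model, 41 layers.
print(estimate_offload_layers(11780, 8837, 41))  # → 35 (ollama itself chose 34)
```

With a larger reserve for context, the estimate drops further, which is consistent with some layers staying on the CPU even though the raw model would fit.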

@nedomika commented on GitHub (Jun 3, 2025):

I have the same problem.

![Image](https://github.com/user-attachments/assets/2652763c-9b48-4271-b34d-7fbd836ca531)

On the Ollama LXC:

![Image](https://github.com/user-attachments/assets/6f8722b6-9fba-4e47-901a-b8c21768668e)

On the Proxmox host:

![Image](https://github.com/user-attachments/assets/49a2fc1f-2edc-4e4a-9e48-31750ff6c6d2)

~15 GB taken from the Proxmox host + 4 GB from LXC RAM.


@yuannan commented on GitHub (Jun 18, 2025):

I currently have the same problem with an AI MAX+ 395 (8060S GPU): Ollama won't allocate memory from the VRAM section, only from the GTT section. If there isn't enough VRAM it falls back to the CPU instead, so the dedicated VRAM is "wasted" because all Ollama checks is the regular VRAM section.

This seriously slows down compute.

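Whether a given setup is actually drawing from VRAM or GTT can be checked directly against the amdgpu driver's sysfs counters (`mem_info_vram_total`, `mem_info_gtt_used`, etc.), which report raw byte counts. A minimal sketch — the `card0` path is an assumption and may differ on multi-GPU systems:

```python
from pathlib import Path

def to_mib(raw):
    """Convert an amdgpu mem_info_* sysfs value (a byte count string) to MiB."""
    return int(raw.strip()) // (1024 * 1024)

def amdgpu_memory(card="card0"):
    """Read VRAM and GTT totals/usage for one card, in MiB."""
    base = Path("/sys/class/drm") / card / "device"
    return {name: to_mib((base / f"mem_info_{name}").read_text())
            for name in ("vram_total", "vram_used", "gtt_total", "gtt_used")}

# Example on an APU host (values depend on the machine):
#   amdgpu_memory() -> {'vram_total': 8192, 'vram_used': ..., 'gtt_total': 11867, ...}
```

Comparing `vram_used` against `gtt_used` before and after loading a model shows which pool the allocations actually land in, which is the behavior being reported here.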

@yingjiegau commented on GitHub (Aug 21, 2025):

I pushed the setup to its hardware limit: I wanted to see what would happen if I loaded an oversized model on a 128 GB RAM / 64 GB GTT rig, because I had previously found that even a smaller ~8 GB model with the Ollama ROCm image barely touched GTT memory.

Experiment spec:

  1. 128 GB RAM; swap off
  2. UMA manually set to 16 GB in the BIOS
  3. 780M with the Ollama 0.11.5 ROCm image
  4. GFX version overridden to 11.0.0
  5. llama3.3 70B q8; model size is about 75 GB

Result:
It runs on the GFX pipelines (on the GPU, I assume) and didn't crash, but it spits out words extremely slowly.

Inspecting with amdgpu_top shows GTT memory at 15439/65535 and VRAM at XX/16348.

Conclusion:
So the 0.11.5 version barely touches VRAM? It seems to depend heavily on some kind of "RAM swap" mechanism.

And even if it is utilizing some GTT memory, the limit will be the VRAM you set in the BIOS?

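For what it's worth, the 64 GB GTT ceiling observed above is a kernel policy rather than a hardware limit: by default amdgpu sizes GTT at roughly half of system RAM. On recent kernels the relevant knobs are the TTM module parameters `ttm.pages_limit` and `ttm.page_pool_size` (counted in 4 KiB pages); older kernels used `amdgpu.gttsize` (in MiB, since deprecated). The values below are an illustrative sketch for a 128 GB machine, not something verified on this hardware — exact semantics vary by kernel version:

```
# /etc/default/grub -- raise the GTT limit to ~96 GiB
# 96 GiB / 4 KiB per page = 25165824 pages
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ttm.pages_limit=25165824 ttm.page_pool_size=25165824"
# then run update-grub and reboot; check dmesg for the new "GTT memory ready" line
```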

Reference: github-starred/ollama#29181