[GH-ISSUE #7556] llama runner process has terminated: error loading model: unable to allocate backend buffer when AMD iGPU vram allocation larger than 8GB #51321

Open
opened 2026-04-28 19:25:36 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @oatmealm on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7556

What is the issue?

After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage.

ollama run llama3.2
Error: llama runner process has terminated: cudaMalloc failed: out of memory
llama_kv_cache_init: failed to allocate buffer for kv cache
llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache
ollama run llama3.2:3b-instruct-q6_K
Error: llama runner process has terminated: error loading model: unable to allocate backend buffer
llama_load_model_from_file: exception loading model
ollama run smollm2:1.7b-instruct-q6_K
>>> Send a message (/? for help)

With a smaller RAM/VRAM split, like 4GB, Ollama loads models fully into VRAM, or splits them across GPU and CPU.

[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=9.0.0"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_KEEP_ALIVE=24h"
rocminfo 
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 9 5900HX with Radeon Graphics
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 9 5900HX with Radeon Graphics
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   4680                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            16                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    16285796(0xf88064) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    16285796(0xf88064) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    16285796(0xf88064) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx90c                             
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon Graphics                
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      16(0x10) KB                        
    L2:                      1024(0x400) KB                     
  Chip ID:                 5688(0x1638)                       
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2100                               
  BDFID:                   1024                               
  Internal Node ID:        1                                  
  Compute Unit:            8                                  
  SIMDs per CU:            4                                  
  Shader Engines:          1                                  
  Shader Arrs. per Eng.:   1                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          64(0x40)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        40(0x28)                           
  Max Work-item Per CU:    2560(0xa00)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 472                                
  SDMA engine uCode::      40                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    8142896(0x7c4030) KB               
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    8142896(0x7c4030) KB               
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx90c:xnack+   
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***      
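Worth noting from the rocminfo dump above: the GPU agent's global pools report 8142896 KB, i.e. only about 7.8 GiB visible to ROCm even with the 16GB UMA carve-out set in the BIOS. A quick sanity check of the reported sizes (plain arithmetic on the numbers above, nothing else assumed):

```python
# Pool sizes exactly as reported by rocminfo, in KB.
cpu_pool_kb = 16285796  # Agent 1 (CPU) global pools
gpu_pool_kb = 8142896   # Agent 2 (gfx90c) global pools

KB_PER_GIB = 1024 ** 2  # KB -> GiB conversion factor

print(f"CPU pool: {cpu_pool_kb / KB_PER_GIB:.2f} GiB")  # ~15.53 GiB
print(f"GPU pool: {gpu_pool_kb / KB_PER_GIB:.2f} GiB")  # ~7.77 GiB
```

So regardless of the 16GB BIOS setting, ROCm only exposes a ~7.77 GiB GPU-local pool here, which lines up with the "larger than 8GB" threshold in the issue title.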
journalctl -u ollama.service -n 100 --no-pager
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   1:                               general.type str              = model
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   3:                           general.finetune str              = Instruct
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   5:                         general.size_label str              = 3B
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   8:                          llama.block_count u32              = 28
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  18:                          general.file_type u32              = 18
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv  29:               general.quantization_version u32              = 2
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type  f32:   58 tensors
Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type q6_K:  197 tensors
Nov 07 13:49:56 slimb ollama[1817]: time=2024-11-07T13:49:56.473+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: special tokens cache size = 256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: token to piece cache size = 0.7999 MB
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: format           = GGUF V3 (latest)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: arch             = llama
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab type       = BPE
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_vocab          = 128256
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_merges         = 280147
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab_only       = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_train      = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd           = 3072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_layer          = 28
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head           = 24
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head_kv        = 8
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_rot            = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_swa            = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_k    = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_v    = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_gqa            = 3
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_k_gqa     = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_v_gqa     = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ff             = 8192
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert         = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert_used    = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: causal attn      = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: pooling type     = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope type        = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope scaling     = linear
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_base_train  = 500000.0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_scale_train = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope_finetuned   = unknown
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_conv       = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_inner      = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_state      = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_rank      = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model type       = 3B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model ftype      = Q6_K
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model params     = 3.21 B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model size       = 2.45 GiB (6.56 BPW)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: general.name     = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: LF token         = 128 'Ä'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: max token length = 256
Nov 07 13:49:56 slimb ollama[1817]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 13:49:57 slimb ollama[1817]:   Device 0: AMD Radeon Graphics, compute capability 9.0, VMM: no
Nov 07 13:49:57 slimb ollama[1817]: llm_load_tensors: ggml ctx size =    0.24 MiB
Nov 07 13:49:57 slimb ollama[1817]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2513.91 MiB on device 0: cudaMalloc failed: out of memory
Nov 07 13:49:57 slimb ollama[1817]: llama_model_load: error loading model: unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: llama_load_model_from_file: exception loading model
Nov 07 13:49:57 slimb ollama[1817]: terminate called after throwing an instance of 'std::runtime_error'
Nov 07 13:49:57 slimb ollama[1817]:   what():  unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: time=2024-11-07T13:49:57.678+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 07 13:49:59 slimb ollama[1817]: time=2024-11-07T13:49:59.282+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate backend buffer\nllama_load_model_from_file: exception loading model"
Nov 07 13:49:59 slimb ollama[1817]: [GIN] 2024/11/07 - 13:49:59 | 500 |  3.106339582s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.3.14

Originally created by @oatmealm on GitHub (Nov 7, 2024). Original GitHub issue: https://github.com/ollama/ollama/issues/7556 ### What is the issue? After setting iGPU allocation to 16GB (out of 32GB) some models crash when loaded, while other mange. ``` ollama run llama3.2 Error: llama runner process has terminated: cudaMalloc failed: out of memory llama_kv_cache_init: failed to allocate buffer for kv cache llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache ``` ``` ollama run llama3.2:3b-instruct-q6_K Error: llama runner process has terminated: error loading model: unable to allocate backend buffer llama_load_model_from_file: exception loading model ``` ``` ollama run smollm2:1.7b-instruct-q6_K >>> Send a message (/? for help) ``` With a smaller ram/vram split, like 4G, ollama loads models into vram fully, or gpu and cpu. ``` [Service] Environment="HSA_OVERRIDE_GFX_VERSION=9.0.0" Environment="OLLAMA_HOST=0.0.0.0" Environment="OLLAMA_ORIGINS=*" Environment="OLLAMA_KEEP_ALIVE=24h" ``` ``` rocminfo ROCk module is loaded ===================== HSA System Attributes ===================== Runtime Version: 1.1 Runtime Ext Version: 1.6 System Timestamp Freq.: 1000.000000MHz Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count) Machine Model: LARGE System Endianness: LITTLE Mwaitx: DISABLED DMAbuf Support: YES ========== HSA Agents ========== ******* Agent 1 ******* Name: AMD Ryzen 9 5900HX with Radeon Graphics Uuid: CPU-XX Marketing Name: AMD Ryzen 9 5900HX with Radeon Graphics Vendor Name: CPU Feature: None specified Profile: FULL_PROFILE Float Round Mode: NEAR Max Queue Number: 0(0x0) Queue Min Size: 0(0x0) Queue Max Size: 0(0x0) Queue Type: MULTI Node: 0 Device Type: CPU Cache Info: L1: 32768(0x8000) KB Chip ID: 0(0x0) ASIC Revision: 0(0x0) Cacheline Size: 64(0x40) Max Clock Freq. (MHz): 4680 BDFID: 0 Internal Node ID: 0 Compute Unit: 16 SIMDs per CU: 0 Shader Engines: 0 Shader Arrs. 
per Eng.: 0 WatchPts on Addr. Ranges:1 Memory Properties: Features: None Pool Info: Pool 1 Segment: GLOBAL; FLAGS: FINE GRAINED Size: 16285796(0xf88064) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE Pool 2 Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED Size: 16285796(0xf88064) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE Pool 3 Segment: GLOBAL; FLAGS: COARSE GRAINED Size: 16285796(0xf88064) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:4KB Alloc Alignment: 4KB Accessible by all: TRUE ISA Info: ******* Agent 2 ******* Name: gfx90c Uuid: GPU-XX Marketing Name: AMD Radeon Graphics Vendor Name: AMD Feature: KERNEL_DISPATCH Profile: BASE_PROFILE Float Round Mode: NEAR Max Queue Number: 128(0x80) Queue Min Size: 64(0x40) Queue Max Size: 131072(0x20000) Queue Type: MULTI Node: 1 Device Type: GPU Cache Info: L1: 16(0x10) KB L2: 1024(0x400) KB Chip ID: 5688(0x1638) ASIC Revision: 0(0x0) Cacheline Size: 64(0x40) Max Clock Freq. (MHz): 2100 BDFID: 1024 Internal Node ID: 1 Compute Unit: 8 SIMDs per CU: 4 Shader Engines: 1 Shader Arrs. per Eng.: 1 WatchPts on Addr. 
Ranges:4 Coherent Host Access: FALSE Memory Properties: APU Features: KERNEL_DISPATCH Fast F16 Operation: TRUE Wavefront Size: 64(0x40) Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Max Waves Per CU: 40(0x28) Max Work-item Per CU: 2560(0xa00) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) Max fbarriers/Workgrp: 32 Packet Processor uCode:: 472 SDMA engine uCode:: 40 IOMMU Support:: None Pool Info: Pool 1 Segment: GLOBAL; FLAGS: COARSE GRAINED Size: 8142896(0x7c4030) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 2 Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED Size: 8142896(0x7c4030) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 3 Segment: GROUP Size: 64(0x40) KB Allocatable: FALSE Alloc Granule: 0KB Alloc Recommended Granule:0KB Alloc Alignment: 0KB Accessible by all: FALSE ISA Info: ISA 1 Name: amdgcn-amd-amdhsa--gfx90c:xnack+ Machine Models: HSA_MACHINE_MODEL_LARGE Profiles: HSA_PROFILE_BASE Default Rounding Mode: NEAR Default Rounding Mode: NEAR Fast f16: TRUE Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) FBarrier Max Size: 32 *** Done *** ``` ``` journalctl -u ollama.service -n 100 --no-pager Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 1: general.type str = model Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 3: general.finetune str = Instruct Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 4: general.basename 
str = Llama-3.2 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 5: general.size_label str = 3B Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 8: llama.block_count u32 = 28 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 9: llama.context_length u32 = 131072 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 10: llama.embedding_length u32 = 3072 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 12: llama.attention.head_count u32 = 24 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 16: llama.attention.key_length u32 = 128 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 17: llama.attention.value_length u32 = 128 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 18: general.file_type u32 = 18 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 19: llama.vocab_size u32 = 128256 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", 
"\"", "#", "$", "%", "&", "'", ... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - kv 29: general.quantization_version u32 = 2 Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type f32: 58 tensors Nov 07 13:49:56 slimb ollama[1817]: llama_model_loader: - type q6_K: 197 tensors Nov 07 13:49:56 slimb ollama[1817]: time=2024-11-07T13:49:56.473+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model" Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: special tokens cache size = 256 Nov 07 13:49:56 slimb ollama[1817]: llm_load_vocab: token to piece cache size = 0.7999 MB Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: format = GGUF V3 (latest) Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: arch = llama Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab type = BPE Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_vocab = 128256 Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_merges = 280147 Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: vocab_only = 0 Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_train = 131072 Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd = 3072 Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_layer = 28 Nov 07 13:49:56 slimb ollama[1817]: 
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head = 24
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_head_kv = 8
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_rot = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_swa = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_k = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_head_v = 128
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_gqa = 3
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_k_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_embd_v_gqa = 1024
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_eps = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: f_logit_scale = 0.0e+00
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ff = 8192
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_expert_used = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: causal attn = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: pooling type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope type = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope scaling = linear
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_base_train = 500000.0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: freq_scale_train = 1
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: n_ctx_orig_yarn = 131072
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: rope_finetuned = unknown
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_conv = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_inner = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_d_state = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_rank = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: ssm_dt_b_c_rms = 0
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model type = 3B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model ftype = Q6_K
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model params = 3.21 B
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: model size = 2.45 GiB (6.56 BPW)
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: general.name = Llama 3.2 3B Instruct
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: LF token = 128 'Ä'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
Nov 07 13:49:56 slimb ollama[1817]: llm_load_print_meta: max token length = 256
Nov 07 13:49:56 slimb ollama[1817]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 07 13:49:57 slimb ollama[1817]: ggml_cuda_init: found 1 ROCm devices:
Nov 07 13:49:57 slimb ollama[1817]:   Device 0: AMD Radeon Graphics, compute capability 9.0, VMM: no
Nov 07 13:49:57 slimb ollama[1817]: llm_load_tensors: ggml ctx size = 0.24 MiB
Nov 07 13:49:57 slimb ollama[1817]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2513.91 MiB on device 0: cudaMalloc failed: out of memory
Nov 07 13:49:57 slimb ollama[1817]: llama_model_load: error loading model: unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: llama_load_model_from_file: exception loading model
Nov 07 13:49:57 slimb ollama[1817]: terminate called after throwing an instance of 'std::runtime_error'
Nov 07 13:49:57 slimb ollama[1817]: what(): unable to allocate backend buffer
Nov 07 13:49:57 slimb ollama[1817]: time=2024-11-07T13:49:57.678+01:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
Nov 07 13:49:59 slimb ollama[1817]: time=2024-11-07T13:49:59.282+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: unable to allocate backend buffer\nllama_load_model_from_file: exception loading model"
Nov 07 13:49:59 slimb ollama[1817]: [GIN] 2024/11/07 - 13:49:59 | 500 | 3.106339582s | 127.0.0.1 | POST "/api/generate"

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.3.14
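The failing allocation in the log above is only 2513.91 MiB, so it is worth confirming how much memory the ROCm stack actually sees after the BIOS split is changed. A quick diagnostic sketch, assuming the standard amdgpu sysfs nodes and that the iGPU is `card0` (the card index may differ on your system):

```shell
# Report the VRAM and GTT totals the amdgpu driver exposes for card0.
# The mem_info_* values are in bytes; convert to MiB for readability.
for f in mem_info_vram_total mem_info_gtt_total; do
  node="/sys/class/drm/card0/device/$f"
  if [ -r "$node" ]; then
    printf '%-22s %s MiB\n' "$f" "$(( $(cat "$node") / 1024 / 1024 ))"
  fi
done

# rocm-smi ships with ROCm and can cross-check the same numbers.
rocm-smi --showmeminfo vram 2>/dev/null || true
```

If `mem_info_vram_total` still shows the old split, the driver did not pick up the BIOS change; if it shows 16 GB but allocations above ~8 GB fail, the limit is elsewhere in the stack.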
GiteaMirror added the bug, amd, linux labels 2026-04-28 19:25:37 -05:00

@rick-github commented on GitHub (Nov 7, 2024):

Add full log, not the last 100 lines.

<!-- gh-comment-id:2462177439 -->
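For a systemd install like the one shown in the service snippet above, the full service log can be captured with journalctl rather than the tail (this assumes the unit is named `ollama`, as in the issue's `[Service]` drop-in):

```shell
# Dump the complete ollama unit log since the current boot to a file,
# then show how many lines were captured.
journalctl -u ollama -b --no-pager > ollama-full.log
wc -l ollama-full.log
```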

@oatmealm commented on GitHub (Nov 7, 2024):

https://pastebin.com/7mTSYr4j

<!-- gh-comment-id:2462209194 -->

@oatmealm commented on GitHub (Nov 7, 2024):

This is what I see with VRAM set to 8GB currently. Two models fully loaded into the GPU, though CPU usage peaks nonetheless...

ollama ps
NAME                          ID              SIZE      PROCESSOR    UNTIL             
llama3.2:3b-instruct-q6_K     355f7bc7ff61    3.7 GB    100% GPU     24 hours from now    
smollm2:1.7b-instruct-q6_K    d334c54e8df6    4.2 GB    100% GPU     24 hours from now  

or

ollama ps
NAME                               ID              SIZE      PROCESSOR    UNTIL             
granite3-dense:8b-instruct-q6_K    bf751839c6a7    7.9 GB    100% GPU     24 hours from now  

![image](https://github.com/user-attachments/assets/f841ab47-c475-4f10-8d49-1f72d4d9307d)

<!-- gh-comment-id:2462215009 -->
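The `ollama ps` output above is also available over the HTTP API (the `/api/ps` endpoint on the default port 11434), which makes it easier to script comparisons between different ram/vram splits; the `size_vram` field reports, in bytes, how much of each loaded model actually landed in GPU memory:

```shell
# Query loaded models as JSON and pretty-print the response.
curl -s http://localhost:11434/api/ps | python3 -m json.tool
```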
Reference: github-starred/ollama#51321