[GH-ISSUE #2411] Discrete AMD GPU not used, CPU used instead #47917

Closed
opened 2026-04-28 05:48:43 -05:00 by GiteaMirror · 28 comments

Originally created by @haplo on GitHub (Feb 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2411

Originally assigned to: @dhiltgen on GitHub.

My system has both an integrated and a dedicated GPU (an AMD Radeon 7900 XTX). I see that ollama ignores the integrated card and detects the 7900 XTX, but then it goes ahead and uses the CPU (Ryzen 9 7900).

I'm running ollama 0.1.23 from the Arch Linux repository. This should include the fix from #2195; I see in the logs that ROCR_VISIBLE_DEVICES=0.
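
Assuming rocminfo honors ROCR_VISIBLE_DEVICES the way the HSA runtime does, the masking can be sanity-checked directly (illustrative command, not from the original report):

$ ROCR_VISIBLE_DEVICES=0 rocminfo | grep gfx   # only the 7900 XTX's gfx1100 should remain listed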

The only errors I see in the logs are:

rsmi_dev_serial_number_get failed: 2
rsmi_dev_vram_vendor_get failed: 2
rsmi_dev_serial_number_get failed: 2
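
For reference, status 2 from these rsmi_* calls most likely corresponds to RSMI_STATUS_NOT_SUPPORTED in rocm_smi.h: the attribute simply isn't exposed (the serial number on both devices, the VRAM vendor on the iGPU), so by itself this shouldn't block discovery. Assuming the rocm-smi CLI from rocm-smi-lib is installed, the same attributes can be queried directly:

$ rocm-smi --showserial         # expected to come back empty where rsmi reports status 2
$ rocm-smi --showmeminfo vram   # VRAM totals for both the 7900 XTX and the iGPU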

Full debug log from starting the systemd unit with OLLAMA_DEBUG=1 and then running ollama run mistral (a sketch of the systemd drop-in follows the log):

Started Ollama Service.
time=2024-02-08T13:52:58.187Z level=INFO source=images.go:860 msg="total blobs: 9"
time=2024-02-08T13:52:58.188Z level=INFO source=images.go:867 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST   /api/pull                 --> github.com/jmorganca/ollama/server.PullModelHandler (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/jmorganca/ollama/server.GenerateHandler (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/jmorganca/ollama/server.ChatHandler (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/jmorganca/ollama/server.EmbeddingHandler (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/jmorganca/ollama/server.CreateModelHandler (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/jmorganca/ollama/server.PushModelHandler (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/jmorganca/ollama/server.CopyModelHandler (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/jmorganca/ollama/server.DeleteModelHandler (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/jmorganca/ollama/server.ShowModelHandler (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/jmorganca/ollama/server.CreateBlobHandler (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/jmorganca/ollama/server.HeadBlobHandler (5 handlers)
[GIN-debug] GET    /                         --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
time=2024-02-08T13:52:58.188Z level=INFO source=routes.go:995 msg="Listening on 127.0.0.1:11434 (version 0.1.23)"
time=2024-02-08T13:52:58.188Z level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-08T13:52:58.289Z level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]"
time=2024-02-08T13:52:58.289Z level=DEBUG source=payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-08T13:52:58.289Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-08T13:52:58.289Z level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-08T13:52:58.289Z level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /var/lib/ollama/libnvidia-ml.so*]"
time=2024-02-08T13:52:58.294Z level=INFO source=gpu.go:288 msg="Discovered GPU libraries: []"
time=2024-02-08T13:52:58.294Z level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-08T13:52:58.294Z level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /var/lib/ollama/librocm_smi64.so*]"
time=2024-02-08T13:52:58.294Z level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.1.0]"
wiring rocm management library functions in /opt/rocm/lib/librocm_smi64.so.1.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-08T13:52:58.298Z level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-08T13:52:58.299Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm brand: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: PULSE RX 7900 XTX
[0] ROCm vbios version: 113-3E4710U-O4X
[0] ROCm totalMem 25753026560
[0] ROCm usedMem 2400063488
[1] ROCm device name: Raphael
[1] ROCm brand: Raphael
[1] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
rsmi_dev_vram_vendor_get failed: 2
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: GA-MA78GM-S2H Motherboard
[1] ROCm vbios version: 102-RAPHAEL-008
[1] ROCm totalMem 67108864
[1] ROCm usedMem 16441344
[1] ROCm integrated GPU
time=2024-02-08T13:52:58.302Z level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-08T13:52:58.302Z level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 20044M available memory"
[GIN] 2024/02/08 - 13:53:15 | 200 |      23.355µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/08 - 13:53:15 | 200 |     669.664µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/08 - 13:53:15 | 200 |     221.658µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-08T13:53:15.435Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm brand: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: PULSE RX 7900 XTX
[0] ROCm vbios version: 113-3E4710U-O4X
[0] ROCm totalMem 25753026560
[0] ROCm usedMem 2400071680
[1] ROCm device name: Raphael
[1] ROCm brand: Raphael
[1] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
rsmi_dev_vram_vendor_get failed: 2
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: GA-MA78GM-S2H Motherboard
[1] ROCm vbios version: 102-RAPHAEL-008
[1] ROCm totalMem 67108864
[1] ROCm usedMem 16441344
[1] ROCm integrated GPU
time=2024-02-08T13:53:15.439Z level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-08T13:53:15.439Z level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 20044M available memory"
time=2024-02-08T13:53:15.439Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm brand: Navi 31 [Radeon RX 7900 XT/7900 XTX/7900M]
[0] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: PULSE RX 7900 XTX
[0] ROCm vbios version: 113-3E4710U-O4X
[0] ROCm totalMem 25753026560
[0] ROCm usedMem 2400071680
[1] ROCm device name: Raphael
[1] ROCm brand: Raphael
[1] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
rsmi_dev_vram_vendor_get failed: 2
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: GA-MA78GM-S2H Motherboard
[1] ROCm vbios version: 102-RAPHAEL-008
[1] ROCm totalMem 67108864
[1] ROCm usedMem 16441344
[1] ROCm integrated GPU
time=2024-02-08T13:53:15.442Z level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-08T13:53:15.442Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-08T13:53:15.443Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1814794725/cpu_avx2/libext_server.so"
time=2024-02-08T13:53:15.443Z level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1707400395] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /var/lib/ollama/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =  3917.87 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    12.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   167.20 MiB
llama_new_context_with_model: graph splits (measure): 1
[1707400395] warming up the model with an empty run
[1707400395] Available slots:
[1707400395]  -> Slot 0 - max context: 2048
time=2024-02-08T13:53:15.764Z level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1707400395] llama server main loop starting
[1707400395] all slots are idle and system prompt is empty, clear the KV cache
[GIN] 2024/02/08 - 13:53:15 | 200 |  508.028434ms |       127.0.0.1 | POST     "/api/chat"
time=2024-02-08T13:53:16.722Z level=DEBUG source=routes.go:1161 msg="chat handler" prompt="[INST]  Tell me a joke [/INST]"
[1707400396] slot 0 is processing [task id: 0]
[1707400396] slot 0 : in cache: 0 tokens | to process: 13 tokens
[1707400396] slot 0 : kv cache rm - [0, end)
[1707400397] sampled token:  4315: ' Why'
[1707400397] sampled token:   949: ' don'
[1707400397] sampled token: 28742: '''
[1707400397] sampled token: 28707: 't'
[1707400397] sampled token: 15067: ' scientists'
[1707400397] sampled token:  4893: ' trust'
[1707400397] sampled token: 24221: ' atoms'
[1707400397] sampled token: 28804: '?'
[1707400398] sampled token:    13: '
'
[1707400398] sampled token:    13: '
'
[1707400398] sampled token: 17098: 'Because'
[1707400398] sampled token:   590: ' they'
[1707400398] sampled token:  1038: ' make'
[1707400398] sampled token:   582: ' up'
[1707400398] sampled token:  2905: ' everything'
[1707400398] sampled token: 28808: '!'
[1707400398] sampled token:     2: ''
[1707400398]
[1707400398] print_timings: prompt eval time =     533.24 ms /    13 tokens (   41.02 ms per token,    24.38 tokens per second)
[1707400398] print_timings:        eval time =    1521.86 ms /    17 runs   (   89.52 ms per token,    11.17 tokens per second)
[1707400398] print_timings:       total time =    2055.10 ms
[1707400398] slot 0 released (30 tokens in cache)
[1707400398] next result cancel on stop
[1707400398] next result removing waiting task ID: 0
[GIN] 2024/02/08 - 13:53:18 | 200 |  2.055812875s |       127.0.0.1 | POST     "/api/chat"
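
As noted before the log, a minimal sketch of how a debug log like this can be captured; the unit name ollama.service is an assumption based on the Arch package:

$ sudo systemctl edit ollama.service      # add an override containing:
                                          #   [Service]
                                          #   Environment=OLLAMA_DEBUG=1
$ sudo systemctl restart ollama.service
$ journalctl -u ollama.service -f         # then, in another terminal: ollama run mistral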

Installed packages:

$ pacman -Qs 'amd|hip|rocm|opencl|clblast|llama' | grep --color=auto local
local/amd-ucode 20240115.9b6d0b08-2
local/clblast 1.6.1-1
local/clinfo 3.0.21.02.21-1
local/comgr 6.0.0-1
local/composable-kernel 6.0.0-1
local/flashrom 1.2-4
local/gcc-libs 13.2.1-5
local/hip-runtime-amd 6.0.0-1
local/hipblas 6.0.0-1
local/hipcub 6.0.0-1
local/hipfft 6.0.0-1
local/hiprand 6.0.0-1
local/hipsolver 6.0.0-1
local/hipsparse 6.0.0-1
local/hsa-rocr 6.0.0-2
local/libftdi 1.5-5
local/libteam 1.32-1
local/magma-hip 2.7.2-3
local/miopen-hip 6.0.0-1
local/nvtop 3.0.2-1
local/ocl-icd 2.3.2-1
local/ollama 0.1.23-1
local/opencl-headers 2:2023.04.17-2
local/python-pytorch-opt-rocm 2.2.0-1
local/python-torchvision-rocm 0.16.2-1
local/rccl 6.0.0-1
local/rocalution 6.0.0-2
local/rocblas 6.0.0-1
local/rocfft 6.0.0-1
local/rocm-clang-ocl 6.0.0-1
local/rocm-cmake 6.0.0-1
local/rocm-core 6.0.0-2
local/rocm-device-libs 6.0.0-1
local/rocm-hip-libraries 6.0.0-1
local/rocm-hip-runtime 6.0.0-1
local/rocm-hip-sdk 6.0.0-1
local/rocm-language-runtime 6.0.0-1
local/rocm-llvm 6.0.0-2
local/rocm-opencl-runtime 6.0.0-1
local/rocm-opencl-sdk 6.0.0-1
local/rocm-smi-lib 6.0.0-1
local/rocminfo 6.0.0-1
local/rocprim 6.0.0-1
local/rocrand 6.0.0-1
local/rocsolver 6.0.0-1
local/rocsparse 6.0.0-1
local/rocthrust 6.0.0-1
local/roctracer 6.0.0-1

rocminfo:

ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
Name:                    AMD Ryzen 9 7900 12-Core Processor
Uuid:                    CPU-XX
Marketing Name:          AMD Ryzen 9 7900 12-Core Processor
Vendor Name:             CPU
Feature:                 None specified
Profile:                 FULL_PROFILE
Float Round Mode:        NEAR
Max Queue Number:        0(0x0)
Queue Min Size:          0(0x0)
Queue Max Size:          0(0x0)
Queue Type:              MULTI
Node:                    0
Device Type:             CPU
Cache Info:
L1:                      32768(0x8000) KB
Chip ID:                 0(0x0)
ASIC Revision:           0(0x0)
Cacheline Size:          64(0x40)
Max Clock Freq. (MHz):   5482
BDFID:                   0
Internal Node ID:        0
Compute Unit:            24
SIMDs per CU:            0
Shader Engines:          0
Shader Arrs. per Eng.:   0
WatchPts on Addr. Ranges:1
Features:                None
Pool Info:
Pool 1
Segment:                 GLOBAL; FLAGS: FINE GRAINED
Size:                    65412596(0x3e61df4) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       TRUE
Pool 2
Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size:                    65412596(0x3e61df4) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       TRUE
Pool 3
Segment:                 GLOBAL; FLAGS: COARSE GRAINED
Size:                    65412596(0x3e61df4) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       TRUE
ISA Info:
*******
Agent 2
*******
Name:                    gfx1100
Uuid:                    GPU-8e7a334a1ad8aec8
Marketing Name:          AMD Radeon RX 7900 XTX
Vendor Name:             AMD
Feature:                 KERNEL_DISPATCH
Profile:                 BASE_PROFILE
Float Round Mode:        NEAR
Max Queue Number:        128(0x80)
Queue Min Size:          64(0x40)
Queue Max Size:          131072(0x20000)
Queue Type:              MULTI
Node:                    1
Device Type:             GPU
Cache Info:
L1:                      32(0x20) KB
L2:                      6144(0x1800) KB
L3:                      98304(0x18000) KB
Chip ID:                 29772(0x744c)
ASIC Revision:           0(0x0)
Cacheline Size:          64(0x40)
Max Clock Freq. (MHz):   2371
BDFID:                   768
Internal Node ID:        1
Compute Unit:            96
SIMDs per CU:            2
Shader Engines:          6
Shader Arrs. per Eng.:   2
WatchPts on Addr. Ranges:4
Coherent Host Access:    FALSE
Features:                KERNEL_DISPATCH
Fast F16 Operation:      TRUE
Wavefront Size:          32(0x20)
Workgroup Max Size:      1024(0x400)
Workgroup Max Size per Dimension:
x                        1024(0x400)
y                        1024(0x400)
z                        1024(0x400)
Max Waves Per CU:        32(0x20)
Max Work-item Per CU:    1024(0x400)
Grid Max Size:           4294967295(0xffffffff)
Grid Max Size per Dimension:
x                        4294967295(0xffffffff)
y                        4294967295(0xffffffff)
z                        4294967295(0xffffffff)
Max fbarriers/Workgrp:   32
Packet Processor uCode:: 528
SDMA engine uCode::      19
IOMMU Support::          None
Pool Info:
Pool 1
Segment:                 GLOBAL; FLAGS: COARSE GRAINED
Size:                    25149440(0x17fc000) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       FALSE
Pool 2
Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size:                    25149440(0x17fc000) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       FALSE
Pool 3
Segment:                 GROUP
Size:                    64(0x40) KB
Allocatable:             FALSE
Alloc Granule:           0KB
Alloc Alignment:         0KB
Accessible by all:       FALSE
ISA Info:
ISA 1
Name:                    amdgcn-amd-amdhsa--gfx1100
Machine Models:          HSA_MACHINE_MODEL_LARGE
Profiles:                HSA_PROFILE_BASE
Default Rounding Mode:   NEAR
Default Rounding Mode:   NEAR
Fast f16:                TRUE
Workgroup Max Size:      1024(0x400)
Workgroup Max Size per Dimension:
x                        1024(0x400)
y                        1024(0x400)
z                        1024(0x400)
Grid Max Size:           4294967295(0xffffffff)
Grid Max Size per Dimension:
x                        4294967295(0xffffffff)
y                        4294967295(0xffffffff)
z                        4294967295(0xffffffff)
FBarrier Max Size:       32
*******
Agent 3
*******
Name:                    gfx1036
Uuid:                    GPU-XX
Marketing Name:          AMD Radeon Graphics
Vendor Name:             AMD
Feature:                 KERNEL_DISPATCH
Profile:                 BASE_PROFILE
Float Round Mode:        NEAR
Max Queue Number:        128(0x80)
Queue Min Size:          64(0x40)
Queue Max Size:          131072(0x20000)
Queue Type:              MULTI
Node:                    2
Device Type:             GPU
Cache Info:
L1:                      16(0x10) KB
L2:                      256(0x100) KB
Chip ID:                 5710(0x164e)
ASIC Revision:           1(0x1)
Cacheline Size:          64(0x40)
Max Clock Freq. (MHz):   2200
BDFID:                   5376
Internal Node ID:        2
Compute Unit:            2
SIMDs per CU:            2
Shader Engines:          1
Shader Arrs. per Eng.:   1
WatchPts on Addr. Ranges:4
Coherent Host Access:    FALSE
Features:                KERNEL_DISPATCH
Fast F16 Operation:      TRUE
Wavefront Size:          32(0x20)
Workgroup Max Size:      1024(0x400)
Workgroup Max Size per Dimension:
x                        1024(0x400)
y                        1024(0x400)
z                        1024(0x400)
Max Waves Per CU:        32(0x20)
Max Work-item Per CU:    1024(0x400)
Grid Max Size:           4294967295(0xffffffff)
Grid Max Size per Dimension:
x                        4294967295(0xffffffff)
y                        4294967295(0xffffffff)
z                        4294967295(0xffffffff)
Max fbarriers/Workgrp:   32
Packet Processor uCode:: 20
SDMA engine uCode::      8
IOMMU Support::          None
Pool Info:
Pool 1
Segment:                 GLOBAL; FLAGS: COARSE GRAINED
Size:                    65536(0x10000) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       FALSE
Pool 2
Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size:                    65536(0x10000) KB
Allocatable:             TRUE
Alloc Granule:           4KB
Alloc Alignment:         4KB
Accessible by all:       FALSE
Pool 3
Segment:                 GROUP
Size:                    64(0x40) KB
Allocatable:             FALSE
Alloc Granule:           0KB
Alloc Alignment:         0KB
Accessible by all:       FALSE
ISA Info:
ISA 1
Name:                    amdgcn-amd-amdhsa--gfx1036
Machine Models:          HSA_MACHINE_MODEL_LARGE
Profiles:                HSA_PROFILE_BASE
Default Rounding Mode:   NEAR
Default Rounding Mode:   NEAR
Fast f16:                TRUE
Workgroup Max Size:      1024(0x400)
Workgroup Max Size per Dimension:
x                        1024(0x400)
y                        1024(0x400)
z                        1024(0x400)
Grid Max Size:           4294967295(0xffffffff)
Grid Max Size per Dimension:
x                        4294967295(0xffffffff)
y                        4294967295(0xffffffff)
z                        4294967295(0xffffffff)
FBarrier Max Size:       32
*** Done ***
GiteaMirror added the amd label 2026-04-28 05:48:43 -05:00

@remy415 commented on GitHub (Feb 14, 2024):

time=2024-02-08T13:52:58.289Z level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu_avx2 cpu cpu_avx]"

The libraries reported by this line are the libraries that were packaged into llama.cpp at build time. According to the troubleshooting documentation at troubleshooting.md (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md), you should see something like this:

Dynamic LLM libraries [rocm_v6 cpu cpu_avx cpu_avx2 cuda_v11 rocm_v5]

Note that these are the libraries packaged into llama.cpp, so this line only reports what the bundled llama.cpp supports, not what was detected on your system.

I don't personally have a Radeon system, so I can't test anything. But what you could do is build from source and see if that binary detects and loads the ROCm dynamic LLM libraries. Follow the instructions in development.md (https://github.com/ollama/ollama/blob/main/docs/development.md) and check the section Linux ROCm (AMD).
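
A sketch of that suggestion, assuming the build steps from development.md at the time (exact commands may differ by version, and the environment variable is optional):

$ export AMDGPU_TARGETS=gfx1100   # limit the ROCm build to this GPU's target
$ go generate ./...
$ go build .
$ OLLAMA_DEBUG=1 ./ollama serve   # startup should now list rocm_v6 (or rocm_v5) among the Dynamic LLM libraries

If the ROCm library is extracted but not selected, the "Override detection logic" debug line above suggests forcing it with OLLAMA_LLM_LIBRARY=rocm_v6 ollama serve.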


@sid-cypher commented on GitHub (Feb 18, 2024):

I also have a Radeon RX 7900 XTX. I've compiled ollama with export AMDGPU_TARGETS=gfx1100 and CLblast_DIR set, all according to development.md, but ollama fails to detect the GPU, with a contradiction between source=gpu.go:109 msg="Radeon GPU detected" and source=routes.go:1037 msg="no GPU detected".

This may not be the same issue as the original poster's, but the issue title fits the problem perfectly.

More log:

time=2024-02-18T21:47:18.033+01:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-02-18T21:47:18.034+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-18T21:47:18.192+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cpu rocm_v6]"
time=2024-02-18T21:47:18.192+01:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-18T21:47:18.192+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-18T21:47:18.192+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
...
time=2024-02-18T21:47:18.195+01:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-18T21:47:18.195+01:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /home/sid/labs/ollama/librocm_smi64.so*]"
time=2024-02-18T21:47:18.195+01:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]"
wiring rocm management library functions in /opt/rocm/lib/librocm_smi64.so.6.0.60000
dlsym: rsmi_init
...
dlsym: rsmi_dev_vbios_version_get
time=2024-02-18T21:47:18.198+01:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-18T21:47:18.198+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-18T21:47:18.198+01:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/18 - 21:50:22 | 200 |       55.43µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/18 - 21:50:22 | 200 |     235.119µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/18 - 21:50:22 | 200 |      132.08µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-18T21:50:22.340+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-18T21:50:22.340+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-18T21:50:22.340+01:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"

I've successfully compiled and used llama.cpp with all model layers in VRAM and full acceleration (sketched below), so it's not a ROCm issue. I want to keep trying to make ollama work, too.
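
For comparison, the llama.cpp test described here would have looked roughly like this (build flag and options as of llama.cpp in early 2024; the model path is a placeholder):

$ make LLAMA_HIPBLAS=1                                  # HIP/ROCm build of llama.cpp
$ ./main -m mistral-7b-v0.1.Q4_0.gguf -ngl 33 -p "Hi"   # -ngl 33 offloads all 33 layers to VRAM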

Author
Owner

@sid-cypher commented on GitHub (Feb 18, 2024):

I've compiled main at commit 1e23e82 with some added print statements. The GPU was detected but still not used: the logs say I'm missing libnuma.so.1, yet APT says "libnuma-dev is already the newest version (2.0.14-3ubuntu2)." I also wonder why the /sys/module/amdgpu/version file is missing. Maybe an upgrade from ROCm 6.0.0 to 6.0.2 will help.

time=2024-02-18T22:35:53.351+01:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-18T22:35:53.351+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-18T22:35:53.351+01:00 level=INFO source=gpu.go:128 msg="calling GetCPUVariant"
time=2024-02-18T22:35:53.351+01:00 level=INFO source=gpu.go:155 msg="calling AMDDriverVersion"
time=2024-02-18T22:35:53.351+01:00 level=DEBUG source=gpu.go:161 msg="error looking up amd driver version: %s" !BADKEY="amdgpu file stat error: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-02-18T22:35:53.352+01:00 level=DEBUG source=amd.go:76 msg="malformed gfx_target_version 0"
discovered 1 ROCm GPU Devices
[0] ROCm device name: 0x744c
[0] ROCm brand: 0x744c
[0] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm VRAM vendor: samsung
[0] ROCm S/N: 76b41436a4525095
[0] ROCm subsystem name: 0x471e
[0] ROCm vbios version: 113-3E4710U-O4X
[0] ROCm totalMem 25753026560
[0] ROCm usedMem 815833088
time=2024-02-18T22:35:53.354+01:00 level=DEBUG source=gpu.go:257 msg="rocm detected 1 devices with 21403M available memory"

time=2024-02-18T22:37:45.233+01:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/run/user/1000/ollama3038918786/rocm_v6/libext_server.so /run/user/1000/ollama3038918786/cpu_avx2/libext_server.so]"
loading library /run/user/1000/ollama3038918786/rocm_v6/libext_server.so
time=2024-02-18T22:37:45.233+01:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library /run/user/1000/ollama3038918786/rocm_v6/libext_server.so  Unable to load dynamic library: Unable to load dynamic server library: libnuma.so.1: cannot open shared object file: No such file or directory"
loading library /run/user/1000/ollama3038918786/cpu_avx2/libext_server.so
time=2024-02-18T22:37:45.233+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /run/user/1000/ollama3038918786/cpu_avx2/libext_server.so"
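
For anyone debugging the same libnuma error: the message comes from the dynamic loader, so the first thing worth confirming is whether libnuma.so.1 is actually visible to it. On Debian/Ubuntu the runtime library ships in the libnuma1 package (libnuma-dev normally depends on it, and itself only adds headers and the unversioned symlink). A quick check, assuming a Debian-based system:

ldconfig -p | grep libnuma              # should list libnuma.so.1 if the loader can find it
sudo apt install --reinstall libnuma1   # reinstall the runtime library if it is absent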

@remy415 commented on GitHub (Feb 18, 2024):

> dlsym: rsmi_init
> ...
> dlsym: rsmi_dev_vbios_version_get

It looks like it initializes, then fails on the function after the vbios version get, which I think is the VRAM check. Try pulling the latest version, make sure you run go clean, and try rebuilding again.
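
For reference, a minimal rebuild sequence along those lines (a sketch assuming a checkout in ~/labs/ollama and the gfx1100 target from the earlier comment; go generate ./... is the step from development.md that rebuilds the bundled llama.cpp runners):

cd ~/labs/ollama
git pull
go clean -cache               # drop stale build artifacts
export AMDGPU_TARGETS=gfx1100
go generate ./...             # regenerate/compile the llama.cpp runners
go build .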

@sid-cypher commented on GitHub (Feb 19, 2024):

Thank you, @remy415 - I've been able to solve all issues.
@haplo - I could help you build and test, we have the same GPU, but different CPU.

  • rebuilding amdgpu-dkms after upgrading ROCm to 6.0.2 has given me /sys/module/amdgpu/version
  • switching the CMake to Kitware APT repo version has fixed my dynamic library loading errors in libext_server.so
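
Roughly, those two fixes correspond to something like the following (a sketch for an Ubuntu system with AMD's repos; the package names are assumptions, adjust per distro):

sudo apt install --reinstall amdgpu-dkms   # rebuild the out-of-tree driver against ROCm 6.0.2
cat /sys/module/amdgpu/version             # present once the rebuilt module is loaded
sudo apt install cmake                     # after switching to the Kitware APT repo (https://apt.kitware.com)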

Resulting log:

time=2024-02-19T01:01:04.672+01:00 level=INFO source=gpu.go:158 msg="AMD Driver: 6.3.6"
time=2024-02-19T01:01:04.672+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama3303004989/rocm_v6/libext_server.so
time=2024-02-19T01:01:04.777+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3303004989/rocm_v6/libext_server.so"
time=2024-02-19T01:01:04.777+01:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
  ...
  llm_load_tensors: offloaded 41/41 layers to GPU

and the model indeed runs fast.

Conclusion:
The precompiled version I've downloaded did not work, but building the latest commit 1e23e82 from main with up-to-date drivers and tools did work, and the GPU offloading works for my discrete AMD GPU. Happy times ❤️

@TimTheBig commented on GitHub (Feb 22, 2024):

v0.1.26 and v0.1.25 still do not use the GPU (7900 XTX) on Linux when I use the install script. https://github.com/ollama/ollama/issues/2685

@DocMAX commented on GitHub (Feb 24, 2024):

@sid-cypher I have no /sys/module/amdgpu/version

@sid-cypher commented on GitHub (Feb 24, 2024):

@DocMAX As I stated, rebuilding amdgpu-dkms (after upgrading ROCm to 6.0.2) gave me a driver build with the /sys/module/amdgpu/version interface present. It provides nothing but the version string ("6.3.6" in my case) and should only be needed for reporting the detected AMD GPU driver version.
Feel free to contact me (sid_cypher) on ollama Discord if you want help with this minor issue.
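
(Side note for anyone checking their own setup: /sys/module/amdgpu/version only exists when the loaded module defines a MODULE_VERSION, which the DKMS/out-of-tree amdgpu does and the in-kernel driver does not. modinfo reads the same field from the module file on disk:)

modinfo amdgpu | grep '^version'   # set for DKMS builds, empty for the in-tree driver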

@dhiltgen commented on GitHub (Mar 11, 2024):

We've revamped a number of algorithms related to Radeon cards for the latest preview release - 0.1.29. Please give it a try and let us know if it works properly on your setup.

If you want to use our install script, be aware you'll need to use one from my branch until we merge a PR and mark 0.1.29 latest.

curl -fsSL https://raw.githubusercontent.com/dhiltgen/ollama/rocm_install/scripts/install.sh  | OLLAMA_VERSION="0.1.29" sh

@haplo commented on GitHub (Mar 12, 2024):

Thank you for your work on this!

I built ollama from the v0.1.29 tag and now it's using the GPU, but it fails on inference (when warming up the model):

time=2024-03-12T14:50:16.262Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T14:50:16.262Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T14:50:16.262Z level=INFO source=amd_linux.go:235 msg="[0] amdgpu totalMemory 66982842368"
time=2024-03-12T14:50:16.262Z level=INFO source=amd_linux.go:236 msg="[0] amdgpu freeMemory  66982842368"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T14:50:16.263Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T14:50:16.263Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:235 msg="[0] amdgpu totalMemory 66982842368"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:236 msg="[0] amdgpu freeMemory  66982842368"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T14:50:16.263Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T14:50:16.263Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama2884622770/runners/rocm/libext_server.so
time=2024-03-12T14:50:16.288Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2884622770/runners/rocm/libext_server.so"
time=2024-03-12T14:50:16.288Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Device 1: AMD Radeon Graphics, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /var/lib/ollama/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name     = mistralai
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  3847.55 MiB
llm_load_tensors:        CPU buffer size =    70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    13.02 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   164.00 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =     0.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
CUDA error: shared object initialization failed
current device: 0, in function ggml_cuda_op_flatten at /home/fidel/Code/ollama/llm/llama.cpp/ggml-cuda.cu:10110
hipGetLastError()
GGML_ASSERT: /home/fidel/Code/ollama/llm/llama.cpp/ggml-cuda.cu:256: !"CUDA error"
ptrace: Operation not permitted.
No stack.
The program is not being run.
SIGABRT: abort
PC=0x75fbdbcab32c m=5 sigcode=18446744073709551610
signal arrived during cgo execution

I'm using the built-in amdgpu driver. I know the amdgpu-pro driver is recommended, but I would rather use the open-source drivers unless there is no helping it.

ROCm and related libraries are installed from Arch Linux official repositories at their latest versions.

@remy415 commented on GitHub (Mar 12, 2024):

@haplo Did you ensure the user assigned to your systemd service has been added to the render group? Check with the groups command to verify membership, or just re-add it with sudo usermod -a -G render <SYSTEMD SERVICE USERNAME>, i.e. sudo usermod -a -G render ollama

@haplo commented on GitHub (Mar 12, 2024):

@remy415 I originally installed ollama from Arch Linux's repository, and it created an ollama user, but it turns out it didn't add it to the render group:

$ groups ollama
ollama
$ sudo usermod -a -G render ollama
$ groups ollama
render ollama

Same error though when running sudo -u ollama ./ollama serve.

@remy415 commented on GitHub (Mar 12, 2024):

If you haven't done this yet, adding users to a new group does require reloading the environment, best accomplished through a reboot.
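
(For a systemd service specifically, a full reboot shouldn't be needed: supplementary groups are resolved fresh each time the unit starts, so restarting the service is usually enough. A quick check, assuming the unit and process are named ollama:)

sudo systemctl restart ollama
grep Groups /proc/$(pgrep -o -x ollama)/status   # the render group's GID should now be listed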

@dhiltgen commented on GitHub (Mar 12, 2024):

@haplo is the second GPU an iGPU by any chance?

Device 1: AMD Radeon Graphics, compute capability 10.3, VMM: no

I've added code to detect and skip those because it will result in this sort of error until we add proper support for iGPUs, but there's probably a bug in the detection logic.

Could you share the output of cat /sys/class/kfd/kfd/topology/nodes/*/properties (if that is an iGPU) so I can see what it looks like?

Until we fix that, you should be able to set HIP_VISIBLE_DEVICES=0 and get the system to ignore the second GPU.
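
If ollama runs under systemd, one way to pass that variable through is a drop-in override (a sketch; the unit name ollama is assumed):

sudo systemctl edit ollama
# in the editor that opens, add:
#   [Service]
#   Environment="HIP_VISIBLE_DEVICES=0"
sudo systemctl restart ollama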

@haplo commented on GitHub (Mar 12, 2024):

@dhiltgen Yes, the second is an iGPU (Gigabyte B650M Gaming X AX motherboard):

$ cat /sys/class/kfd/kfd/topology/nodes/*/properties
cpu_cores_count 24
simd_count 0
mem_banks_count 1
caches_count 0
io_links_count 2
p2p_links_count 0
cpu_core_id_base 0
simd_id_base 0
max_waves_per_simd 0
lds_size_in_kb 0
gds_size_in_kb 0
num_gws 0
wave_front_size 0
array_count 0
simd_arrays_per_engine 0
cu_per_simd_array 0
simd_per_cu 0
max_slots_scratch_cu 0
gfx_target_version 0
vendor_id 0
device_id 0
location_id 0
domain 0
drm_render_minor 0
hive_id 0
num_sdma_engines 0
num_sdma_xgmi_engines 0
num_sdma_queues_per_engine 0
num_cp_queues 0
max_engine_clk_ccompute 5482
cpu_cores_count 0
simd_count 192
mem_banks_count 1
caches_count 206
io_links_count 1
p2p_links_count 0
cpu_core_id_base 0
simd_id_base 2147487744
max_waves_per_simd 16
lds_size_in_kb 64
gds_size_in_kb 0
num_gws 64
wave_front_size 32
array_count 12
simd_arrays_per_engine 2
cu_per_simd_array 8
simd_per_cu 2
max_slots_scratch_cu 32
gfx_target_version 110000
vendor_id 4098
device_id 29772
location_id 768
domain 0
drm_render_minor 128
hive_id 0
num_sdma_engines 2
num_sdma_xgmi_engines 0
num_sdma_queues_per_engine 6
num_cp_queues 8
max_engine_clk_fcompute 2371
local_mem_size 0
fw_version 550
capability 671588992
debug_prop 1495
sdma_fw_version 19
unique_id 10266574693915471560
num_xcc 1
max_engine_clk_ccompute 5482
cpu_cores_count 0
simd_count 4
mem_banks_count 1
caches_count 6
io_links_count 1
p2p_links_count 0
cpu_core_id_base 0
simd_id_base 2147487840
max_waves_per_simd 16
lds_size_in_kb 64
gds_size_in_kb 0
num_gws 0
wave_front_size 32
array_count 1
simd_arrays_per_engine 1
cu_per_simd_array 2
simd_per_cu 2
max_slots_scratch_cu 32
gfx_target_version 100306
vendor_id 4098
device_id 5710
location_id 5376
domain 0
drm_render_minor 129
hive_id 0
num_sdma_engines 1
num_sdma_xgmi_engines 0
num_sdma_queues_per_engine 2
num_cp_queues 8
max_engine_clk_fcompute 2200
local_mem_size 0
fw_version 20
capability 675521152
debug_prop 1495
sdma_fw_version 9
unique_id 0
num_xcc 1
max_engine_clk_ccompute 5482

From lspci -v:

15:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raphael (rev c4) (prog-if 00 [VGA controller])
Subsystem: Gigabyte Technology Co., Ltd Raphael
Flags: bus master, fast devsel, latency 0, IRQ 97, IOMMU group 32
Memory at f820000000 (64-bit, prefetchable) [size=256M]
Memory at f830000000 (64-bit, prefetchable) [size=2M]
I/O ports at e000 [size=256]
Memory at f6600000 (32-bit, non-prefetchable) [size=512K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Capabilities: [64] Express Legacy Endpoint, IntMsgNum 0
Capabilities: [a0] MSI: Enable- Count=1/4 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable+ Count=4 Masked-
Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [270] Secondary PCI Express
Capabilities: [2a0] Access Control Services
Capabilities: [2b0] Address Translation Service (ATS)
Capabilities: [2c0] Page Request Interface (PRI)
Capabilities: [2d0] Process Address Space ID (PASID)
Capabilities: [410] Physical Layer 16.0 GT/s <?>
Capabilities: [450] Lane Margining at the Receiver
Kernel driver in use: amdgpu
Kernel modules: amdgpu

The iGPU is completely unused at the moment; the dedicated GPU is driving the monitor and everything else. This means the iGPU has very little RAM dedicated to it.

> Until we fix that, you should be able to set HIP_VISIBLE_DEVICES=0 and get the system to ignore the second GPU.

After setting HIP_VISIBLE_DEVICES=0 ollama still finds both GPUs:

time=2024-03-12T22:22:23.982Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T22:22:23.982Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:235 msg="[0] amdgpu totalMemory 66982842368"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:236 msg="[0] amdgpu freeMemory  66982842368"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T22:22:23.982Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T22:22:23.982Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:235 msg="[0] amdgpu totalMemory 66982842368"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:236 msg="[0] amdgpu freeMemory  66982842368"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T22:22:23.982Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T22:22:23.982Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama2460222419/runners/rocm/libext_server.so
time=2024-03-12T22:22:24.008Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2460222419/runners/rocm/libext_server.so"
time=2024-03-12T22:22:24.008Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0, VMM: no
Device 1: AMD Radeon Graphics, compute capability 10.3, VMM: no

Then it fails with the same error as before when it tries to warm up the model. I will reboot tomorrow as @remy415 suggested and report back.

@dhiltgen commented on GitHub (Mar 12, 2024):

Thanks for the details @haplo!

I'm working on a fix, and we'll aim to get this into the final 0.1.29 release.

@haplo commented on GitHub (Mar 12, 2024):

Oh, ollama 0.1.29 just landed in Arch Linux. I upgraded, and with it the AMD GPUs are not detected; the CPU is used instead.

The rocm library is not detected in Arch's ollama:

time=2024-03-12T22:32:29.530Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx2 cpu_avx cpu]"

Should I file a bug with Arch? Seems like a package build issue.

@dhiltgen commented on GitHub (Mar 12, 2024):

0.1.29 is in pre-release right now; we haven't marked it final as we hammer out some remaining bugs.

@haplo commented on GitHub (Mar 12, 2024):

> Thanks for the details @haplo!
>
> I'm working on a fix, and we'll aim to get this into the final 0.1.29 release.

Thank you for your work on this, and do let me know if I can be of help in any way.

> 0.1.29 is in pre-release right now; we haven't marked it final as we hammer out some remaining bugs.

I don't know if it's an automated process, but Arch's ollama is at 0.1.29.

[image: Arch Linux package page showing ollama at 0.1.29]

Maybe they are checking for tags but not checking whether the GitHub release is marked as a pre-release. I will open a bug with Arch about that.

@dhiltgen commented on GitHub (Mar 12, 2024):

One quick update - I was wrong about HIP_VISIBLE_DEVICES=0. For your setup, I believe HIP_VISIBLE_DEVICES=1 should work if it's passed to the server (0 is the CPU, 1 is your discrete GPU, and 2 is the iGPU). Could you try that, and also set OLLAMA_DEBUG=1 so we can see a little more detail on the discovery?
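
One note on passing these when launching by hand: sudo resets the environment by default, so variables exported in your shell won't reach the target user unless they appear on the sudo command line itself (a sketch, matching the sudo -u ollama invocation used earlier in the thread):

sudo -u ollama HIP_VISIBLE_DEVICES=1 OLLAMA_DEBUG=1 ./ollama serve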

@haplo commented on GitHub (Mar 12, 2024):

@dhiltgen I messed up the HIP_VISIBLE_DEVICES=0 export because of how I use sudo -u ollama. After fixing that the GPU inference now works! 🎉

The Device 1 (iGPU) now doesn't appear.

One thing I noticed is that ollama reports RAM+VRAM as available memory:

time=2024-03-12T22:52:34.570Z level=DEBUG source=amd_linux.go:168 msg="discovering amdgpu devices [0]"
time=2024-03-12T22:52:34.570Z level=INFO source=amd_linux.go:235 msg="[0] amdgpu totalMemory 66982842368"
time=2024-03-12T22:52:34.570Z level=INFO source=amd_linux.go:236 msg="[0] amdgpu freeMemory  66982842368"
time=2024-03-12T22:52:34.570Z level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 57491M available memory"

@dhiltgen commented on GitHub (Mar 12, 2024):

> One thing I noticed is that ollama reports RAM+VRAM as available memory:

Since device 0 is your CPU, I think that's system memory being reported by the amdgpu driver. Definitely a bug in the algo. I'll make sure to find and squash that as part of my fix.

Actually, can you retry with HIP_VISIBLE_DEVICES=1, not zero? I think that should stop the system memory from being reported.

@haplo commented on GitHub (Mar 12, 2024):

> Actually, can you retry with HIP_VISIBLE_DEVICES=1, not zero? I think that should stop the system memory from being reported.

With HIP_VISIBLE_DEVICES=1 the memory is fixed as you expected:

time=2024-03-12T23:14:01.085Z level=INFO source=routes.go:1082 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-03-12T23:14:01.085Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2537978927/runners ..."
time=2024-03-12T23:14:01.254Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx cpu rocm]"
time=2024-03-12T23:14:01.254Z level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-12T23:14:01.254Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-12T23:14:01.254Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-12T23:14:01.254Z level=DEBUG source=gpu.go:209 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /home/fidel/Code/ollama/libnvidia-ml.so*]"
time=2024-03-12T23:14:01.277Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-12T23:14:01.277Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T23:14:01.277Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T23:14:01.277Z level=DEBUG source=amd_linux.go:168 msg="discovering amdgpu devices [1]"
time=2024-03-12T23:14:01.277Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T23:14:01.277Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T23:14:01.277Z level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 22104M available memory"

But inference fails as the iGPU is used:

time=2024-03-12T23:14:08.430Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T23:14:08.430Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T23:14:08.430Z level=DEBUG source=amd_linux.go:168 msg="discovering amdgpu devices [1]"
time=2024-03-12T23:14:08.430Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T23:14:08.430Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T23:14:08.430Z level=DEBUG source=gpu.go:180 msg="rocm detected 1 devices with 22104M available memory"
time=2024-03-12T23:14:08.430Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T23:14:08.430Z level=WARN source=amd_linux.go:50 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-03-12T23:14:08.430Z level=DEBUG source=amd_linux.go:168 msg="discovering amdgpu devices [1]"
time=2024-03-12T23:14:08.430Z level=INFO source=amd_linux.go:235 msg="[1] amdgpu totalMemory 25753026560"
time=2024-03-12T23:14:08.430Z level=INFO source=amd_linux.go:236 msg="[1] amdgpu freeMemory  25753026560"
time=2024-03-12T23:14:08.430Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T23:14:08.430Z level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama2537978927/runners/rocm/libext_server.so]"
loading library /tmp/ollama2537978927/runners/rocm/libext_server.so
time=2024-03-12T23:14:08.526Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama2537978927/runners/rocm/libext_server.so"
time=2024-03-12T23:14:08.526Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
time=2024-03-12T23:14:08.526Z level=DEBUG source=dyn_ext_server.go:151 msg="server params: {model:0x76a8540f5dd0 n_ctx:2048 n_batch:512 n_threads:0 n_parallel:1 rope_freq_base:0 rope_freq_scale:0 memory_f16:true n_gpu_layers:41 main_gpu:0 use_mlock:false use_mmap:true numa:0 embedding:true lora_adapters:<nil> mmproj:<nil> verbose_logging:true _:[0 0 0 0 0 0 0]}"
[1710285248] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
[1710285248] Performing pre-initialization of GPU

rocBLAS error: Cannot read /opt/rocm/lib/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1036
List of available TensileLibrary Files :
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1101.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx1102.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx940.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx941.dat"
"/opt/rocm/lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"

@dhiltgen commented on GitHub (Mar 12, 2024):

Ugh. So the HIP library is ignoring the CPU node and there's an off-by-one glitch here in my calculations. Thanks!
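
A quick way to see the mismatch described above on your own machine — the sysfs paths are standard ROCm/KFD locations, but the exact node ordering shown in the comments is an assumption:

```
# The KFD topology gives the CPU its own node, so a GPU that appears as
# node 1 here can be device 0 once HIP has skipped the CPU node.
for n in /sys/class/kfd/kfd/topology/nodes/*; do
  echo "$n: $(cat "$n/name" 2>/dev/null)"
done
# Compare against the agents the HSA runtime actually enumerates:
rocminfo | grep -E 'Agent|Name'
```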

<!-- gh-comment-id:1992727693 --> @dhiltgen commented on GitHub (Mar 12, 2024): Ugh. So the HIP library is ignoring the CPU node and there's an off-by-one glitch here in my calculations. Thanks!
Author
Owner

@haplo commented on GitHub (Mar 12, 2024):

@dhiltgen I consider this bug fixed on my end; any issues with Arch's ollama package I will file with them.

Thanks again for your support!

<!-- gh-comment-id:1992736358 --> @haplo commented on GitHub (Mar 12, 2024): @dhiltgen I consider this bug fixed on my end, any issues with Arch's ollama package I will file with them. Thanks again for your support!
Author
Owner

@xyproto commented on GitHub (Apr 29, 2024):

I just packaged `ollama-rocm` for Arch Linux; please test.

If there are issues with the packaging, it can be reported here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues
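
If you want to try it, something like the following should work on an up-to-date Arch system (the model name is just an example, and the restart assumes the package ships the usual `ollama` unit):

```
sudo pacman -Syu ollama-rocm   # ROCm-enabled build from the official repos
sudo systemctl restart ollama  # assumes the standard ollama.service unit
ollama run mistral             # then check `journalctl -u ollama` for the rocm runner
```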

<!-- gh-comment-id:2082083582 --> @xyproto commented on GitHub (Apr 29, 2024): I just packaged `ollama-rocm` for Arch Linux, please test. If there are issues with the packaging, it can be reported here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues
Author
Owner

@ketsapiwiq commented on GitHub (Apr 29, 2024):

> I just packaged `ollama-rocm` for Arch Linux; please test.
>
> If there are issues with the packaging, it can be reported here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues

Thanks so much @xyproto, it just works!!!

<!-- gh-comment-id:2082639379 --> @ketsapiwiq commented on GitHub (Apr 29, 2024): > I just packaged `ollama-rocm` for Arch Linux, please test. > > If there are issues with the packaging, it can be reported here: https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-rocm/-/issues Thanks so much @xyproto, it just works!!!
Author
Owner

@xyproto commented on GitHub (Apr 29, 2024):

Cool, thanks for testing @ketsapiwiq. 🙂

<!-- gh-comment-id:2082657775 --> @xyproto commented on GitHub (Apr 29, 2024): Cool, thanks for testing @ketsapiwiq. 🙂
Sign in to join this conversation.
1 Participants
Notifications
Due Date
No due date set.
Dependencies

No dependencies set.
