[GH-ISSUE #2502] Ollama fails to detect gpu on prerelease 0.1.25 #63501

Closed
opened 2026-05-03 13:51:29 -05:00 by GiteaMirror · 22 comments
Owner

Originally created by @abysssol on GitHub (Feb 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2502

Originally assigned to: @dhiltgen on GitHub.

I'm working to update the ollama package in [nixpkgs](https://github.com/NixOS/nixpkgs). Release 0.1.24 works as expected ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.24), [build here](https://github.com/abysssol/ollama-flake/tree/1.4.1)), but the new prerelease 0.1.25 fails to detect the GPU ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.25), [build here](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25)). It seems to build correctly, and it detects the GPU management library `librocm_smi64.so.5.0`, but it then fails to use it, logging `no GPU detected`. I don't know whether this is ROCm-specific or whether CUDA is affected too, since I only have an AMD GPU.

Unfortunately, I'm not familiar enough with ollama's internals to guess what could be going wrong. Hopefully these logs, captured with `OLLAMA_DEBUG=1`, are helpful.
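For anyone reproducing this: a sketch of how such debug logs can be captured (assuming `ollama` is on PATH; under NixOS the actual service setup may differ). Both environment variables appear in the logs below.

```shell
# OLLAMA_DEBUG=1 turns on the DEBUG-level GPU-discovery lines seen below.
OLLAMA_DEBUG=1 ollama serve 2> ollama-debug.log

# The log also hints at a workaround: force a specific backend, e.g.
#   OLLAMA_LLM_LIBRARY=rocm ollama serve
# (valid values are the "Dynamic LLM libraries" listed at startup).
```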

#### The server log from [0.1.25](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25); and [download log](https://github.com/ollama/ollama/files/14314105/debug-0.1.25.log).

time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:706 msg="total blobs: 10"
time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-16T12:17:49.132-05:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v12 cpu_avx rocm cpu cpu_avx2]"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:320 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:17:49.151-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:17:49.153-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/16 - 12:17:51 | 200 |      23.353µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:17:51 | 200 |     314.341µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:17:51 | 200 |     166.067µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-16T12:17:51.163-05:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama81314947/cpu_avx2/libext_server.so]"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama81314947/cpu_avx2/libext_server.so"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708103871] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.38 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    13.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   180.03 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708103872] warming up the model with an empty run
[1708103872] Available slots:
[1708103872]  -> Slot 0 - max context: 2048
time=2024-02-16T12:17:52.570-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708103872] llama server main loop starting
[1708103872] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:17:52.571-05:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=27 window=2048
[GIN] 2024/02/16 - 12:17:52 | 200 |  1.483697709s |       127.0.0.1 | POST     "/api/chat"
[1708103881] 
initiating shutdown - draining remaining tasks...
[1708103881] 
llama server shutting down
[1708103881] llama server shutdown complete

#### The server log from 0.1.24; and download log.

time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:863 msg="total blobs: 10"
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-16T12:59:32.207-05:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu cpu_avx rocm cpu_avx2 cuda_v12]"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:59:32.208-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:300 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:59:32.225-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:59:32.227-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:59:32.227-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:32.228-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:32.228-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
[GIN] 2024/02/16 - 12:59:35 | 200 |      22.091µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:59:35 | 200 |      372.13µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:59:35 | 200 |     471.904µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:59:35.210-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1279410782/rocm/libext_server.so"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708106375] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
[1708106375] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.76 MiB
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 12521.50 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   128.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size =   128.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    12.01 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   211.21 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   198.03 MiB
llama_new_context_with_model: graph splits (measure): 5
[1708106378] warming up the model with an empty run
[1708106378] Available slots:
[1708106378]  -> Slot 0 - max context: 2048
time=2024-02-16T12:59:38.397-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708106378] llama server main loop starting
[1708106378] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:59:38.397-05:00 level=DEBUG source=routes.go:1165 msg="chat handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\n<|im_end|>\n<|im_start|>assistant\n"
[1708106378] slot 0 is processing [task id: 0]
[1708106378] slot 0 : in cache: 0 tokens | to process: 27 tokens
[1708106378] slot 0 : kv cache rm - [0, end)

# ... removed ...

[1708106422] print_timings: prompt eval time =     893.79 ms /    27 tokens (   33.10 ms per token,    30.21 tokens per second)
[1708106422] print_timings:        eval time =   42755.44 ms /   437 runs   (   97.84 ms per token,    10.22 tokens per second)
[1708106422] print_timings:       total time =   43649.23 ms
[1708106422] slot 0 released (464 tokens in cache)
[1708106422] next result cancel on stop
[1708106422] next result removing waiting task ID: 0
[GIN] 2024/02/16 - 13:00:22 | 200 | 46.916460155s |       127.0.0.1 | POST     "/api/chat"
[1708106427] 
initiating shutdown - draining remaining tasks...
[1708106427] 
llama server shutting down
[1708106427] llama server shutdown complete
Originally created by @abysssol on GitHub (Feb 14, 2024). Original GitHub issue: https://github.com/ollama/ollama/issues/2502 Originally assigned to: @dhiltgen on GitHub. I'm working to update the ollama package in [nixpkgs](https://github.com/NixOS/nixpkgs), and release 0.1.24 works as expected ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.24), [build here](https://github.com/abysssol/ollama-flake/tree/1.4.1)), but the new prerelease 0.1.25 fails to detect the gpu ([nix source](https://github.com/abysssol/nixpkgs/tree/update-ollama-0.1.25), [build here](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25)). It seems to build correctly, and it detects the gpu management library `librocm_smi64.so.5.0`, but it then fails to use it, logging `no GPU detected`. I don't know if this is a rocm problem and cuda works right or not, since I only have an amd gpu. Unfortunately, I don't have the familiarity with ollama to have the slightest clue as to what could be going wrong. Hopefully these logs with OLLAMA_DEBUG=1 are somehow helpful, though. <details><summary> #### The server log from [0.1.25](https://github.com/abysssol/ollama-flake/tree/ollama-0.1.25); and [download log](https://github.com/ollama/ollama/files/14314105/debug-0.1.25.log). </summary> ``` time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:706 msg="total blobs: 10" time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0" time=2024-02-16T12:17:46.125-05:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)" time=2024-02-16T12:17:46.125-05:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." 
time=2024-02-16T12:17:49.132-05:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v12 cpu_avx rocm cpu cpu_avx2]"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:320 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:17:49.151-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:17:49.153-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/16 - 12:17:51 | 200 |      23.353µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:17:51 | 200 |     314.341µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:17:51 | 200 |     166.067µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-16T12:17:51.163-05:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama81314947/cpu_avx2/libext_server.so]"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama81314947/cpu_avx2/libext_server.so"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708103871] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.38 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    13.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   180.03 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708103872] warming up the model with an empty run
[1708103872] Available slots:
[1708103872]  -> Slot 0 - max context: 2048
time=2024-02-16T12:17:52.570-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708103872] llama server main loop starting
[1708103872] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:17:52.571-05:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=27 window=2048
[GIN] 2024/02/16 - 12:17:52 | 200 |  1.483697709s |       127.0.0.1 | POST     "/api/chat"
[1708103881] initiating shutdown - draining remaining tasks...
[1708103881] llama server shutting down
[1708103881] llama server shutdown complete
```
</details>

<details><summary>

#### The server log from [0.1.24](https://github.com/abysssol/ollama-flake/tree/1.4.0); and [download log](https://github.com/ollama/ollama/files/14314412/debug-0.1.24.log).

</summary>

```
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:863 msg="total blobs: 10"
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-16T12:59:32.207-05:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu cpu_avx rocm cpu_avx2 cuda_v12]"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:59:32.208-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:300 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:59:32.225-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:59:32.227-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:59:32.227-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:32.228-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:32.228-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
[GIN] 2024/02/16 - 12:59:35 | 200 |      22.091µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:59:35 | 200 |      372.13µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:59:35 | 200 |     471.904µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:59:35.210-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1279410782/rocm/libext_server.so"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708106375] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
[1708106375] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.76 MiB
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 12521.50 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   128.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size =   128.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    12.01 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   211.21 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   198.03 MiB
llama_new_context_with_model: graph splits (measure): 5
[1708106378] warming up the model with an empty run
[1708106378] Available slots:
[1708106378]  -> Slot 0 - max context: 2048
time=2024-02-16T12:59:38.397-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708106378] llama server main loop starting
[1708106378] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:59:38.397-05:00 level=DEBUG source=routes.go:1165 msg="chat handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\n<|im_end|>\n<|im_start|>assistant\n"
[1708106378] slot 0 is processing [task id: 0]
[1708106378] slot 0 : in cache: 0 tokens | to process: 27 tokens
[1708106378] slot 0 : kv cache rm - [0, end)
# ... removed ...
[1708106422] print_timings: prompt eval time =     893.79 ms /    27 tokens (   33.10 ms per token,    30.21 tokens per second)
[1708106422] print_timings:        eval time =   42755.44 ms /   437 runs   (   97.84 ms per token,    10.22 tokens per second)
[1708106422] print_timings:       total time =   43649.23 ms
[1708106422] slot 0 released (464 tokens in cache)
[1708106422] next result cancel on stop
[1708106422] next result removing waiting task ID: 0
[GIN] 2024/02/16 - 13:00:22 | 200 | 46.916460155s |       127.0.0.1 | POST     "/api/chat"
[1708106427] initiating shutdown - draining remaining tasks...
[1708106427] llama server shutting down
[1708106427] llama server shutdown complete
```
</details>
Author
Owner

@stephensrmmartin commented on GitHub (Feb 15, 2024):

Oh, I have been racking my brain about this for the past 48 hours. Let me check out 0.1.24 and see whether I have this problem.


@stephensrmmartin commented on GitHub (Feb 15, 2024):

Confirmed: my 6700 XT does not work under 0.1.25. It says `Radeon GPU detected` and that ROCm is supported, but then reports either `no gpu detected` or `gpu not available`. Switched to 0.1.24, and it works fine.


@dhiltgen commented on GitHub (Feb 15, 2024):

Can you try running the 0.1.25 server with debug enabled so we can see a bit more about why it's no longer detecting the GPU?

OLLAMA_DEBUG=1 ollama serve
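If it helps, the debug run can be captured to a file for attaching here. A minimal sketch, assuming `ollama` is on `PATH` (the log file name is just an example):

```shell
# Run the server with debug logging and keep a copy of everything it prints.
# 2>&1 merges stderr into stdout so the full log lands in the file
# as well as on the terminal.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```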

@abysssol commented on GitHub (Feb 15, 2024):

> Can you try running the 0.1.25 server with debug enabled so we can see a bit more about why it's no longer detecting the GPU?
>
> `OLLAMA_DEBUG=1 ollama serve`

I'm pretty sure I already did, did you not see?

The server log from 0.1.25; download on [pastebin](https://pastebin.com/u87UAq9q):

time=2024-02-14T14:14:21.458-05:00 level=INFO source=images.go:706 msg="total blobs: 6"
time=2024-02-14T14:14:21.459-05:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-14T14:14:21.459-05:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-14T14:14:21.459-05:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-14T14:14:24.485-05:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu cpu_avx2 cuda_v12 rocm cpu_avx]"
time=2024-02-14T14:14:24.485-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-14T14:14:24.485-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-14T14:14:24.485-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
time=2024-02-14T14:14:24.503-05:00 level=INFO source=gpu.go:320 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-14T14:14:24.503-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-14T14:14:24.504-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
time=2024-02-14T14:14:24.505-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-14T14:14:24.505-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-14T14:14:24.505-05:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/14 - 14:14:38 | 200 |      25.047µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/14 - 14:14:38 | 200 |     322.287µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/14 - 14:14:38 | 200 |     153.214µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-14T14:14:38.420-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-14T14:14:38.420-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-14T14:14:38.420-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-14T14:14:38.421-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3709215360/cpu_avx2/libext_server.so"
time=2024-02-14T14:14:38.421-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.38 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    13.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   180.03 MiB
llama_new_context_with_model: graph splits (measure): 1
time=2024-02-14T14:14:39.646-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[GIN] 2024/02/14 - 14:14:39 | 200 |  1.307699706s |       127.0.0.1 | POST     "/api/chat"
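For comparing the two runs side by side, one option is to filter both logs down to just the detection-related lines. A sketch, assuming the logs were saved under the hypothetical names `debug-0.1.24.log` and `debug-0.1.25.log`:

```shell
# Pull the GPU-detection and CPU-fallback lines out of both debug logs so the
# point where 0.1.24 and 0.1.25 diverge is easy to spot.
grep -E 'gpu\.go|no GPU detected|falling back to CPU' debug-0.1.24.log debug-0.1.25.log
```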

@dhiltgen commented on GitHub (Feb 16, 2024):

Those log captures don't seem to have been made with debugging enabled; only INFO messages are showing.

Here's an example run of 0.1.25 on a Radeon RX 7600 test system for reference
OLLAMA_DEBUG=1 ./ollama-linux-amd64 serve
time=2024-02-16T16:46:28.218Z level=INFO source=images.go:706 msg="total blobs: 11"
time=2024-02-16T16:46:28.218Z level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-16T16:46:28.219Z level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-16T16:46:28.219Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-16T16:46:30.884Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v11 cpu_avx rocm_v5 rocm_v6]"
time=2024-02-16T16:46:30.884Z level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T16:46:30.884Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T16:46:30.885Z level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T16:46:30.885Z level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /home/daniel/libnvidia-ml.so*]"
time=2024-02-16T16:46:30.885Z level=INFO source=gpu.go:308 msg="Discovered GPU libraries: []"
time=2024-02-16T16:46:30.885Z level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T16:46:30.885Z level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /home/daniel/librocm_smi64.so*]"
time=2024-02-16T16:46:30.886Z level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60002 /opt/rocm-6.0.2/lib/librocm_smi64.so.6.0.60002]"
wiring rocm management library functions in /opt/rocm/lib/librocm_smi64.so.6.0.60002
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T16:46:30.888Z level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T16:46:30.888Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T16:46:30.888Z level=INFO source=gpu.go:155 msg="AMD Driver: 6.3.6"
time=2024-02-16T16:46:30.888Z level=DEBUG source=amd.go:66 msg="malformed gfx_target_version 0"
discovered 1 ROCm GPU Devices
[0] ROCm device name: Navi 33 [Radeon RX 7700S/7600/7600S/7600M XT/PRO W7600]
[0] ROCm brand: Navi 33 [Radeon RX 7700S/7600/7600S/7600M XT/PRO W7600]
[0] ROCm vendor: Advanced Micro Devices, Inc. [AMD/ATI]
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: RX 7600 Challenger OC
[0] ROCm vbios version: 113-D7451000-0001
[0] ROCm totalMem 8573157376
[0] ROCm usedMem 27176960
time=2024-02-16T16:46:30.890Z level=DEBUG source=gpu.go:251 msg="rocm detected 1 devices with 7126M available memory"
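The discovery sequence in this log (expand a list of glob patterns, dlopen the first candidate that loads, then dlsym each required symbol) can be sketched with ctypes. The patterns and symbol list below are illustrative only, not ollama's actual implementation:

```python
import ctypes
import glob

def probe_library(patterns, symbols):
    """Mimic the management-library discovery shown in the log:
    expand each glob pattern, try to dlopen every match, and keep the
    first library that resolves all required symbols.
    Returns (path, handle) on success, or None."""
    for pattern in patterns:
        for path in sorted(glob.glob(pattern)):
            try:
                lib = ctypes.CDLL(path)  # dlopen
            except OSError:
                continue  # candidate exists but won't load; keep searching
            # hasattr on a CDLL performs a dlsym lookup per symbol
            if all(hasattr(lib, sym) for sym in symbols):
                return path, lib
    return None

# Example: look for rocm_smi the way the search paths above do.
found = probe_library(
    ["/opt/rocm*/lib*/librocm_smi64.so*"],
    ["rsmi_init", "rsmi_shut_down", "rsmi_num_monitor_devices"],
)
print(found[0] if found else "no usable management library found")
```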

@abysssol commented on GitHub (Feb 16, 2024):

Sorry, it seems I must have started a new terminal or something and OLLAMA_DEBUG was unset.
Hopefully this is better.

Here are the logs from the flake for prerelease 0.1.25, and the download log.

time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:706 msg="total blobs: 10"
time=2024-02-16T12:17:46.124-05:00 level=INFO source=images.go:713 msg="total unused blobs removed: 0"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=routes.go:1014 msg="Listening on 127.0.0.1:11434 (version 0.1.25)"
time=2024-02-16T12:17:46.125-05:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-16T12:17:49.132-05:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v12 cpu_avx rocm cpu cpu_avx2]"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:17:49.133-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:17:49.133-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:320 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:262 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:17:49.151-05:00 level=DEBUG source=gpu.go:280 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:17:49.151-05:00 level=INFO source=gpu.go:308 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:17:49.153-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:49.153-05:00 level=INFO source=routes.go:1037 msg="no GPU detected"
[GIN] 2024/02/16 - 12:17:51 | 200 |      23.353µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:17:51 | 200 |     314.341µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:17:51 | 200 |     166.067µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-02-16T12:17:51.163-05:00 level=DEBUG source=payload_common.go:93 msg="ordered list of LLM libraries to try [/tmp/ollama81314947/cpu_avx2/libext_server.so]"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama81314947/cpu_avx2/libext_server.so"
time=2024-02-16T12:17:51.163-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708103871] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | 
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.38 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =    13.01 MiB
llama_new_context_with_model:        CPU compute buffer size =   180.03 MiB
llama_new_context_with_model: graph splits (measure): 1
[1708103872] warming up the model with an empty run
[1708103872] Available slots:
[1708103872]  -> Slot 0 - max context: 2048
time=2024-02-16T12:17:52.570-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708103872] llama server main loop starting
[1708103872] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:17:52.571-05:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=27 window=2048
[GIN] 2024/02/16 - 12:17:52 | 200 |  1.483697709s |       127.0.0.1 | POST     "/api/chat"
[1708103881] 
initiating shutdown - draining remaining tasks...
[1708103881] 
llama server shutting down
[1708103881] llama server shutdown complete
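For what it's worth, the rsmi calls that 0.1.25 wires up above (but then apparently gets nothing useful from before logging "no GPU detected") can be exercised directly. This is a rough ctypes sketch; the symbol names come from the log, the signatures from rocm_smi's public headers, and the library path is just the one this log discovered:

```python
import ctypes

def count_rocm_devices(lib_path):
    """Call rsmi_init/rsmi_num_monitor_devices directly to see how many
    GPUs this particular rocm_smi build reports. Returns the device
    count, or None if the library can't be loaded or initialized."""
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return None
    if lib.rsmi_init(ctypes.c_uint64(0)) != 0:  # 0 == RSMI_STATUS_SUCCESS
        return None
    num = ctypes.c_uint32(0)
    status = lib.rsmi_num_monitor_devices(ctypes.byref(num))
    lib.rsmi_shut_down()
    return num.value if status == 0 else None

# Point this at whichever librocm_smi64 ollama discovered.
print(count_rocm_devices(
    "/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0"))
```

If this reports 0 devices (or fails to init) against the nixpkgs rocm-smi 5.7.1 library while the 0.1.24 setup enumerates 2 devices, that would suggest the problem is in how this particular library build enumerates devices rather than in ollama's search logic.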

And again from 0.1.24, with its download log.

time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:863 msg="total blobs: 10"
time=2024-02-16T12:59:29.210-05:00 level=INFO source=images.go:870 msg="total unused blobs removed: 0"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=routes.go:999 msg="Listening on 127.0.0.1:11434 (version 0.1.24)"
time=2024-02-16T12:59:29.211-05:00 level=INFO source=payload_common.go:106 msg="Extracting dynamic libraries..."
time=2024-02-16T12:59:32.207-05:00 level=INFO source=payload_common.go:145 msg="Dynamic LLM libraries [cpu cpu_avx rocm cpu_avx2 cuda_v12]"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=payload_common.go:146 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-16T12:59:32.207-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-16T12:59:32.207-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/libnvidia-ml.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/libnvidia-ml.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so*]"
time=2024-02-16T12:59:32.208-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06]"
wiring nvidia management library functions in /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06
dlsym: nvmlInit_v2
dlsym: nvmlShutdown
dlsym: nvmlDeviceGetHandleByIndex
dlsym: nvmlDeviceGetMemoryInfo
dlsym: nvmlDeviceGetCount_v2
dlsym: nvmlDeviceGetCudaComputeCapability
dlsym: nvmlSystemGetDriverVersion
dlsym: nvmlDeviceGetName
dlsym: nvmlDeviceGetSerial
dlsym: nvmlDeviceGetVbiosVersion
dlsym: nvmlDeviceGetBoardPartNumber
dlsym: nvmlDeviceGetBrand
nvmlInit_v2 err: 9
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:300 msg="Unable to load CUDA management library /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/libnvidia-ml.so.545.29.06: nvml vram init failure: 9"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:242 msg="Searching for GPU management library librocm_smi64.so"
time=2024-02-16T12:59:32.225-05:00 level=DEBUG source=gpu.go:260 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /nix/store/l7xkh2k5dqbfp1yrckas1r5zrapcd7c5-pipewire-1.0.1-jack/lib/librocm_smi64.so* /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so* /nix/store/z6557r7pgvmxr9x16a4ffazly8dflh65-nvidia-x11-545.29.06-6.1.77/lib/librocm_smi64.so*]"
time=2024-02-16T12:59:32.225-05:00 level=INFO source=gpu.go:288 msg="Discovered GPU libraries: [/nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0]"
wiring rocm management library functions in /nix/store/146j4sd2z0miywgn0ggydfhcc8zmhxad-rocm-smi-5.7.1/lib/librocm_smi64.so.5.0
dlsym: rsmi_init
dlsym: rsmi_shut_down
dlsym: rsmi_dev_memory_total_get
dlsym: rsmi_dev_memory_usage_get
dlsym: rsmi_version_get
dlsym: rsmi_num_monitor_devices
dlsym: rsmi_dev_id_get
dlsym: rsmi_dev_name_get
dlsym: rsmi_dev_brand_get
dlsym: rsmi_dev_vendor_name_get
dlsym: rsmi_dev_vram_vendor_get
dlsym: rsmi_dev_serial_number_get
dlsym: rsmi_dev_subsystem_name_get
dlsym: rsmi_dev_vbios_version_get
time=2024-02-16T12:59:32.227-05:00 level=INFO source=gpu.go:109 msg="Radeon GPU detected"
time=2024-02-16T12:59:32.227-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:32.228-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:32.228-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
[GIN] 2024/02/16 - 12:59:35 | 200 |      22.091µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/16 - 12:59:35 | 200 |      372.13µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/16 - 12:59:35 | 200 |     471.904µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-16T12:59:35.210-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=DEBUG source=gpu.go:231 msg="rocm detected 2 devices with 12975M available memory"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
discovered 2 ROCm GPU Devices
[0] ROCm device name: 0x1002
[0] ROCm brand: 0x1002
[0] ROCm vendor: 0x1002
[0] ROCm VRAM vendor: samsung
rsmi_dev_serial_number_get failed: 2
[0] ROCm subsystem name: 0x1002
[0] ROCm vbios version: 113-V395TRIO-4OC
[0] ROCm totalMem 17163091968
[0] ROCm usedMem 1409773568
[1] ROCm device name: 0x1002
[1] ROCm brand: 0x1002
[1] ROCm vendor: 0x1002
[1] ROCm VRAM vendor: unknown
rsmi_dev_serial_number_get failed: 2
[1] ROCm subsystem name: 0x1002
[1] ROCm vbios version: 102-RAPHAEL-006
[1] ROCm totalMem 536870912
[1] ROCm usedMem 20668416
[1] ROCm integrated GPU
time=2024-02-16T12:59:35.211-05:00 level=INFO source=gpu.go:177 msg="ROCm integrated GPU detected - ROCR_VISIBLE_DEVICES=0"
time=2024-02-16T12:59:35.211-05:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama1279410782/rocm/libext_server.so"
time=2024-02-16T12:59:35.241-05:00 level=INFO source=dyn_ext_server.go:145 msg="Initializing llama server"
[1708106375] system info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
[1708106375] Performing pre-initialization of GPU
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6950 XT, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 995 tensors from /home/abysssol/.ollama/models/blobs/sha256:097a1ff4445ccc1e7668f70b9de3caa60cfc2a8e2cb9da3505b13854f1cfe20f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = cognitivecomputations
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32002]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32002]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32002]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q4_0:  833 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.62 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = cognitivecomputations
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 32000 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.76 MiB
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 12521.50 MiB
llm_load_tensors:        CPU buffer size = 25215.88 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =   128.00 MiB
llama_kv_cache_init:  ROCm_Host KV buffer size =   128.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =    12.01 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   211.21 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   198.03 MiB
llama_new_context_with_model: graph splits (measure): 5
[1708106378] warming up the model with an empty run
[1708106378] Available slots:
[1708106378]  -> Slot 0 - max context: 2048
time=2024-02-16T12:59:38.397-05:00 level=INFO source=dyn_ext_server.go:156 msg="Starting llama main loop"
[1708106378] llama server main loop starting
[1708106378] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:59:38.397-05:00 level=DEBUG source=routes.go:1165 msg="chat handler" prompt="<|im_start|>system\nYou are Dolphin, a helpful AI assistant.\n<|im_end|>\n<|im_start|>user\n<|im_end|>\n<|im_start|>assistant\n"
[1708106378] slot 0 is processing [task id: 0]
[1708106378] slot 0 : in cache: 0 tokens | to process: 27 tokens
[1708106378] slot 0 : kv cache rm - [0, end)

# ... removed ...

[1708106422] print_timings: prompt eval time =     893.79 ms /    27 tokens (   33.10 ms per token,    30.21 tokens per second)
[1708106422] print_timings:        eval time =   42755.44 ms /   437 runs   (   97.84 ms per token,    10.22 tokens per second)
[1708106422] print_timings:       total time =   43649.23 ms
[1708106422] slot 0 released (464 tokens in cache)
[1708106422] next result cancel on stop
[1708106422] next result removing waiting task ID: 0
[GIN] 2024/02/16 - 13:00:22 | 200 | 46.916460155s |       127.0.0.1 | POST     "/api/chat"
[1708106427] 
initiating shutdown - draining remaining tasks...
[1708106427] 
llama server shutting down
[1708106427] llama server shutdown complete

@dhiltgen commented on GitHub (Feb 16, 2024):

Thanks for the output @abysssol. That definitely looks like a regression someplace in the radeon GPU discovery logic.

<!-- gh-comment-id:1949087267 -->

@dhiltgen commented on GitHub (Feb 16, 2024):

Digging into the code, I have a feeling [this commit](https://github.com/ollama/ollama/commit/6d84f07505bdbd72696cbe249f2ae13fdb02a586) is the culprit.

@abysssol can you poke around on your system to see what sysfs path doesn't match what we were expecting?

```go
DriverVersionFile     = "/sys/module/amdgpu/version"
GPUPropertiesFileGlob = "/sys/class/kfd/kfd/topology/nodes/*/properties"
```
<!-- gh-comment-id:1949100324 -->

@abysssol commented on GitHub (Feb 16, 2024):

The problem appears to be that `/sys/module/amdgpu/` doesn't contain `version`.
As far as I can tell, `/sys/class/kfd/kfd/topology/nodes/*/properties` is probably as expected.

Is knowing the version necessary for proper functionality? Could ollama use a more restricted subset of the gpu apis if the version is missing? Maybe there's another way to get the driver version?

File tree of `/sys/module/amdgpu/`

drwxr-xr-x    - root 16 Feb 14:37 /sys/module/amdgpu/
drwxr-xr-x    - root 16 Feb 14:37 ├── drivers/
lrwxrwxrwx    - root 16 Feb 14:37 │  └── pci:amdgpu -> ../../../bus/pci/drivers/amdgpu/
drwxr-xr-x    - root 16 Feb 14:37 ├── holders/
drwxr-xr-x    - root 16 Feb 14:37 ├── notes/
.r--r--r--   64 root 16 Feb 14:37 │  ├── .note.gnu.property
.r--r--r--   48 root 16 Feb 14:37 │  └── .note.Linux
drwxr-xr-x    - root 16 Feb 14:37 ├── parameters/
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── abmlevel
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── aspm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── async_gfx_ring
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── audio
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── backlight
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── bad_page_threshold
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── bapm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── cg_mask
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── cik_support
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── compute_multipipe
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── cwsr_enable
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── dc
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── dcdebugmask
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── dcfeaturemask
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── debug_evictions
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── debug_largebar
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── deep_color
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── disable_cu
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── discovery
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── disp_priority
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── dpm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── emu_mode
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── enforce_isolation
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── exp_hw_support
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── force_asic_type
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── forcelongtraining
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── fw_load_type
.rw------- 4.1k root 16 Feb 14:37 │  ├── gartsize
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── gpu_recovery
.rw------- 4.1k root 16 Feb 14:37 │  ├── gttsize
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── halt_if_hws_hang
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── hw_i2c
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── hws_gws_support
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── hws_max_conc_proc
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── ip_block_mask
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── lbpw
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── lockup_timeout
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── max_num_of_queues_per_device
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── mcbp
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── mes
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── mes_kiq
.rw------- 4.1k root 16 Feb 14:37 │  ├── moverate
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── msi
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── mtype_local
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── no_queue_eviction_on_vm_fault
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── no_system_mem_limit
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── noretry
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── num_kcq
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── pcie_gen2
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── pcie_gen_cap
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── pcie_lane_cap
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── pg_mask
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── ppfeaturemask
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── queue_preemption_timeout_ms
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── ras_enable
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── ras_mask
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── reset_method
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── runpm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── sched_hw_submission
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── sched_jobs
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── sched_policy
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── sdma_phase_quantum
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── send_sigterm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── sg_display
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── si_support
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── smu_memory_pool_size
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── smu_pptable_id
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── timeout_fatal_disable
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── timeout_period
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── tmz
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── use_xgmi_p2p
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── user_partt_mode
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vcnfw_log
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── virtual_display
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vis_vramlimit
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── visualconfirm
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_block_size
.rw-r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_debug
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_fault_stop
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_fragment_size
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_size
.r--r--r-- 4.1k root 16 Feb 14:37 │  ├── vm_update_mode
.rw------- 4.1k root 16 Feb 14:37 │  └── vramlimit
drwxr-xr-x    - root 16 Feb 14:37 ├── sections/
.r--------   19 root 16 Feb 14:37 │  ├── .altinstr_aux
.r--------   19 root 16 Feb 14:37 │  ├── .altinstr_replacement
.r--------   19 root 16 Feb 14:37 │  ├── .altinstructions
.r--------   19 root 16 Feb 14:37 │  ├── .bss
.r--------   19 root 16 Feb 14:37 │  ├── .call_sites
.r--------   19 root 16 Feb 14:37 │  ├── .data
.r--------   19 root 16 Feb 14:37 │  ├── .data..read_mostly
.r--------   19 root 16 Feb 14:37 │  ├── .data.once
.r--------   19 root 16 Feb 14:37 │  ├── .exit.data
.r--------   19 root 16 Feb 14:37 │  ├── .exit.text
.r--------   19 root 16 Feb 14:37 │  ├── .gnu.linkonce.this_module
.r--------   19 root 16 Feb 14:37 │  ├── .ibt_endbr_seal
.r--------   19 root 16 Feb 14:37 │  ├── .init.data
.r--------   19 root 16 Feb 14:37 │  ├── .init.text
.r--------   19 root 16 Feb 14:37 │  ├── .note.gnu.property
.r--------   19 root 16 Feb 14:37 │  ├── .note.Linux
.r--------   19 root 16 Feb 14:37 │  ├── .orc_header
.r--------   19 root 16 Feb 14:37 │  ├── .orc_unwind
.r--------   19 root 16 Feb 14:37 │  ├── .orc_unwind_ip
.r--------   19 root 16 Feb 14:37 │  ├── .parainstructions
.r--------   19 root 16 Feb 14:37 │  ├── .ref.data
.r--------   19 root 16 Feb 14:37 │  ├── .retpoline_sites
.r--------   19 root 16 Feb 14:37 │  ├── .return_sites
.r--------   19 root 16 Feb 14:37 │  ├── .rodata
.r--------   19 root 16 Feb 14:37 │  ├── .rodata.cst4
.r--------   19 root 16 Feb 14:37 │  ├── .rodata.cst8
.r--------   19 root 16 Feb 14:37 │  ├── .rodata.cst16
.r--------   19 root 16 Feb 14:37 │  ├── .rodata.str1.1
.r--------   19 root 16 Feb 14:37 │  ├── .rodata.str1.8
.r--------   19 root 16 Feb 14:37 │  ├── .smp_locks
.r--------   19 root 16 Feb 14:37 │  ├── .static_call.text
.r--------   19 root 16 Feb 14:37 │  ├── .static_call_sites
.r--------   19 root 16 Feb 14:37 │  ├── .strtab
.r--------   19 root 16 Feb 14:37 │  ├── .symtab
.r--------   19 root 16 Feb 14:37 │  ├── .text
.r--------   19 root 16 Feb 14:37 │  ├── .text.unlikely
.r--------   19 root 16 Feb 14:37 │  ├── ___srcu_struct_ptrs
.r--------   19 root 16 Feb 14:37 │  ├── __bpf_raw_tp_map
.r--------   19 root 16 Feb 14:37 │  ├── __bug_table
.r--------   19 root 16 Feb 14:37 │  ├── __dyndbg
.r--------   19 root 16 Feb 14:37 │  ├── __dyndbg_classes
.r--------   19 root 16 Feb 14:37 │  ├── __jump_table
.r--------   19 root 16 Feb 14:37 │  ├── __mcount_loc
.r--------   19 root 16 Feb 14:37 │  ├── __param
.r--------   19 root 16 Feb 14:37 │  ├── __patchable_function_entries
.r--------   19 root 16 Feb 14:37 │  ├── __tracepoints
.r--------   19 root 16 Feb 14:37 │  ├── __tracepoints_ptrs
.r--------   19 root 16 Feb 14:37 │  ├── __tracepoints_strings
.r--------   19 root 16 Feb 14:37 │  └── _ftrace_events
.r--r--r-- 4.1k root 16 Feb 14:37 ├── coresize
.r--r--r-- 4.1k root 16 Feb 14:37 ├── initsize
.r--r--r-- 4.1k root 16 Feb 14:37 ├── initstate
.r--r--r-- 4.1k root 16 Feb 14:37 ├── refcnt
.r--r--r-- 4.1k root 16 Feb 14:37 ├── taint
.-w------- 4.1k root 16 Feb 14:37 └── uevent

File tree of `/sys/class/kfd/kfd/topology/nodes/`

drwxr-xr-x    - root 16 Feb 14:42 /sys/class/kfd/kfd/topology/nodes/
drwxr-xr-x    - root 16 Feb 14:42 ├── 0/
drwxr-xr-x    - root 16 Feb 14:42 │  ├── caches/
drwxr-xr-x    - root 16 Feb 14:42 │  ├── io_links/
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 0/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  └── 1/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │     └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  ├── mem_banks/
drwxr-xr-x    - root 16 Feb 14:42 │  │  └── 0/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │     └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  ├── p2p_links/
drwxr-xr-x    - root 16 Feb 14:42 │  ├── perf/
.r--r--r-- 4.1k root 16 Feb 14:42 │  ├── gpu_id
.r--r--r-- 4.1k root 16 Feb 14:42 │  ├── name
.r--r--r-- 4.1k root 16 Feb 14:42 │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 ├── 1/
drwxr-xr-x    - root 16 Feb 14:42 │  ├── caches/
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 0/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 1/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 2/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 3/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 4/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 5/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 6/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 7/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 8/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 9/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 10/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 11/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 12/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 13/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 14/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 15/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 16/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 17/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 18/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 19/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 20/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 21/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 22/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 23/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 24/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 25/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 26/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 27/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 28/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 29/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 30/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 31/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 32/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 33/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 34/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 35/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 36/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 37/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 38/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 39/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 40/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 41/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 42/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 43/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 44/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 45/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 46/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 47/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 48/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 49/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 50/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 51/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 52/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 53/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 54/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 55/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 56/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 57/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 58/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 59/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 60/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 61/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 62/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 63/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 64/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 65/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 66/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 67/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 68/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 69/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 70/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 71/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 72/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 73/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 74/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 75/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 76/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 77/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 78/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 79/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 80/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 81/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 82/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 83/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 84/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 85/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 86/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 87/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 88/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 89/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 90/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 91/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 92/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 93/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 94/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 95/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 96/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 97/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 98/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 99/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 100/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 101/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 102/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 103/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 104/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 105/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 106/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 107/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 108/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 109/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 110/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 111/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 112/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 113/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 114/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 115/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 116/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 117/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 118/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 119/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 120/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 121/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 122/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 123/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 124/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 125/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 126/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 127/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 128/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 129/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 130/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 131/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 132/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 133/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 134/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 135/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 136/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 137/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 138/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 139/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 140/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 141/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 142/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 143/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 144/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 145/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 146/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 147/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 148/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 149/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 150/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 151/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 152/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 153/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 154/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 155/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 156/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 157/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 158/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 159/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 160/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 161/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 162/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 163/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 164/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 165/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 166/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 167/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  ├── 168/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  │  └── 169/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │     └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  ├── io_links/
drwxr-xr-x    - root 16 Feb 14:42 │  │  └── 0/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │     └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  ├── mem_banks/
drwxr-xr-x    - root 16 Feb 14:42 │  │  └── 0/
.r--r--r-- 4.1k root 16 Feb 14:42 │  │     └── properties
drwxr-xr-x    - root 16 Feb 14:42 │  ├── p2p_links/
drwxr-xr-x    - root 16 Feb 14:42 │  ├── perf/
.r--r--r-- 4.1k root 16 Feb 14:42 │  ├── gpu_id
.r--r--r-- 4.1k root 16 Feb 14:42 │  ├── name
.r--r--r-- 4.1k root 16 Feb 14:42 │  └── properties
drwxr-xr-x    - root 16 Feb 14:42 └── 2/
drwxr-xr-x    - root 16 Feb 14:42    ├── caches/
drwxr-xr-x    - root 16 Feb 14:42    │  ├── 0/
.r--r--r-- 4.1k root 16 Feb 14:42    │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42    │  ├── 1/
.r--r--r-- 4.1k root 16 Feb 14:42    │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42    │  ├── 2/
.r--r--r-- 4.1k root 16 Feb 14:42    │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42    │  ├── 3/
.r--r--r-- 4.1k root 16 Feb 14:42    │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42    │  ├── 4/
.r--r--r-- 4.1k root 16 Feb 14:42    │  │  └── properties
drwxr-xr-x    - root 16 Feb 14:42    │  └── 5/
.r--r--r-- 4.1k root 16 Feb 14:42    │     └── properties
drwxr-xr-x    - root 16 Feb 14:42    ├── io_links/
drwxr-xr-x    - root 16 Feb 14:42    │  └── 0/
.r--r--r-- 4.1k root 16 Feb 14:42    │     └── properties
drwxr-xr-x    - root 16 Feb 14:42    ├── mem_banks/
drwxr-xr-x    - root 16 Feb 14:42    │  └── 0/
.r--r--r-- 4.1k root 16 Feb 14:42    │     └── properties
drwxr-xr-x    - root 16 Feb 14:42    ├── p2p_links/
drwxr-xr-x    - root 16 Feb 14:42    ├── perf/
.r--r--r-- 4.1k root 16 Feb 14:42    ├── gpu_id
.r--r--r-- 4.1k root 16 Feb 14:42    ├── name
.r--r--r-- 4.1k root 16 Feb 14:42    └── properties
<!-- gh-comment-id:1949243976 -->
19 root 16 Feb 14:37 │ ├── .init.data .r-------- 19 root 16 Feb 14:37 │ ├── .init.text .r-------- 19 root 16 Feb 14:37 │ ├── .note.gnu.property .r-------- 19 root 16 Feb 14:37 │ ├── .note.Linux .r-------- 19 root 16 Feb 14:37 │ ├── .orc_header .r-------- 19 root 16 Feb 14:37 │ ├── .orc_unwind .r-------- 19 root 16 Feb 14:37 │ ├── .orc_unwind_ip .r-------- 19 root 16 Feb 14:37 │ ├── .parainstructions .r-------- 19 root 16 Feb 14:37 │ ├── .ref.data .r-------- 19 root 16 Feb 14:37 │ ├── .retpoline_sites .r-------- 19 root 16 Feb 14:37 │ ├── .return_sites .r-------- 19 root 16 Feb 14:37 │ ├── .rodata .r-------- 19 root 16 Feb 14:37 │ ├── .rodata.cst4 .r-------- 19 root 16 Feb 14:37 │ ├── .rodata.cst8 .r-------- 19 root 16 Feb 14:37 │ ├── .rodata.cst16 .r-------- 19 root 16 Feb 14:37 │ ├── .rodata.str1.1 .r-------- 19 root 16 Feb 14:37 │ ├── .rodata.str1.8 .r-------- 19 root 16 Feb 14:37 │ ├── .smp_locks .r-------- 19 root 16 Feb 14:37 │ ├── .static_call.text .r-------- 19 root 16 Feb 14:37 │ ├── .static_call_sites .r-------- 19 root 16 Feb 14:37 │ ├── .strtab .r-------- 19 root 16 Feb 14:37 │ ├── .symtab .r-------- 19 root 16 Feb 14:37 │ ├── .text .r-------- 19 root 16 Feb 14:37 │ ├── .text.unlikely .r-------- 19 root 16 Feb 14:37 │ ├── ___srcu_struct_ptrs .r-------- 19 root 16 Feb 14:37 │ ├── __bpf_raw_tp_map .r-------- 19 root 16 Feb 14:37 │ ├── __bug_table .r-------- 19 root 16 Feb 14:37 │ ├── __dyndbg .r-------- 19 root 16 Feb 14:37 │ ├── __dyndbg_classes .r-------- 19 root 16 Feb 14:37 │ ├── __jump_table .r-------- 19 root 16 Feb 14:37 │ ├── __mcount_loc .r-------- 19 root 16 Feb 14:37 │ ├── __param .r-------- 19 root 16 Feb 14:37 │ ├── __patchable_function_entries .r-------- 19 root 16 Feb 14:37 │ ├── __tracepoints .r-------- 19 root 16 Feb 14:37 │ ├── __tracepoints_ptrs .r-------- 19 root 16 Feb 14:37 │ ├── __tracepoints_strings .r-------- 19 root 16 Feb 14:37 │ └── _ftrace_events .r--r--r-- 4.1k root 16 Feb 14:37 ├── coresize .r--r--r-- 4.1k root 16 Feb 14:37 
├── initsize .r--r--r-- 4.1k root 16 Feb 14:37 ├── initstate .r--r--r-- 4.1k root 16 Feb 14:37 ├── refcnt .r--r--r-- 4.1k root 16 Feb 14:37 ├── taint .-w------- 4.1k root 16 Feb 14:37 └── uevent ``` </details> <details><summary> #### File tree of `/sys/class/kfd/kfd/topology/nodes/` </summary> ``` drwxr-xr-x - root 16 Feb 14:42 /sys/class/kfd/kfd/topology/nodes/ drwxr-xr-x - root 16 Feb 14:42 ├── 0/ drwxr-xr-x - root 16 Feb 14:42 │ ├── caches/ drwxr-xr-x - root 16 Feb 14:42 │ ├── io_links/ drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ └── 1/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── mem_banks/ drwxr-xr-x - root 16 Feb 14:42 │ │ └── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── p2p_links/ drwxr-xr-x - root 16 Feb 14:42 │ ├── perf/ .r--r--r-- 4.1k root 16 Feb 14:42 │ ├── gpu_id .r--r--r-- 4.1k root 16 Feb 14:42 │ ├── name .r--r--r-- 4.1k root 16 Feb 14:42 │ └── properties drwxr-xr-x - root 16 Feb 14:42 ├── 1/ drwxr-xr-x - root 16 Feb 14:42 │ ├── caches/ drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 1/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 2/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 3/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 4/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 5/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 6/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 7/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 8/ .r--r--r-- 4.1k root 16 Feb 14:42 
│ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 9/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 10/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 11/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 12/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 13/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 14/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 15/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 16/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 17/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 18/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 19/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 20/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 21/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 22/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 23/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 24/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 25/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 26/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 27/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 28/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 
16 Feb 14:42 │ │ ├── 29/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 30/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 31/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 32/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 33/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 34/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 35/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 36/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 37/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 38/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 39/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 40/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 41/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 42/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 43/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 44/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 45/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 46/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 47/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 48/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 49/ .r--r--r-- 
4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 50/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 51/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 52/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 53/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 54/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 55/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 56/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 57/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 58/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 59/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 60/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 61/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 62/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 63/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 64/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 65/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 66/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 67/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 68/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 69/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── 
properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 70/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 71/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 72/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 73/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 74/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 75/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 76/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 77/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 78/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 79/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 80/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 81/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 82/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 83/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 84/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 85/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 86/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 87/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 88/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 89/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 
14:42 │ │ ├── 90/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 91/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 92/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 93/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 94/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 95/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 96/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 97/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 98/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 99/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 100/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 101/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 102/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 103/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 104/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 105/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 106/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 107/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 108/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 109/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 110/ .r--r--r-- 
4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 111/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 112/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 113/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 114/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 115/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 116/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 117/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 118/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 119/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 120/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 121/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 122/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 123/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 124/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 125/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 126/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 127/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 128/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 129/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 130/ .r--r--r-- 4.1k root 16 Feb 
14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 131/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 132/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 133/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 134/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 135/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 136/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 137/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 138/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 139/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 140/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 141/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 142/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 143/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 144/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 145/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 146/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 147/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 148/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 149/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 150/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── 
properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 151/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 152/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 153/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 154/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 155/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 156/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 157/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 158/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 159/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 160/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 161/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 162/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 163/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 164/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 165/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 166/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 167/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ ├── 168/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ │ └── 169/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── io_links/ drwxr-xr-x - root 16 Feb 14:42 │ │ └── 0/ .r--r--r-- 4.1k root 
16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── mem_banks/ drwxr-xr-x - root 16 Feb 14:42 │ │ └── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── p2p_links/ drwxr-xr-x - root 16 Feb 14:42 │ ├── perf/ .r--r--r-- 4.1k root 16 Feb 14:42 │ ├── gpu_id .r--r--r-- 4.1k root 16 Feb 14:42 │ ├── name .r--r--r-- 4.1k root 16 Feb 14:42 │ └── properties drwxr-xr-x - root 16 Feb 14:42 └── 2/ drwxr-xr-x - root 16 Feb 14:42 ├── caches/ drwxr-xr-x - root 16 Feb 14:42 │ ├── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── 1/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── 2/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── 3/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ ├── 4/ .r--r--r-- 4.1k root 16 Feb 14:42 │ │ └── properties drwxr-xr-x - root 16 Feb 14:42 │ └── 5/ .r--r--r-- 4.1k root 16 Feb 14:42 │ └── properties drwxr-xr-x - root 16 Feb 14:42 ├── io_links/ drwxr-xr-x - root 16 Feb 14:42 │ └── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ └── properties drwxr-xr-x - root 16 Feb 14:42 ├── mem_banks/ drwxr-xr-x - root 16 Feb 14:42 │ └── 0/ .r--r--r-- 4.1k root 16 Feb 14:42 │ └── properties drwxr-xr-x - root 16 Feb 14:42 ├── p2p_links/ drwxr-xr-x - root 16 Feb 14:42 ├── perf/ .r--r--r-- 4.1k root 16 Feb 14:42 ├── gpu_id .r--r--r-- 4.1k root 16 Feb 14:42 ├── name .r--r--r-- 4.1k root 16 Feb 14:42 └── properties ``` </details>
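For anyone reproducing this on another machine, the missing-`version` check described above can be mimicked with a small shell helper. This is an illustrative sketch, not ollama's actual code; the function name is made up.

```shell
# Illustrative helper (not ollama's actual code): report whether a kernel
# module directory exposes the 'version' file that ollama 0.1.25 looks for
# before enabling ROCm.
module_has_version() {
  # $1: path to a module directory, e.g. /sys/module/amdgpu
  if [ -f "$1/version" ]; then
    echo "version: $(cat "$1/version")"
  else
    echo "no version file in $1"
  fi
}

# On an affected box, run: module_has_version /sys/module/amdgpu
```

On the system described in this comment, the helper would report that no version file exists, which is exactly the condition that made ollama give up on ROCm.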

@dhiltgen commented on GitHub (Feb 16, 2024):

@abysssol thanks! That explains it. I'll have a fix up shortly.


@abysssol commented on GitHub (Feb 17, 2024):

@dhiltgen Thank you! Should I expect a new release containing this in the next week? I'm unsure whether I should skip packaging 0.1.25 and wait for the next release, or if the startup performance fix is worth the runtime performance hit. What do you think?
Upstream nixpkgs [pull request](https://github.com/NixOS/nixpkgs/pull/289108) for reference.


@dhiltgen commented on GitHub (Feb 17, 2024):

Version 0.1.25 will remain broken for Radeon cards on NixOS. We'll have 0.1.26 out soon, probably next week, but there are other variables that could impact when we cut the next release.


@DaKingof commented on GitHub (Mar 12, 2024):

Seems we are up to 0.1.28 and I am getting the same errors on my 6700 XT.

```
OLLAMA_DEBUG=1 ollama serve
time=2024-03-12T19:18:54.771-04:00 level=INFO source=images.go:710 msg="total blobs: 0"
time=2024-03-12T19:18:54.771-04:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-03-12T19:18:54.772-04:00 level=INFO source=routes.go:1021 msg="Listening on 127.0.0.1:11434 (version 0.1.28)"
time=2024-03-12T19:18:54.772-04:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-03-12T19:18:54.805-04:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu cpu_avx2]"
time=2024-03-12T19:18:54.805-04:00 level=DEBUG source=payload_common.go:147 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-12T19:18:54.805-04:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-12T19:18:54.805-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-12T19:18:54.805-04:00 level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /run/opengl-driver/lib/libnvidia-ml.so* /run/opengl-driver-32/lib/libnvidia-ml.so*]"
time=2024-03-12T19:18:54.806-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-12T19:18:54.806-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
time=2024-03-12T19:18:54.806-04:00 level=DEBUG source=gpu.go:283 msg="gpu management search paths: [/opt/rocm*/lib*/librocm_smi64.so* /run/opengl-driver/lib/librocm_smi64.so* /run/opengl-driver-32/lib/librocm_smi64.so*]"
time=2024-03-12T19:18:54.806-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-12T19:18:54.806-04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-12T19:18:54.806-04:00 level=INFO source=routes.go:1044 msg="no GPU detected"
```

@abysssol commented on GitHub (Mar 12, 2024):

@DaKingof It appears to me that your problem is different. The logs indicate that no gpu libraries were discovered, while this issue was about ollama not using amd gpus even after discovering rocm libraries. This issue was due to ollama refusing to even try using rocm if a version file was missing from the kernel module.

Is it possible that you don't have the necessary libraries installed? I don't know what distro you're using, but it probably has a package for the libraries that you need to install.
It may also be that you're using a version without rocm support compiled in? Where did you get the ollama binary from?


@DaKingof commented on GitHub (Mar 12, 2024):

@abysssol I am also on NixOS. I'm not sure what I am missing. I followed all of the instructions from the wiki! `rocm-opencl-icd` is installed along with the HIP options. I did search for that library file and I couldn't find it, so I suspected my issue was possibly different. Do you have any idea of which package or option I need to install/enable? I even added the `services.ollama.acceleration = "rocm";` option!


@dhiltgen commented on GitHub (Mar 13, 2024):

[0.1.29](https://github.com/ollama/ollama/releases/tag/v0.1.29) (still pre-release as we squash a few final bugs) revamps the Radeon discovery logic to use sysfs, so the management library hiccups should be gone.

@DaKingof one note though, we're now double checking the LLVM target, and I believe your GPU is a gfx1031, which isn't supported by ROCm v6 right now. I believe it's "close enough" to a 10.3.0 that you'll be able to use `HSA_OVERRIDE_GFX_VERSION=10.3.0` to force ROCm to use a supported type, but you'll want to test it.


@stephensrmmartin commented on GitHub (Mar 13, 2024):

> [0.1.29](https://github.com/ollama/ollama/releases/tag/v0.1.29) (still pre-release as we squash a few final bugs) revamps the Radeon discovery logic to use sysfs, so the management library hiccups should be gone.
>
> @DaKingof one note though, we're now double checking the LLVM target, and I believe your GPU is a gfx1031, which isn't supported by ROCm v6 right now. I believe it's "close enough" to a 10.3.0 that you'll be able to use `HSA_OVERRIDE_GFX_VERSION=10.3.0` to force ROCm to use a supported type, but you'll want to test it.

Just replying to confirm that the 6700xt does work with ROCm v6 and ollama v0.1.29, with that HSA env variable set as you suggested. I added it to the Environment field of a systemd service via an override file.
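For anyone else on a non-NixOS distro taking the systemd-override route mentioned above, a drop-in file is one way to persist the variable. This is a sketch under assumptions: the unit name `ollama.service` and the override mechanism shown here may differ on your system (on NixOS, the `services.ollama` module manages the unit for you):

```shell
# Sketch only: persist HSA_OVERRIDE_GFX_VERSION for a systemd-managed ollama.
# The unit name "ollama.service" is an assumption; adjust to match your system.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
# Then reload unit files and restart so the override takes effect:
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```

`systemctl edit` writes the drop-in under `/etc/systemd/system/ollama.service.d/`, so the override survives package upgrades that replace the main unit file.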

@abysssol commented on GitHub (Mar 13, 2024):

@DaKingof Did you actually enable the service?
If you only did this:

```nix
services.ollama.acceleration = "rocm";
environment.systemPackages = [ pkgs.ollama ];
```

Then you should instead do:

```nix
services.ollama.enable = true;
services.ollama.acceleration = "rocm";
```

@abysssol commented on GitHub (Mar 13, 2024):

@dhiltgen Do you know if there's a complete list of all environment variables supported by ollama? I'm planning on adding more env var override options to the ollama nixos module, and I would prefer to have a separate option for each individual variable. Otherwise, I'll have to add an option to set any arbitrary variables, which I would prefer not to if it's not necessary.


@DaKingof commented on GitHub (Mar 13, 2024):

> @DaKingof Did you actually enable the service?

I did not! Thank you. I completely forgot to look to see if there were other ollama options.

> @DaKingof one note though, we're now double checking the LLVM target, and I believe your GPU is a gfx1031, which isn't supported by ROCm v6 right now. I believe it's "close enough" to a 10.3.0 that you'll be able to use `HSA_OVERRIDE_GFX_VERSION=10.3.0` to force ROCm to use a supported type, but you'll want to test it.

Ahh, yes! I had this in my configuration.nix, but it was commented out after I ran into some issues and was trying to figure out the culprit, and I completely forgot what it was for and to uncomment it. In my configuration I had it set to `HSA_OVERRIDE_GFX_VERSION = "10.3.0 HCC_AMDGPU_TARGET=gfx1030";`, which seems to work as well.

Thanks for the assist, guys. Seems I am good to go now. <3

@dhiltgen commented on GitHub (Mar 13, 2024):

@abysssol We don't have a succinct list. Check the various docs in the [docs folder](https://github.com/ollama/ollama/tree/main/docs), but also remember we're running on top of CUDA and ROCm (and Metal on Macs), and those have their own env var matrices.

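While no single list exists, the variables that appear earlier in this thread can at least be combined in one invocation. An illustrative sketch, not an exhaustive or authoritative set (it assumes a locally installed `ollama` binary with ROCm support compiled in):

```shell
# Illustrative only: env vars seen in this thread, not a complete list.
# OLLAMA_DEBUG=1           -> verbose logging (as in the logs above)
# OLLAMA_LLM_LIBRARY       -> override dynamic LLM library detection
# HSA_OVERRIDE_GFX_VERSION -> read by ROCm itself, not by ollama
OLLAMA_DEBUG=1 \
OLLAMA_LLM_LIBRARY=rocm \
HSA_OVERRIDE_GFX_VERSION=10.3.0 \
ollama serve
```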

@FabioSFernandes commented on GitHub (May 11, 2024):

> [0.1.29](https://github.com/ollama/ollama/releases/tag/v0.1.29) (still pre-release as we squash a few final bugs) revamps the Radeon discovery logic to use sysfs, so the management library hiccups should be gone.
>
> @DaKingof one note though, we're now double checking the LLVM target, and I believe your GPU is a gfx1031, which isn't supported by ROCm v6 right now. I believe it's "close enough" to a 10.3.0 that you'll be able to use `HSA_OVERRIDE_GFX_VERSION=10.3.0` to force ROCm to use a supported type, but you'll want to test it.

Hello, just sharing: I created the env var, and the result is that the error is gone and my RX 6750 GPU is now being detected.

before env:
time...source=amd_windows.go:95 msg="amdgpu is not supported" gpu=0 gpu_type=gfx1031 library="C:\Program Files\AMD\ROCm\5.5\bin" supported_types="[gfx1030 gfx1100 gfx1101 gfx1102 gfx906]"
time...source=amd_windows.go:97 msg="See https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for HSA_OVERRIDE_GFX_VERSION usage"
time...source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="255.9 GiB" available="239.4 GiB"

after env/ollama restart:
time...source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v5.7 cpu cpu_avx]"
time...source=amd_windows.go:63 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=10.3.0
time...source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1031 driver=0.0 name="AMD Radeon RX 6750 XT" total="12.0 GiB" available="11.9 GiB"

I will now test models and chat performance.

Reference: github-starred/ollama#63501