[GH-ISSUE #3825] Updating to docker 0.1.29-rocm and beyond breaks detection of GPU (Radeon Pro W6600) #2368

Closed
opened 2026-04-12 12:41:08 -05:00 by GiteaMirror · 19 comments
Owner

Originally created by @ic4-y on GitHub (Apr 22, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3825

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When updating my docker stack from the `0.1.24-rocm` image to newer versions (in order to run some embedding models that otherwise crashed), I noticed that `0.1.29-rocm` and above break GPU detection on my Radeon Pro W6600. The GPU works fine in `0.1.28-rocm`.

On `0.1.32-rocm` I get the following error when trying to start generation:

ollama-rocm  | rocBLAS error: Could not initialize Tensile host: No devices found
ollama-rocm  | time=2024-04-22T14:33:01.535Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 error:Could not initialize Tensile host: No devices found"
ollama-rocm  | time=2024-04-22T14:33:01.535Z level=DEBUG source=server.go:832 msg="stopping llama server"

While on `0.1.28-rocm` it works just fine:

ollama-rocm  | [1713796808] Performing pre-initialization of GPU
ollama-rocm  | ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ollama-rocm  | ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ollama-rocm  | ggml_init_cublas: found 1 ROCm devices:
ollama-rocm  |   Device 0: AMD Radeon PRO W6600, compute capability 10.3, VMM: no

I am wondering if this is a duplicate of https://github.com/ollama/ollama/issues/3304, or at least related. The problems started with `0.1.29-rocm` in my case as well.

I am running this ollama docker container in an LXC container on a Proxmox host with Dual Xeon v4 CPUs.

OS

Docker in Linux LXC container running on Proxmox 8.1

GPU

AMD Radeon Pro W6600

CPU

Intel Xeon v4

Ollama version

0.1.29 and above
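
For context, the documented way to run the ROCm image is to pass the `/dev/kfd` and `/dev/dri` device nodes through to the container; in an LXC guest those nodes must additionally be mapped into the LXC container by the Proxmox host. A minimal sketch (container name and tag are illustrative, not taken from the issue):

```shell
# Sketch of the standard ROCm invocation from the Ollama docs.
# The GPU is only visible inside the container if /dev/kfd and /dev/dri
# are passed through; in an LXC guest these device nodes must also be
# mapped into the LXC container on the Proxmox host first.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama-rocm \
  ollama/ollama:0.1.32-rocm
```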

GiteaMirror added the bugamd labels 2026-04-12 12:41:08 -05:00
Author
Owner

@dhiltgen commented on GitHub (Apr 22, 2024):

Could you share a little more of the debug logs? My suspicion from what you shared is that we got HIP_VISIBLE_DEVICES or ROCR_VISIBLE_DEVICES wired up incorrectly.
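
One quick way to test that suspicion is to inspect the environment the server process actually inherited; a sketch, assuming the container is named `ollama-rocm` (name and tag are illustrative):

```shell
# Hypothetical check: see which device-selection variables reached
# the container environment.
docker exec ollama-rocm env | grep -E 'HIP_VISIBLE_DEVICES|ROCR_VISIBLE_DEVICES'

# Common workaround sketch: pin the device index explicitly at start,
# overriding whatever ollama would otherwise compute.
docker run -d --device /dev/kfd --device /dev/dri \
  -e HIP_VISIBLE_DEVICES=0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama-rocm ollama/ollama:0.1.32-rocm
```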

<!-- gh-comment-id:2070985137 -->
Author
Owner

@ic4-y commented on GitHub (Apr 23, 2024):

Thanks for getting back to me so quickly! And thanks for the responsive ROCm support on this project, it is really a great thing (in particular given how finicky ROCm can still be).

Here is a more complete debug log for the 0.1.32-rocm container, up until ollama decided to no longer stick around:

ollama-rocm  | [GIN] 2024/04/23 - 13:58:05 | 200 |          1m8s |      172.19.0.1 | POST     "/api/pull"
ollama-rocm  | time=2024-04-23T13:58:08.232Z level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0xc000766580), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}"
ollama-rocm  | time=2024-04-23T13:58:10.905Z level=DEBUG source=gguf.go:193 msg="general.architecture = llama"
ollama-rocm  | time=2024-04-23T13:58:10.911Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama-rocm  | time=2024-04-23T13:58:10.911Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama-rocm  | time=2024-04-23T13:58:10.911Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama531474679/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /usr/local/lib/libcudart.so** /opt/rh/devtoolset-7/root/libcudart.so** /tmp/ollama531474679/rocm/libcudart.so**]"
ollama-rocm  | time=2024-04-23T13:58:10.913Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0]"
ollama-rocm  | wiring cudart library functions in /tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0
ollama-rocm  | dlsym: cudaSetDevice
ollama-rocm  | dlsym: cudaDeviceSynchronize
ollama-rocm  | dlsym: cudaDeviceReset
ollama-rocm  | dlsym: cudaMemGetInfo
ollama-rocm  | dlsym: cudaGetDeviceCount
ollama-rocm  | dlsym: cudaDeviceGetAttribute
ollama-rocm  | dlsym: cudaDriverGetVersion
ollama-rocm  | cudaSetDevice err: 35
ollama-rocm  | time=2024-04-23T13:58:10.914Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T13:58:10.914Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T13:58:10.914Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so* /tmp/ollama531474679/rocm/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [1]"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama-rocm  | time=2024-04-23T13:58:10.916Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama531474679/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /usr/local/lib/libcudart.so** /opt/rh/devtoolset-7/root/libcudart.so** /tmp/ollama531474679/rocm/libcudart.so**]"
ollama-rocm  | time=2024-04-23T13:58:10.918Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0]"
ollama-rocm  | wiring cudart library functions in /tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0
ollama-rocm  | dlsym: cudaSetDevice
ollama-rocm  | dlsym: cudaDeviceSynchronize
ollama-rocm  | dlsym: cudaDeviceReset
ollama-rocm  | dlsym: cudaMemGetInfo
ollama-rocm  | dlsym: cudaGetDeviceCount
ollama-rocm  | dlsym: cudaDeviceGetAttribute
ollama-rocm  | dlsym: cudaDriverGetVersion
ollama-rocm  | cudaSetDevice err: 35
ollama-rocm  | time=2024-04-23T13:58:10.918Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama531474679/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T13:58:10.918Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T13:58:10.918Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so* /tmp/ollama531474679/rocm/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [1]"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T13:58:10.920Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="5033.0 MiB" used="5033.0 MiB" available="8176.0 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="677.5 MiB"
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu_avx
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu_avx2
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cuda_v11
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/rocm_v60002
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu_avx
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cpu_avx2
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/cuda_v11
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama531474679/runners/rocm_v60002
ollama-rocm  | time=2024-04-23T13:58:10.921Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T13:58:10.923Z level=DEBUG source=server.go:259 msg="LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:/opt/rh/devtoolset-7/root:/tmp/ollama531474679/rocm:/tmp/ollama531474679/runners/rocm_v60002"
ollama-rocm  | time=2024-04-23T13:58:10.923Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama531474679/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 2048 --batch-size 512 --embedding --log-format json --n-gpu-layers 33 --verbose --port 33779"
ollama-rocm  | time=2024-04-23T13:58:10.923Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
ollama-rocm  | time=2024-04-23T13:58:10.974Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:33779/health\": dial tcp 127.0.0.1:33779: connect: connection refused"
ollama-rocm  | {"function":"server_params_parse","level":"WARN","line":2494,"msg":"server.cpp is not built with verbose logging.","tid":"134522014141504","timestamp":1713880690}
ollama-rocm  | {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2820,"msg":"build info","tid":"134522014141504","timestamp":1713880690}
ollama-rocm  | {"function":"main","level":"INFO","line":2827,"msg":"system info","n_threads":20,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"134522014141504","timestamp":1713880690,"total_threads":40}
ollama-rocm  | llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
ollama-rocm  | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama-rocm  | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama-rocm  | llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
ollama-rocm  | llama_model_loader: - kv   2:                          llama.block_count u32              = 32
ollama-rocm  | llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
ollama-rocm  | llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
ollama-rocm  | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
ollama-rocm  | llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
ollama-rocm  | llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
ollama-rocm  | llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
ollama-rocm  | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama-rocm  | llama_model_loader: - kv  10:                          general.file_type u32              = 2
ollama-rocm  | llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
ollama-rocm  | llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
ollama-rocm  | llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama-rocm  | llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama-rocm  | llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama-rocm  | time=2024-04-23T13:58:11.224Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
ollama-rocm  | llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama-rocm  | llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
ollama-rocm  | llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
ollama-rocm  | llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
ollama-rocm  | llama_model_loader: - kv  20:               general.quantization_version u32              = 2
ollama-rocm  | llama_model_loader: - type  f32:   65 tensors
ollama-rocm  | llama_model_loader: - type q4_0:  225 tensors
ollama-rocm  | llama_model_loader: - type q6_K:    1 tensors
ollama-rocm  | llm_load_vocab: special tokens definition check successful ( 256/128256 ).
ollama-rocm  | llm_load_print_meta: format           = GGUF V3 (latest)
ollama-rocm  | llm_load_print_meta: arch             = llama
ollama-rocm  | llm_load_print_meta: vocab type       = BPE
ollama-rocm  | llm_load_print_meta: n_vocab          = 128256
ollama-rocm  | llm_load_print_meta: n_merges         = 280147
ollama-rocm  | llm_load_print_meta: n_ctx_train      = 8192
ollama-rocm  | llm_load_print_meta: n_embd           = 4096
ollama-rocm  | llm_load_print_meta: n_head           = 32
ollama-rocm  | llm_load_print_meta: n_head_kv        = 8
ollama-rocm  | llm_load_print_meta: n_layer          = 32
ollama-rocm  | llm_load_print_meta: n_rot            = 128
ollama-rocm  | llm_load_print_meta: n_embd_head_k    = 128
ollama-rocm  | llm_load_print_meta: n_embd_head_v    = 128
ollama-rocm  | llm_load_print_meta: n_gqa            = 4
ollama-rocm  | llm_load_print_meta: n_embd_k_gqa     = 1024
ollama-rocm  | llm_load_print_meta: n_embd_v_gqa     = 1024
ollama-rocm  | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama-rocm  | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama-rocm  | llm_load_print_meta: n_ff             = 14336
ollama-rocm  | llm_load_print_meta: n_expert         = 0
ollama-rocm  | llm_load_print_meta: n_expert_used    = 0
ollama-rocm  | llm_load_print_meta: causal attn      = 1
ollama-rocm  | llm_load_print_meta: pooling type     = 0
ollama-rocm  | llm_load_print_meta: rope type        = 0
ollama-rocm  | llm_load_print_meta: rope scaling     = linear
ollama-rocm  | llm_load_print_meta: freq_base_train  = 500000.0
ollama-rocm  | llm_load_print_meta: freq_scale_train = 1
ollama-rocm  | llm_load_print_meta: n_yarn_orig_ctx  = 8192
ollama-rocm  | llm_load_print_meta: rope_finetuned   = unknown
ollama-rocm  | llm_load_print_meta: ssm_d_conv       = 0
ollama-rocm  | llm_load_print_meta: ssm_d_inner      = 0
ollama-rocm  | llm_load_print_meta: ssm_d_state      = 0
ollama-rocm  | llm_load_print_meta: ssm_dt_rank      = 0
ollama-rocm  | llm_load_print_meta: model type       = 7B
ollama-rocm  | llm_load_print_meta: model ftype      = Q4_0
ollama-rocm  | llm_load_print_meta: model params     = 8.03 B
ollama-rocm  | llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
ollama-rocm  | llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
ollama-rocm  | llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama-rocm  | llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
ollama-rocm  | llm_load_print_meta: LF token         = 128 'Ä'
ollama-rocm  | 
ollama-rocm  | rocBLAS error: Could not initialize Tensile host: No devices found
ollama-rocm  | time=2024-04-23T13:58:12.216Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:33779/health\": read tcp 127.0.0.1:45854->127.0.0.1:33779: read: connection reset by peer"
ollama-rocm  | time=2024-04-23T13:58:12.216Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 error:Could not initialize Tensile host: No devices found"
ollama-rocm  | time=2024-04-23T13:58:12.216Z level=DEBUG source=server.go:832 msg="stopping llama server"
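
The log above shows the AMD detection path finding `amdgpu devices [1]` while rocBLAS later reports no devices at all, which suggests the runner process and the detection code disagree about device visibility. A sketch of how to compare the two views from inside the running container (container name illustrative; `rocminfo` is standard ROCm tooling but may not be present in every image):

```shell
# Sketch: compare what ROCm userspace sees inside the container with
# the kernel's KFD topology. If rocminfo is absent in the image, the
# sysfs nodes alone still show which GPU indices the kernel exposes.
docker exec ollama-rocm rocminfo
docker exec ollama-rocm ls /sys/class/kfd/kfd/topology/nodes/
```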
<!-- gh-comment-id:2072400147 -->
ollama-rocm | llama_model_loader: - kv 20: general.quantization_version u32 = 2 ollama-rocm | llama_model_loader: - type f32: 65 tensors ollama-rocm | llama_model_loader: - type q4_0: 225 tensors ollama-rocm | llama_model_loader: - type q6_K: 1 tensors ollama-rocm | llm_load_vocab: special tokens definition check successful ( 256/128256 ). ollama-rocm | llm_load_print_meta: format = GGUF V3 (latest) ollama-rocm | llm_load_print_meta: arch = llama ollama-rocm | llm_load_print_meta: vocab type = BPE ollama-rocm | llm_load_print_meta: n_vocab = 128256 ollama-rocm | llm_load_print_meta: n_merges = 280147 ollama-rocm | llm_load_print_meta: n_ctx_train = 8192 ollama-rocm | llm_load_print_meta: n_embd = 4096 ollama-rocm | llm_load_print_meta: n_head = 32 ollama-rocm | llm_load_print_meta: n_head_kv = 8 ollama-rocm | llm_load_print_meta: n_layer = 32 ollama-rocm | llm_load_print_meta: n_rot = 128 ollama-rocm | llm_load_print_meta: n_embd_head_k = 128 ollama-rocm | llm_load_print_meta: n_embd_head_v = 128 ollama-rocm | llm_load_print_meta: n_gqa = 4 ollama-rocm | llm_load_print_meta: n_embd_k_gqa = 1024 ollama-rocm | llm_load_print_meta: n_embd_v_gqa = 1024 ollama-rocm | llm_load_print_meta: f_norm_eps = 0.0e+00 ollama-rocm | llm_load_print_meta: f_norm_rms_eps = 1.0e-05 ollama-rocm | llm_load_print_meta: f_clamp_kqv = 0.0e+00 ollama-rocm | llm_load_print_meta: f_max_alibi_bias = 0.0e+00 ollama-rocm | llm_load_print_meta: f_logit_scale = 0.0e+00 ollama-rocm | llm_load_print_meta: n_ff = 14336 ollama-rocm | llm_load_print_meta: n_expert = 0 ollama-rocm | llm_load_print_meta: n_expert_used = 0 ollama-rocm | llm_load_print_meta: causal attn = 1 ollama-rocm | llm_load_print_meta: pooling type = 0 ollama-rocm | llm_load_print_meta: rope type = 0 ollama-rocm | llm_load_print_meta: rope scaling = linear ollama-rocm | llm_load_print_meta: freq_base_train = 500000.0 ollama-rocm | llm_load_print_meta: freq_scale_train = 1 ollama-rocm | llm_load_print_meta: n_yarn_orig_ctx = 8192 
ollama-rocm | llm_load_print_meta: rope_finetuned = unknown ollama-rocm | llm_load_print_meta: ssm_d_conv = 0 ollama-rocm | llm_load_print_meta: ssm_d_inner = 0 ollama-rocm | llm_load_print_meta: ssm_d_state = 0 ollama-rocm | llm_load_print_meta: ssm_dt_rank = 0 ollama-rocm | llm_load_print_meta: model type = 7B ollama-rocm | llm_load_print_meta: model ftype = Q4_0 ollama-rocm | llm_load_print_meta: model params = 8.03 B ollama-rocm | llm_load_print_meta: model size = 4.33 GiB (4.64 BPW) ollama-rocm | llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct ollama-rocm | llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' ollama-rocm | llm_load_print_meta: EOS token = 128001 '<|end_of_text|>' ollama-rocm | llm_load_print_meta: LF token = 128 'Ä' ollama-rocm | ollama-rocm | rocBLAS error: Could not initialize Tensile host: No devices found ollama-rocm | time=2024-04-23T13:58:12.216Z level=DEBUG source=server.go:420 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:33779/health\": read tcp 127.0.0.1:45854->127.0.0.1:33779: read: connection reset by peer" ollama-rocm | time=2024-04-23T13:58:12.216Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 error:Could not initialize Tensile host: No devices found" ollama-rocm | time=2024-04-23T13:58:12.216Z level=DEBUG source=server.go:832 msg="stopping llama server" ```

@ic4-y commented on GitHub (Apr 23, 2024):

I should say, I also have a Ryzen 3000 machine with exactly the same Radeon Pro W6600 and a Radeon Pro W6800. I might try the docker compose file on that machine and see if there is any connection to the setup on the Dual Xeon Proxmox host for this error.

<!-- gh-comment-id:2072404154 -->

@ic4-y commented on GitHub (Apr 23, 2024):

Okay, so this is maybe interesting: when running on my dual-GPU Ryzen machine I do get the same error if I try to isolate the W6600 GPU by passing through /dev/dri/render129 and setting ROCR_VISIBLE_DEVICES="1" ... if I just pass through /dev/dri and do not set the variable, it works, but it appears to be using both (???) GPUs. I am not sure if I am reading this correctly, but it seems to be allocating VRAM on both GPUs at least...

<!-- gh-comment-id:2072459413 -->
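For reference, the single-GPU passthrough attempt described above corresponds to a compose service along these lines (a sketch only; the service name and image tag are assumed, while the render node and ROCR_VISIBLE_DEVICES value are the ones from the comment):

```yaml
services:
  ollama-rocm:
    image: ollama/ollama:0.1.32-rocm  # assumed tag; any 0.1.29+ rocm image shows the problem
    devices:
      - /dev/kfd                      # ROCm compute interface
      - /dev/dri/render129            # only the W6600 render node, as described above
    environment:
      - ROCR_VISIBLE_DEVICES=1        # restrict the ROCm runtime to device 1
```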

@dhiltgen commented on GitHub (Apr 23, 2024):

Interesting. I don't see [this](https://github.com/ollama/ollama/blob/fb9580df85c562295d919b6c2632117d3d8cea89/gpu/amd_common.go#L54) log message in your output, which was my initial theory about what could be going wrong, so there's something else. Can you exec into the running container and dump out the environment variables? In particular I'm curious about any HIP/ROC settings that might be affecting ROCm's behavior in a way I wasn't anticipating.

<!-- gh-comment-id:2072759757 -->
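One way to gather that (a sketch, assuming the container keeps the ollama-rocm name seen in the compose logs) is to filter the container environment for HIP/ROC/HSA variables; the same grep filter is demonstrated below on a sample dump:

```shell
# Against the running container (container name assumed from the logs):
#   docker exec ollama-rocm env | grep -iE 'hip|hsa|rocr?'
# The same filter, demonstrated on a sample environment dump:
printf 'HSA_OVERRIDE_GFX_VERSION=10.3.0\nPATH=/usr/bin\nHIP_VISIBLE_DEVICES=1\n' \
  | grep -iE 'hip|hsa|rocr?'
```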

@ic4-y commented on GitHub (Apr 23, 2024):

Ah yes, in fact I must have missed a part of my log because it seems to repeat itself. But here is what you were looking for:

ollama-rocm  | time=2024-04-23T15:47:40.621Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama2428555918/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T15:47:40.621Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T15:47:40.621Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T15:47:40.623Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T15:47:40.623Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T15:47:40.623Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T15:47:40.624Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx000 gfx1032]"
ollama-rocm  | time=2024-04-23T15:47:40.624Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /tmp/ollama2428555918/rocm"
ollama-rocm  | time=2024-04-23T15:47:40.624Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin/rocm"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_linux.go:296 msg="host rocm linked /opt/rocm/lib => /tmp/ollama2428555918/rocm"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_linux.go:159 msg="updated lib path" LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:/opt/rh/devtoolset-7/root:/tmp/ollama2428555918/rocm
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_linux.go:125 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T15:47:40.625Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [0 1]"
ollama-rocm  | time=2024-04-23T15:47:40.626Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T15:47:40.626Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T15:47:40.626Z level=INFO source=amd_common.go:54 msg="Setting HIP_VISIBLE_DEVICES=1"

Should I try overriding this? Notice also that it detects [gfx000 gfx1032] as if there are two GPUs, while in fact this machine only has one.

<!-- gh-comment-id:2072790275 -->
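For experimenting with overrides, the knobs visible in the log above can all be set from the compose file. A sketch (service name and image tag assumed; the values shown are the ones ollama chose automatically, so changing them is what would test the override):

```yaml
services:
  ollama-rocm:
    image: ollama/ollama:0.1.32-rocm     # assumed tag
    devices:
      - /dev/kfd
      - /dev/dri                          # all render nodes, as in the working setup
    environment:
      - HSA_OVERRIDE_GFX_VERSION=10.3.0   # already set, per the log above
      - HIP_VISIBLE_DEVICES=1             # what ollama set automatically; override to compare
      - OLLAMA_DEBUG=1                     # keep the verbose logging shown above
```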

@ic4-y commented on GitHub (Apr 23, 2024):

Okay, so for completeness' sake, I reran the thing and tried to curl the container to run llama3:8b, which used to work on 0.1.28 and below.
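The request was along these lines (a sketch; the prompt text is assumed, the port matches the "Listening on [::]:11434" line in the log below):

```shell
# Payload for ollama's generate endpoint (prompt assumed):
payload='{"model": "llama3:8b", "prompt": "Hello"}'
# Against the running container:
#   curl http://localhost:11434/api/generate -d "$payload"
echo "$payload"
```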

This is the complete output from starting the container until it errors out:

 ✔ Container ollama-rocm  Created                                                                                                                                                                                                                           0.0s 
Attaching to ollama-rocm
ollama-rocm  | time=2024-04-23T15:58:41.452Z level=INFO source=images.go:817 msg="total blobs: 5"
ollama-rocm  | time=2024-04-23T15:58:41.453Z level=INFO source=images.go:824 msg="total unused blobs removed: 0"
ollama-rocm  | time=2024-04-23T15:58:41.454Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.1.32)"
ollama-rocm  | time=2024-04-23T15:58:41.454Z level=INFO source=payload.go:28 msg="extracting embedded files" dir=/tmp/ollama3788123448/runners
ollama-rocm  | time=2024-04-23T15:58:41.454Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
ollama-rocm  | time=2024-04-23T15:58:41.454Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/deps.txt.gz
ollama-rocm  | time=2024-04-23T15:58:41.455Z level=DEBUG source=payload.go:160 msg=extracting variant=rocm_v60002 file=build/linux/x86_64/rocm_v60002/bin/ollama_llama_server.gz
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx2
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cuda_v11
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/rocm_v60002
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=payload.go:42 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama-rocm  | time=2024-04-23T15:58:44.461Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama3788123448/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /usr/local/lib/libcudart.so** /opt/rh/devtoolset-7/root/libcudart.so**]"
ollama-rocm  | time=2024-04-23T15:58:44.464Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0]"
ollama-rocm  | wiring cudart library functions in /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0
ollama-rocm  | dlsym: cudaSetDevice
ollama-rocm  | dlsym: cudaDeviceSynchronize
ollama-rocm  | dlsym: cudaDeviceReset
ollama-rocm  | dlsym: cudaMemGetInfo
ollama-rocm  | dlsym: cudaGetDeviceCount
ollama-rocm  | dlsym: cudaDeviceGetAttribute
ollama-rocm  | dlsym: cudaDriverGetVersion
ollama-rocm  | cudaSetDevice err: 35
ollama-rocm  | time=2024-04-23T15:58:44.465Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T15:58:44.465Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T15:58:44.465Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx1032 gfx000]"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /tmp/ollama3788123448/rocm"
ollama-rocm  | time=2024-04-23T15:58:44.468Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin"
ollama-rocm  | time=2024-04-23T15:58:44.469Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin/rocm"
ollama-rocm  | time=2024-04-23T15:58:44.469Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
ollama-rocm  | time=2024-04-23T15:58:44.469Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=DEBUG source=amd_linux.go:296 msg="host rocm linked /opt/rocm/lib => /tmp/ollama3788123448/rocm"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=DEBUG source=amd_linux.go:159 msg="updated lib path" LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:/opt/rh/devtoolset-7/root:/tmp/ollama3788123448/rocm
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=DEBUG source=amd_linux.go:125 msg="skipping rocm gfx compatibility check with HSA_OVERRIDE_GFX_VERSION=10.3.0"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [0 1]"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T15:58:44.470Z level=INFO source=amd_common.go:54 msg="Setting HIP_VISIBLE_DEVICES=1"


ollama-rocm  | time=2024-04-23T15:59:56.557Z level=DEBUG source=gguf.go:57 msg="model = &llm.gguf{containerGGUF:(*llm.containerGGUF)(0xc000496b80), kv:llm.KV{}, tensors:[]*llm.Tensor(nil), parameters:0x0}"
ollama-rocm  | time=2024-04-23T15:59:59.219Z level=DEBUG source=gguf.go:193 msg="general.architecture = llama"
ollama-rocm  | time=2024-04-23T15:59:59.225Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama-rocm  | time=2024-04-23T15:59:59.225Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama-rocm  | time=2024-04-23T15:59:59.225Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama3788123448/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /usr/local/lib/libcudart.so** /opt/rh/devtoolset-7/root/libcudart.so** /tmp/ollama3788123448/rocm/libcudart.so**]"
ollama-rocm  | time=2024-04-23T15:59:59.227Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0]"
ollama-rocm  | wiring cudart library functions in /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0
ollama-rocm  | dlsym: cudaSetDevice
ollama-rocm  | dlsym: cudaDeviceSynchronize
ollama-rocm  | dlsym: cudaDeviceReset
ollama-rocm  | dlsym: cudaMemGetInfo
ollama-rocm  | dlsym: cudaGetDeviceCount
ollama-rocm  | dlsym: cudaDeviceGetAttribute
ollama-rocm  | dlsym: cudaDriverGetVersion
ollama-rocm  | cudaSetDevice err: 35
ollama-rocm  | time=2024-04-23T15:59:59.228Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T15:59:59.228Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T15:59:59.228Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so* /tmp/ollama3788123448/rocm/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [1]"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=gpu.go:121 msg="Detecting GPU type"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
ollama-rocm  | time=2024-04-23T15:59:59.230Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/tmp/ollama3788123448/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /usr/local/lib/libcudart.so** /opt/rh/devtoolset-7/root/libcudart.so** /tmp/ollama3788123448/rocm/libcudart.so**]"
ollama-rocm  | time=2024-04-23T15:59:59.232Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0]"
ollama-rocm  | wiring cudart library functions in /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0
ollama-rocm  | dlsym: cudaSetDevice
ollama-rocm  | dlsym: cudaDeviceSynchronize
ollama-rocm  | dlsym: cudaDeviceReset
ollama-rocm  | dlsym: cudaMemGetInfo
ollama-rocm  | dlsym: cudaGetDeviceCount
ollama-rocm  | dlsym: cudaDeviceGetAttribute
ollama-rocm  | dlsym: cudaDriverGetVersion
ollama-rocm  | cudaSetDevice err: 35
ollama-rocm  | time=2024-04-23T15:59:59.233Z level=INFO source=gpu.go:343 msg="Unable to load cudart CUDA management library /tmp/ollama3788123448/runners/cuda_v11/libcudart.so.11.0: your nvidia driver is too old or missing, please upgrade to run ollama"
ollama-rocm  | time=2024-04-23T15:59:59.233Z level=INFO source=gpu.go:268 msg="Searching for GPU management library libnvidia-ml.so"
ollama-rocm  | time=2024-04-23T15:59:59.233Z level=DEBUG source=gpu.go:286 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /opt/rocm/lib/libnvidia-ml.so* /usr/local/lib/libnvidia-ml.so* /opt/rh/devtoolset-7/root/libnvidia-ml.so* /tmp/ollama3788123448/rocm/libnvidia-ml.so*]"
ollama-rocm  | time=2024-04-23T15:59:59.234Z level=INFO source=gpu.go:314 msg="Discovered GPU libraries: []"
ollama-rocm  | time=2024-04-23T15:59:59.234Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T15:59:59.234Z level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
ollama-rocm  | time=2024-04-23T15:59:59.234Z level=DEBUG source=amd_linux.go:169 msg="discovering VRAM for amdgpu devices"
ollama-rocm  | time=2024-04-23T15:59:59.234Z level=DEBUG source=amd_linux.go:188 msg="amdgpu devices [1]"
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=INFO source=amd_linux.go:263 msg="[1] amdgpu totalMemory 8176M"
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=INFO source=amd_linux.go:264 msg="[1] amdgpu freeMemory  8176M"
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="5033.0 MiB" used="5033.0 MiB" available="8176.0 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="677.5 MiB"
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx2
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cuda_v11
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/rocm_v60002
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cpu_avx2
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/cuda_v11
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=DEBUG source=payload.go:68 msg="availableServers : found" file=/tmp/ollama3788123448/runners/rocm_v60002
ollama-rocm  | time=2024-04-23T15:59:59.235Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
ollama-rocm  | time=2024-04-23T15:59:59.236Z level=DEBUG source=server.go:259 msg="LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:/opt/rh/devtoolset-7/root:/tmp/ollama3788123448/rocm:/tmp/ollama3788123448/runners/rocm_v60002"
ollama-rocm  | time=2024-04-23T15:59:59.236Z level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3788123448/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 2048 --batch-size 512 --embedding --log-format json --n-gpu-layers 33 --verbose --port 35977"
ollama-rocm  | time=2024-04-23T15:59:59.237Z level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
ollama-rocm  | {"function":"server_params_parse","level":"WARN","line":2494,"msg":"server.cpp is not built with verbose logging.","tid":"135264232893504","timestamp":1713887999}
ollama-rocm  | {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2820,"msg":"build info","tid":"135264232893504","timestamp":1713887999}
ollama-rocm  | {"function":"main","level":"INFO","line":2827,"msg":"system info","n_threads":20,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"135264232893504","timestamp":1713887999,"total_threads":40}
ollama-rocm  | llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
ollama-rocm  | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama-rocm  | llama_model_loader: - kv   0:                       general.architecture str              = llama
ollama-rocm  | llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
ollama-rocm  | llama_model_loader: - kv   2:                          llama.block_count u32              = 32
ollama-rocm  | llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
ollama-rocm  | llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
ollama-rocm  | llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
ollama-rocm  | llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
ollama-rocm  | llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
ollama-rocm  | llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
ollama-rocm  | llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
ollama-rocm  | llama_model_loader: - kv  10:                          general.file_type u32              = 2
ollama-rocm  | llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
ollama-rocm  | llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
ollama-rocm  | llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
ollama-rocm  | llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama-rocm  | llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama-rocm  | time=2024-04-23T15:59:59.488Z level=DEBUG source=server.go:420 msg="server not yet available" error="server not responding"
ollama-rocm  | llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
ollama-rocm  | llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
ollama-rocm  | llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
ollama-rocm  | llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
ollama-rocm  | llama_model_loader: - kv  20:               general.quantization_version u32              = 2
ollama-rocm  | llama_model_loader: - type  f32:   65 tensors
ollama-rocm  | llama_model_loader: - type q4_0:  225 tensors
ollama-rocm  | llama_model_loader: - type q6_K:    1 tensors
ollama-rocm  | llm_load_vocab: special tokens definition check successful ( 256/128256 ).
ollama-rocm  | llm_load_print_meta: format           = GGUF V3 (latest)
ollama-rocm  | llm_load_print_meta: arch             = llama
ollama-rocm  | llm_load_print_meta: vocab type       = BPE
ollama-rocm  | llm_load_print_meta: n_vocab          = 128256
ollama-rocm  | llm_load_print_meta: n_merges         = 280147
ollama-rocm  | llm_load_print_meta: n_ctx_train      = 8192
ollama-rocm  | llm_load_print_meta: n_embd           = 4096
ollama-rocm  | llm_load_print_meta: n_head           = 32
ollama-rocm  | llm_load_print_meta: n_head_kv        = 8
ollama-rocm  | llm_load_print_meta: n_layer          = 32
ollama-rocm  | llm_load_print_meta: n_rot            = 128
ollama-rocm  | llm_load_print_meta: n_embd_head_k    = 128
ollama-rocm  | llm_load_print_meta: n_embd_head_v    = 128
ollama-rocm  | llm_load_print_meta: n_gqa            = 4
ollama-rocm  | llm_load_print_meta: n_embd_k_gqa     = 1024
ollama-rocm  | llm_load_print_meta: n_embd_v_gqa     = 1024
ollama-rocm  | llm_load_print_meta: f_norm_eps       = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
ollama-rocm  | llm_load_print_meta: f_clamp_kqv      = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
ollama-rocm  | llm_load_print_meta: f_logit_scale    = 0.0e+00
ollama-rocm  | llm_load_print_meta: n_ff             = 14336
ollama-rocm  | llm_load_print_meta: n_expert         = 0
ollama-rocm  | llm_load_print_meta: n_expert_used    = 0
ollama-rocm  | llm_load_print_meta: causal attn      = 1
ollama-rocm  | llm_load_print_meta: pooling type     = 0
ollama-rocm  | llm_load_print_meta: rope type        = 0
ollama-rocm  | llm_load_print_meta: rope scaling     = linear
ollama-rocm  | llm_load_print_meta: freq_base_train  = 500000.0
ollama-rocm  | llm_load_print_meta: freq_scale_train = 1
ollama-rocm  | llm_load_print_meta: n_yarn_orig_ctx  = 8192
ollama-rocm  | llm_load_print_meta: rope_finetuned   = unknown
ollama-rocm  | llm_load_print_meta: ssm_d_conv       = 0
ollama-rocm  | llm_load_print_meta: ssm_d_inner      = 0
ollama-rocm  | llm_load_print_meta: ssm_d_state      = 0
ollama-rocm  | llm_load_print_meta: ssm_dt_rank      = 0
ollama-rocm  | llm_load_print_meta: model type       = 7B
ollama-rocm  | llm_load_print_meta: model ftype      = Q4_0
ollama-rocm  | llm_load_print_meta: model params     = 8.03 B
ollama-rocm  | llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
ollama-rocm  | llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
ollama-rocm  | llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
ollama-rocm  | llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
ollama-rocm  | llm_load_print_meta: LF token         = 128 'Ä'
ollama-rocm  | 
ollama-rocm  | rocBLAS error: Could not initialize Tensile host: No devices found
ollama-rocm  | time=2024-04-23T16:00:00.694Z level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 error:Could not initialize Tensile host: No devices found"
ollama-rocm  | time=2024-04-23T16:00:00.694Z level=DEBUG source=server.go:832 msg="stopping llama server"
ollama-rocm  | [GIN] 2024/04/23 - 16:00:00 | 500 |   4.13850964s |   192.168.7.120 | POST     "/api/generate"
@dhiltgen commented on GitHub (Apr 24, 2024):

I think this might be fixed by the refactoring I've done in the GPU discovery logic for concurrency in #3418.

Can you share the output of `rocminfo` on your system?

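One quick way to capture that output from the running container (the container name `ollama-rocm` is an assumption based on the logs above):

```shell
# Run rocminfo inside the container and keep only the interesting lines
# (device marketing names and gfx targets); fall back to a message if
# the container or the tool isn't available.
docker exec ollama-rocm rocminfo 2>/dev/null | grep -E 'Marketing Name|gfx' \
  || echo "rocminfo unavailable"
```

If `rocminfo` isn't present in the image, running it on the host gives the same information.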
@dhiltgen commented on GitHub (Apr 28, 2024):

The pre-release for 0.1.33 is available now. The container images are available if you pull the versioned tags from https://hub.docker.com/r/ollama/ollama/tags (we'll update "latest" once we drop the pre-release status).

@ic4-y commented on GitHub (Apr 28, 2024):

Thanks for letting me know! I will have some time to test it in the coming days and give feedback!

@fnord123 commented on GitHub (May 2, 2024):

I am seeing the same issue with `ollama/ollama:0.1.33-rc7-rocm`.

[rocminfo.txt](https://github.com/ollama/ollama/files/15182440/rocminfo.txt)
[ollama.log](https://github.com/ollama/ollama/files/15182441/ollama.log)

The request sent: `curl http://192.168.1.244:11434/api/generate -d '{"model":"llama3","prompt": "why is the sky blue?" }'`

Any suggestions? From the Ollama log it is detecting the AMD GPU, at least.

@dhiltgen commented on GitHub (May 2, 2024):

It looks like you're setting the override variable, which turns off the logic we use to verify supported gfx types, and it seems `/opt/rocm/` was chosen as the ROCm library location. My suspicion is that it's missing the rocBLAS Tensile files. Do you see anything when you run `ls /opt/rocm/lib/rocblas/library/*gfx*`?

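As a sketch of that check (the path is the one suggested above; `gfx1030` is used because it is the target the `10.3.0` override maps to):

```shell
# List the rocBLAS Tensile kernel files for the gfx1030 target; if the
# directory is missing or has no files for that target, say so instead.
ls /opt/rocm/lib/rocblas/library/ 2>/dev/null | grep gfx1030 \
  || echo "no gfx1030 Tensile files found"
```

An empty listing would be consistent with the suspicion above that the Tensile files are missing.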
@ic4-y commented on GitHub (May 2, 2024):

Which override variable are you referring to, @dhiltgen? `HSA_OVERRIDE_GFX_VERSION`? I haven't been able to test it myself yet, but I am setting that too.

And the reason is that you have to set it: the Radeon Pro W6600, like many other AMD GPUs such as @fnord123's nearly identical 6600 XT, is not officially supported by ROCm but works de facto when you set the variable. This used to work with no issues on earlier versions of Ollama.

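To make the override concrete, here is a small sketch (the gfx-to-override mapping is my illustration, not an official ROCm support table): the W6600 and 6600 XT report the gfx1032 target, for which no prebuilt rocBLAS kernels ship, so the runtime is told to behave as if the card were gfx1030.

```shell
# Map the reported gfx target to the closest supported one.
# gfx1032 (W6600 / RX 6600 XT) has no prebuilt rocBLAS kernels, so it
# is spoofed as gfx1030, expressed as version 10.3.0. gfx1031 is
# included here as an assumption for similar RDNA2 parts.
gfx="gfx1032"   # what the W6600 actually reports
case "$gfx" in
  gfx1030|gfx1031|gfx1032)
    export HSA_OVERRIDE_GFX_VERSION=10.3.0
    ;;
esac
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```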
@fnord123 commented on GitHub (May 2, 2024):

I tried three different configurations based on your question, @dhiltgen:

1. No environment variable set. The AMD 6600 XT is not found at all: [ollama.noenv.nohostfs.log](https://github.com/ollama/ollama/files/15192973/ollama.noenv.nohostfs.log)
2. `HSA_OVERRIDE_GFX_VERSION` set. The GPU is found, but the Tensile error occurs: [ollama.envset.nohostfs.log](https://github.com/ollama/ollama/files/15192986/ollama.envset.nohostfs.log)
3. Environment variable set, plus the host's /opt/rocm directory mapped into the Docker container. The GPU is found and the ROCm library loads, but things still fail. The logs show `libtinfo.so.6` not being found; on my host it lives in /usr/lib/x86_64-linux-gnu, but I'd expect libtinfo.so to be a standard library and therefore present in the container: [ollama.envset.hostrocm.log](https://github.com/ollama/ollama/files/15193001/ollama.envset.hostrocm.log)

All three of the above fail. The first two fail immediately; the third fails after a long timeout (see the log).

Is running in a container + running AMD just not tested / not supported?

@ic4-y commented on GitHub (May 2, 2024):

> Is running in a container + running AMD just not tested / not supported?

To answer that, @fnord123: based on my experience, I am fairly confident you will be able to run Ollama below version 0.1.29. Try it out with the older containers and report back :)

It works; however, some issue introduced in version 0.1.29 broke it for my W6600.

In the meantime there have been updates to both Ollama and the ROCm libraries shipped with the container. I think the older container was most likely on ROCm 5.7.1; now it is maybe 6.1 or something like that?

Testing everything is difficult for a project because:

a) there are many different GPU architectures with slight differences, and
b) AMD does not officially support the W6600 or 6600 XT on ROCm. They work, but are not officially supported.

@fnord123 commented on GitHub (May 3, 2024):

So I tried 0.1.28 and Ollama runs, but it gives identical results whether or not I let the Docker container access /dev/kfd and /dev/dri. Either way it hits around 6 tokens per second. So @icodeforyou-dot-net, my impression is that 0.1.28 is simply not using the GPU at all, whereas 0.1.33 does try to use the GPU (and fails).

I've attached the two 0.1.28 logs here for reference:
[ollama.1.28.gpudevice.log](https://github.com/ollama/ollama/files/15196229/ollama.1.28.gpudevice.log)
[ollama.1.28.nogpudevice.log](https://github.com/ollama/ollama/files/15196230/ollama.1.28.nogpudevice.log)

I also just tried 0.1.33. It looks like a check was added (versus rc7) for the ROCm libraries: when I don't let the Docker container access /opt/rocm it switches to CPU. When I do let it access /opt/rocm on the host, it tries to use the GPU but fails looking for a libnuma.so.1 library, so that needs to be added to the Docker image.

Finally, when I add /usr/lib/x86_64-linux-gnu (which is where libnuma.so.1 is on my host), it still fails, this time with:

```
/usr/share/libdrm/amdgpu.ids: No such file or directory

rocBLAS error: Could not initialize Tensile host: No devices found
```

Log file attached: [ollama.1.33.tensileerror.log](https://github.com/ollama/ollama/files/15196228/ollama.1.33.tensileerror.log)

Edit: Giving access to the host's /usr/share/libdrm/amdgpu.ids still results in the rocBLAS error.

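For chasing these missing-shared-library errors (`libtinfo.so.6`, `libnuma.so.1`), `ldd` shows which dependencies fail to resolve inside the container. A generic sketch of the pattern (`/bin/ls` is just a stand-in target here; the real check would point at the rocBLAS library or runner binary):

```shell
# Print any unresolved shared-library dependencies of a binary;
# report success when everything resolves.
ldd /bin/ls | grep "not found" || echo "all dependencies resolved"
```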
@fnord123 commented on GitHub (May 3, 2024):

Ok, I figured out how to get things working after trial and error. The Radeon 6600 XT is working fine; here is my docker compose file showing what I had to do:

```
services:
  ollama:
    image: ollama/ollama:0.1.33
    container_name: ollama
    networks:
      dockervlan:
        ipv4_address: 192.168.1.244
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    volumes:
      - /home/foo/docker/ollama:/root/.ollama
      - /opt/rocm:/opt/rocm
      - /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu
    environment:
      - TZ=America/Los_Angeles
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    pull_policy: always
    tty: true
    restart: unless-stopped
```

The last bug was that I had set HSA_OVERRIDE_GFX_VERSION to \`10.3.0\` with literal backquotes; removing the backquotes fixed things.

I think there is still a bug here in that the docker package should include the /opt/rocm and /usr/lib/x86_64-linux-gnu folders within the container. But I'm pleased as punch that 1.33 works on a 6600xt!

Thanks @dhiltgen for your patience and @icodeforyou-dot-net for getting me investigating :)

@ic4-y commented on GitHub (May 3, 2024):

@fnord123 thanks for posting the compose.yaml, I will give this a try later. Maybe it'll work for me too :)

@dhiltgen commented on GitHub (May 4, 2024):

@fnord123 happy to hear you got it working!

> I think there is still a bug here in that the docker package should include the /opt/rocm and /usr/lib/x86_64-linux-gnu folders within the container. But I'm pleased as punch that 1.33 works on a 6600xt!

You're using the incorrect image: you need the ROCm-specific one, `image: ollama/ollama:0.1.33-rocm`. Unfortunately the ROCm contents are quite large, so we split them apart from the CPU+CUDA image so that users who don't have Radeon cards don't have to download 4+ GB of unused libraries on their container nodes.

https://hub.docker.com/r/ollama/ollama/tags

I'm going to mark this one closed. @icodeforyou-dot-net if you still run into problems, please share your updated config and server logs and I'll re-open.

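Putting this correction together with the working setup above, a minimal compose sketch (assuming the same host paths as fnord123's file; with the `-rocm` image the /opt/rocm and /usr/lib/x86_64-linux-gnu mounts should no longer be needed, since the ROCm libraries ship inside the image):

```
services:
  ollama:
    image: ollama/ollama:0.1.33-rocm
    container_name: ollama
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    volumes:
      - /home/foo/docker/ollama:/root/.ollama
    environment:
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    restart: unless-stopped
```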
Reference: github-starred/ollama#2368