[GH-ISSUE #4358] No Devices Found on Ryzen 7 8840u #64757

Closed
opened 2026-05-03 18:42:40 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @madelponte on GitHub (May 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4358

What is the issue?

When I try to load a model I receive this error message:
Error: llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found

Here is the Docker Compose file I am using to run this:

version: '3'
services:
  ollama:
    image: ollama/ollama:rocm
    container_name: ollama
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      - video
    ports:
      - "11434:11434"
    security_opt:
      - "seccomp:unconfined"
    environment:
      - HSA_OVERRIDE_GFX_VERSION="11.0.3"
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:
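One detail worth noting about the config above (a sketch of a possible cause, not a confirmed fix): in Compose's list-style `environment:` syntax, everything after the `=` is taken literally, so the quotes become part of the value. The logs below show the override arriving as `HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""`, i.e. with embedded quotes, which ROCm may not parse as a valid gfx version. An unquoted form would be:

```
    environment:
      # Quotes after '=' are kept as part of the value in Compose list syntax;
      # the logs show HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"". Unquoted form:
      - HSA_OVERRIDE_GFX_VERSION=11.0.3
```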

I assumed that since this CPU uses a Radeon 780M, the same override as the previous 7000-series iGPUs would work, but that doesn't appear to be the case.

I have also tried manually setting the VRAM allocation in the BIOS to 8 GB.
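As a quick sanity check (a generic diagnostic sketch, not something from this report), it can help to confirm the container actually sees the device nodes mapped in the Compose file; these are the standard ROCm paths:

```shell
#!/bin/sh
# Check that the ROCm device nodes passed through in the compose file
# are visible from inside the container (run via `docker exec ollama sh`).
for d in /dev/kfd /dev/dri; do
  if [ -e "$d" ]; then
    echo "present: $d"
  else
    echo "missing: $d"
  fi
done
```

If either node reports missing inside the container, the `devices:` mapping (or host permissions on the `video`/`render` groups) is the first thing to revisit.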

Here are the full logs:

2024/05/11 12:13:16 routes.go:1006: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-11T12:13:16.258Z level=INFO source=images.go:704 msg="total blobs: 5"
time=2024-05-11T12:13:16.258Z level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-11T12:13:16.258Z level=INFO source=routes.go:1052 msg="Listening on [::]:11434 (version 0.1.36)"
time=2024-05-11T12:13:16.259Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3497058117/runners
time=2024-05-11T12:13:18.342Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-11T12:13:18.344Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:18.345Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:18.345Z level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1103 driver=0.0 name=1002:1900 total="16.0 GiB" available="16.0 GiB"
[GIN] 2024/05/11 - 12:13:19 | 200 |      38.371µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/11 - 12:13:19 | 200 |     629.833µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/05/11 - 12:13:31 | 200 |      40.516µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/11 - 12:13:31 | 200 |    1.554865ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/05/11 - 12:13:31 | 200 |     517.854µs |       127.0.0.1 | POST     "/api/show"
time=2024-05-11T12:13:31.125Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:31.125Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:32.412Z level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="16.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-11T12:13:32.413Z level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="16.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-11T12:13:32.414Z level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama3497058117/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-60af83b47d53e839830a77eb7cf8b7d474a8b4f778aca21dc73b337a304c4b54 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 1 --port 43325"
time=2024-05-11T12:13:32.414Z level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-11T12:13:32.414Z level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-11T12:13:32.415Z level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="140500936236096" timestamp=1715429612
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140500936236096" timestamp=1715429612 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="43325" tid="140500936236096" timestamp=1715429612
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-60af83b47d53e839830a77eb7cf8b7d474a8b4f778aca21dc73b337a304c4b54 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ Ġ��Ġ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
time=2024-05-11T12:13:32.667Z level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:                                             
llm_load_vocab: ************************************        
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!        
llm_load_vocab: CONSIDER REGENERATING THE MODEL             
llm_load_vocab: ************************************        
llm_load_vocab:                                             
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.58 GiB (4.89 BPW) 
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'

rocBLAS error: Could not initialize Tensile host: No devices found
time=2024-05-11T12:13:33.420Z level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
[GIN] 2024/05/11 - 12:13:33 | 500 |   2.30079255s |       127.0.0.1 | POST     "/api/chat"
time=2024-05-11T12:13:33.424Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:33.424Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:33.680Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:33.680Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:33.929Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:33.929Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:34.177Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:34.178Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:34.429Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:34.429Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:34.681Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:34.681Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:34.928Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:34.928Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:35.179Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:35.180Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:35.429Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:35.429Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:35.680Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:35.681Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:35.930Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:35.931Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:36.180Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:36.181Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:36.430Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:36.431Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:36.680Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:36.680Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:36.929Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:36.930Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:37.179Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:37.180Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:37.430Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:37.430Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:37.681Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:37.682Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:37.930Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:37.930Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:38.179Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:38.180Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:38.425Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.00502729
time=2024-05-11T12:13:38.430Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:38.431Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:38.675Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.254936802
time=2024-05-11T12:13:38.680Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-11T12:13:38.681Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""
time=2024-05-11T12:13:38.925Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.505078878

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.1.36

https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:36.680Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:36.929Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:36.930Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:37.179Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:37.180Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:37.430Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:37.430Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:37.681Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:37.682Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" 
time=2024-05-11T12:13:37.930Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:37.930Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:38.179Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:38.180Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:38.425Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.00502729 time=2024-05-11T12:13:38.430Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:38.431Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:38.675Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.254936802 time=2024-05-11T12:13:38.680Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2024-05-11T12:13:38.681Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION="\"11.0.3\"" time=2024-05-11T12:13:38.925Z level=WARN source=sched.go:507 msg="gpu VRAM 
usage didn't recover within timeout" seconds=5.505078878 ``` ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.1.36
GiteaMirror added the bug label 2026-05-03 18:42:40 -05:00
@kha84 commented on GitHub (May 11, 2024):

Have you checked whether llama.cpp supports the integrated GPU in the recent AMD APUs? I was wondering about such support about a year ago, but it wasn't there yet. If the llama.cpp folks have added it, then it might appear in ollama as well.

<!-- gh-comment-id:2105935042 -->
@kha84 commented on GitHub (May 11, 2024):

Well, it seems that llama.cpp supports it: https://github.com/ggerganov/llama.cpp/pull/4449

<!-- gh-comment-id:2105979318 -->
@cyai commented on GitHub (May 14, 2024):

Found any solutions? I am also facing the same issue. My llama keeps hanging when left idle.

`msg="waiting for server to become available" status="llm server error"`

<!-- gh-comment-id:2110112186 -->
@dhiltgen commented on GitHub (May 21, 2024):

We do not currently support iGPUs. This is tracked via #2637

<!-- gh-comment-id:2123581299 -->
@gxmlfx commented on GitHub (May 24, 2024):

HSA_OVERRIDE_GFX_VERSION should be "11.0.0"; gfx1103 is not supported. It works for me.

<!-- gh-comment-id:2129047916 -->
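Separately from which version to override to, the log lines above print `HSA_OVERRIDE_GFX_VERSION="\"11.0.3\""`, which suggests the quotes in the compose file were passed into the value literally. A minimal sketch of the `environment` block with the value unquoted (list syntax as in the original compose file; whether this alone fixes device detection is an assumption):

```yaml
environment:
  # Unquoted so the runtime sees 11.0.0, not the literal string "11.0.0"
  - HSA_OVERRIDE_GFX_VERSION=11.0.0
```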
@madelponte commented on GitHub (May 24, 2024):

I tried "11.0.0" as well, it gives the same error.

<!-- gh-comment-id:2130175228 -->
@gxmlfx commented on GitHub (May 25, 2024):

> I tried "11.0.0" as well, it gives the same error.

Did you correctly install ROCm on the host? `amdgpu-install` with the parameter `--no-dkms`.
I'm not using Docker; ollama runs well on the host. But according to https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html, it seems you need DKMS to run ROCm in Docker. I'm not sure about that.
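A quick way to check the host side of this: a small sketch that probes the paths named in the warnings above (`/sys/module/amdgpu/version`) and the device nodes the compose file passes through (`/dev/kfd`, `/dev/dri`). Output depends entirely on the host; this only confirms whether the driver and device nodes exist before the container ever starts.

```shell
#!/bin/sh
# Sanity-check the host before starting the ROCm container.
# /sys/module/amdgpu/version is the file the ollama warning says is missing;
# /dev/kfd and /dev/dri are the device nodes the compose file maps in.
if [ -r /sys/module/amdgpu/version ]; then
    echo "amdgpu driver version: $(cat /sys/module/amdgpu/version)"
else
    echo "amdgpu version file missing: driver likely not loaded"
fi
for dev in /dev/kfd /dev/dri; do
    if [ -e "$dev" ]; then echo "$dev present"; else echo "$dev missing"; fi
done
```

If the version file is missing on the host itself, no container configuration will make ROCm see the GPU.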

<!-- gh-comment-id:2130742652 -->
Reference: github-starred/ollama#64757