[GH-ISSUE #4798] The ROCm driver for the RX 7900 XTX has been installed but cannot be used normally. #3027

Closed
opened 2026-04-12 13:26:09 -05:00 by GiteaMirror · 5 comments

Originally created by @HaoZhang66 on GitHub (Jun 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4798

Error: llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found

GiteaMirror added the gpu, needs more info, amd labels 2026-04-12 13:26:09 -05:00

@MichaelFomenko commented on GitHub (Jun 4, 2024):

OS?


@pdevine commented on GitHub (Jun 5, 2024):

@HaoZhang66 please post the logs and some details of what system you're using.


@dhiltgen commented on GitHub (Jun 18, 2024):

Please make sure you're running the latest AMD GPU driver, and upgrade to the latest Ollama. If you're still seeing an error initializing your GPU, please share your server log and I'll reopen the issue.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
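
Not part of the original reply, but a quick way to check both suggestions on Linux, assuming the ROCm utilities rocminfo and rocm-smi are installed (they ship with AMD's packaged driver):

```
# Confirm the ROCm user-space stack can see the card
rocminfo | grep -i gfx    # should print the GPU's gfx target, e.g. gfx1100 for an RX 7900 XTX
rocm-smi                  # short device report; errors out if no GPU is visible

# Confirm the Ollama version after upgrading
ollama --version
```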


@osa1 commented on GitHub (Sep 7, 2024):

I'm having the same problem.

- Ubuntu 24.04.1
- AMD ATI Radeon RX 7800 XT
- Drivers installed following https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/native-install/ubuntu.html

ollama serve output:
$ ./bin/ollama serve
2024/09/07 09:27:34 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/omer/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-07T09:27:34.894+02:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-09-07T09:27:34.894+02:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-07T09:27:34.894+02:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.9)"
time=2024-09-07T09:27:34.894+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2247750396/runners
time=2024-09-07T09:27:39.280+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-09-07T09:27:39.280+02:00 level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-07T09:27:39.285+02:00 level=INFO source=amd_linux.go:345 msg="amdgpu is supported" gpu=0 gpu_type=gfx1101
time=2024-09-07T09:27:39.285+02:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=rocm variant="" compute=gfx1101 driver=6.8 name=1002:747e total="16.0 GiB" available="14.8 GiB"
[GIN] 2024/09/07 - 09:27:44 | 200 |     259.742µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/09/07 - 09:27:44 | 404 |      80.558µs |       127.0.0.1 | POST     "/api/show"
time=2024-09-07T09:27:46.453+02:00 level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 16 291 MB part(s)"
time=2024-09-07T09:28:00.604+02:00 level=INFO source=download.go:370 msg="8eeb52dfb3bb part 10 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
[GIN] 2024/09/07 - 09:30:21 | 200 |      84.085µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/07 - 09:30:34 | 200 |     121.197µs |       127.0.0.1 | GET      "/api/tags"
time=2024-09-07T09:34:35.617+02:00 level=INFO source=download.go:175 msg="downloading 73b313b5552d in 1 1.4 KB part(s)"
time=2024-09-07T09:34:37.641+02:00 level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)"
time=2024-09-07T09:34:39.721+02:00 level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)"
time=2024-09-07T09:34:41.764+02:00 level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)"
[GIN] 2024/09/07 - 09:34:45 | 200 |          7m0s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/09/07 - 09:34:45 | 200 |    8.726661ms |       127.0.0.1 | POST     "/api/show"
time=2024-09-07T09:34:45.047+02:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/home/omer/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe gpu=0 parallel=4 available=15922065408 required="6.2 GiB"
time=2024-09-07T09:34:45.047+02:00 level=INFO source=memory.go:309 msg="offload to rocm" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[14.8 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-09-07T09:34:45.048+02:00 level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama2247750396/runners/rocm_v60102/ollama_llama_server --model /home/omer/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 44207"
time=2024-09-07T09:34:45.048+02:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-07T09:34:45.048+02:00 level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-07T09:34:45.048+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="137464908120896" timestamp=1725694485
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="137464908120896" timestamp=1725694485 total_threads=32
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="31" port="44207" tid="137464908120896" timestamp=1725694485
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /home/omer/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens cache size = 256
time=2024-09-07T09:34:45.299+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256

rocBLAS error: Could not initialize Tensile host: No devices found
time=2024-09-07T09:34:45.751+02:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2024-09-07T09:34:46.453+02:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error:Could not initialize Tensile host: No devices found"

ollama run llama3.1 output:

$ ./ollama run llama3.1
pulling manifest
pulling 8eeb52dfb3bb... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 73b313b5552d... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████▏ 1.4 KB
pulling 0ba8f0e314b4... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████▏  12 KB
pulling 56bb8bd477a5... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████▏   96 B
pulling 1a4c3c319823... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████▏  485 B
verifying sha256 digest
writing manifest
success
Error: llama runner process has terminated: error:Could not initialize Tensile host: No devices found

ollama version is 0.3.9, installed a few minutes ago following the manual installation instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md.
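
One detail worth flagging in the log above: the scheduler's "amdgpu is supported" line comes from Ollama reading sysfs, while the rocBLAS failure happens when the runner enumerates devices through the HSA runtime, which needs direct access to /dev/kfd and the DRM render nodes. A minimal sketch for comparing the two views, run as the same user the server runs as (standard device paths assumed):

```
ls -l /dev/kfd /dev/dri/renderD*   # the runner needs read/write access to these nodes
id                                 # look for 'render' and/or 'video' group membership
rocminfo | grep -i gfx             # HSA-level view; no output here matches the rocBLAS error
```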


@dhiltgen commented on GitHub (Sep 24, 2024):

@osa1 there's a good chance this is a permission problem. In 0.3.11 we added back a check to verify permissions and give a better error message. Depending on your distro, ensuring that the user the Ollama server runs as has access to the GPU driver may involve group membership, SELinux, etc. https://github.com/ollama/ollama/blob/main/docs/gpu.md#container-permission
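
A minimal sketch of the usual fix on Debian/Ubuntu-style systems, assuming the stock render/video group names and the ollama systemd service created by the install script (for a manually launched server, substitute your own user and log out and back in):

```
# Grant the service user access to the GPU device nodes (group names vary by distro)
sudo usermod -aG render,video ollama

# Restart the service so the new group membership takes effect
sudo systemctl restart ollama
```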

Reference: github-starred/ollama#3027