[GH-ISSUE #4492] Ollama crashes after idle and can't process new requests #2811

Closed
opened 2026-04-12 13:08:38 -05:00 by GiteaMirror · 6 comments

Originally created by @artem-zinnatullin on GitHub (May 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4492

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

When I leave ollama running idle for some time, it crashes and stops responding to requests:

llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'

rocBLAS error: Could not initialize Tensile host: No devices found
[GIN] 2024/05/17 - 01:05:34 | 500 |  1.472888689s |   192.168.1.112 | POST     "/api/chat"
time=2024-05-17T01:05:34.549-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
time=2024-05-17T01:05:34.553-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100

Ollama (ollama/ollama:0.1.37-rocm) is running in Docker (actually Kubernetes) on Ubuntu Server with an AMD Radeon RX 7900 XTX GPU.
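For context, the container has the ROCm device nodes passed through so the GPU is visible. The setup is roughly equivalent to the following docker run sketch (per the Ollama ROCm Docker instructions; the actual Kubernetes manifest maps the same devices, so treat this as a minimal approximation, not the exact deployment):

```sh
# Minimal ROCm deployment sketch: the real setup is a K8s pod,
# but it exposes the same /dev/kfd and /dev/dri device nodes.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:0.1.37-rocm
```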

Full log from start to crash:
2024/05/16 22:16:57 routes.go:1006: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:8 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-16T22:16:57.384-06:00 level=INFO source=images.go:704 msg="total blobs: 75"
time=2024-05-16T22:16:57.387-06:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-16T22:16:57.388-06:00 level=INFO source=routes.go:1052 msg="Listening on [::]:11434 (version 0.1.37)"
time=2024-05-16T22:16:57.389-06:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1104229860/runners
time=2024-05-16T22:16:58.936-06:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-16T22:16:58.942-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:16:58.942-06:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1100 driver=6.3 name=1002:744c total="24.0 GiB" available="24.0 GiB"
[GIN] 2024/05/16 - 22:17:08 | 200 |    2.540133ms |    10.244.0.173 | GET      "/api/tags"
[GIN] 2024/05/16 - 22:17:08 | 200 |    1.311845ms |    10.244.0.173 | GET      "/api/tags"
[GIN] 2024/05/16 - 22:17:09 | 200 |    1.439971ms |    10.244.0.173 | GET      "/api/tags"
[GIN] 2024/05/16 - 22:17:09 | 200 |    1.218168ms |    10.244.0.173 | GET      "/api/tags"
[GIN] 2024/05/16 - 22:17:19 | 200 |      41.469µs |    10.244.0.173 | GET      "/api/version"
time=2024-05-16T22:17:21.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:17:22.017-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-16T22:17:22.017-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 34471"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-16T22:17:22.018-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="133496113871936" timestamp=1715919442
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="133496113871936" timestamp=1715919442 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="34471" tid="133496113871936" timestamp=1715919442
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-05-16T22:17:22.269-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llm_load_tensors: ggml ctx size =    0.30 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =  4155.99 MiB
llm_load_tensors:        CPU buffer size =   281.81 MiB
......................................................................................
llama_new_context_with_model: n_ctx      = 16384
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =  2048.00 MiB
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model:  ROCm_Host  output buffer size =     4.04 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =  1088.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =    40.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="133496113871936" timestamp=1715919446
time=2024-05-16T22:17:26.783-06:00 level=INFO source=server.go:529 msg="llama runner started in 4.77 seconds"
[GIN] 2024/05/16 - 22:17:28 | 200 |  7.031066272s |    10.244.0.173 | POST     "/api/chat"
[GIN] 2024/05/16 - 22:17:30 | 200 |  253.996638ms |    10.244.0.173 | POST     "/v1/chat/completions"
[GIN] 2024/05/16 - 22:17:39 | 200 |    1.163486ms |   192.168.1.112 | GET      "/api/tags"
[GIN] 2024/05/16 - 22:19:07 | 200 |   3.71407638s |   192.168.1.112 | POST     "/api/chat"
[GIN] 2024/05/16 - 22:20:57 | 200 |  4.096279991s |   192.168.1.112 | POST     "/api/chat"
time=2024-05-16T22:25:57.760-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-16T22:25:58.014-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:04:51.795-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:04:52.504-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:04:52.504-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:04:52.504-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 37839"
time=2024-05-17T01:04:52.505-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-17T01:04:52.505-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-17T01:04:52.505-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="127493904571456" timestamp=1715929492
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="127493904571456" timestamp=1715929492 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="37839" tid="127493904571456" timestamp=1715929492
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-05-17T01:04:52.756-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'

rocBLAS error: Could not initialize Tensile host: No devices found
time=2024-05-17T01:05:11.060-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
[GIN] 2024/05/17 - 01:05:11 | 500 | 19.269265911s |   192.168.1.112 | POST     "/api/chat"
time=2024-05-17T01:05:11.064-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.568-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:11.818-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.068-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.318-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.568-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:12.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:13.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.068-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:14.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.569-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:15.819-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.065-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.00548985
time=2024-05-17T01:05:16.069-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.315-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.255538881
time=2024-05-17T01:05:16.319-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:16.565-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.505473591
time=2024-05-17T01:05:33.080-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:33.795-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=memory.go:127 msg="offload to gpu" layers.real=-1 layers.estimate=33 memory.available="24.0 GiB" memory.required.full="7.7 GiB" memory.required.partial="7.7 GiB" memory.required.kv="2.0 GiB" memory.weights.total="4.2 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:318 msg="starting llama server" cmd="/tmp/ollama1104229860/runners/rocm_v60002/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 8 --port 46195"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=sched.go:333 msg="loaded runners" count=1
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:488 msg="waiting for llama runner to start responding"
time=2024-05-17T01:05:33.796-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="952d03d" tid="133936011103296" timestamp=1715929533
INFO [main] system info | n_threads=12 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="133936011103296" timestamp=1715929533 total_threads=24
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="23" port="46195" tid="133936011103296" timestamp=1715929533
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 32
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-05-17T01:05:34.048-06:00 level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'

rocBLAS error: Could not initialize Tensile host: No devices found
[GIN] 2024/05/17 - 01:05:34 | 500 |  1.472888689s |   192.168.1.112 | POST     "/api/chat"
time=2024-05-17T01:05:34.549-06:00 level=ERROR source=sched.go:339 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:Could not initialize Tensile host: No devices found"
time=2024-05-17T01:05:34.553-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:34.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:35.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:36.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:37.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.308-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:38.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.553-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.003790879
time=2024-05-17T01:05:39.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:39.803-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.253690762
time=2024-05-17T01:05:39.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100
time=2024-05-17T01:05:40.053-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.50376608

OS

Linux, Docker

GPU

AMD

CPU

AMD

Ollama version

ollama/ollama:0.1.37-rocm
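Note the timeline in the log: the last successful request completes at 22:20:57 and the scheduler probes the GPU around 22:25:57, consistent with the default ~5 minute keep_alive unloading the model; the crash then happens when the request at 01:04 tries to reload it. After such an idle period, any chat request reproduces the 500. A minimal sketch (model tag and prompt are illustrative, assuming a Llama 3 8B Q4_0 model is pulled locally):

```sh
# Reproduces the failure once the model has been unloaded for hours:
# returns HTTP 500 with "llama runner process has terminated:
# signal: aborted (core dumped) ... Could not initialize Tensile host"
curl -i http://<node-ip>:11434/api/chat -d '{
  "model": "llama3:8b-instruct-q4_0",
  "messages": [{"role": "user", "content": "hello"}]
}'
```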

time=2024-05-17T01:05:37.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:37.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:37.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:38.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:38.308-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:38.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:38.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:39.057-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:39.307-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:39.553-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.003790879 time=2024-05-17T01:05:39.557-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:39.803-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.253690762 time=2024-05-17T01:05:39.807-06:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100 time=2024-05-17T01:05:40.053-06:00 level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.50376608 ``` </details> ### OS Linux, Docker ### GPU AMD ### CPU AMD ### Ollama version ollama/ollama:0.1.37-rocm
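
The `rocBLAS error: Could not initialize Tensile host: No devices found` abort suggests the runner process could no longer see the GPU device nodes, which matches `amdgpu_top` also failing (see the comments below). A quick sanity check, as a sketch: this assumes the standard ROCm container setup where `/dev/kfd` and `/dev/dri` are passed through, and that `rocminfo` is available in the image.

```
# Sketch of a diagnostic to run inside the ollama container after a hang.
# Both device nodes must be present for rocBLAS/Tensile to find the GPU.
ls -l /dev/kfd /dev/dri

# If the ROCm tools are installed, this should list a gfx1100 agent; an
# empty result is consistent with "No devices found" above.
rocminfo | grep -i gfx
```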
GiteaMirror added the amd, bug labels 2026-04-12 13:08:38 -05:00

@artem-zinnatullin commented on GitHub (May 17, 2024):

Another user, @cyai, reported what seems like the exact same error a few days ago: https://github.com/ollama/ollama/issues/4358#issuecomment-2110112186


@artem-zinnatullin commented on GitHub (May 17, 2024):

Actually, not only did restarting `ollama` not help, [`amdgpu_top`](https://github.com/Umio-Yasuno/amdgpu_top) also failed to find the device! A full computer reboot fixed it. Crazy.

I do have `GPU_MAX_HW_QUEUES=1` set for ollama because otherwise my AMD 7900XTX GPU idles at ~100 W instead of the normal ~29 W; not sure if this is related, though.
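
For reference, a minimal sketch of how that workaround is wired up (illustrative `docker run` flags only; the real deployment here is a K8s pod, and the device passthrough flags are the usual ones for ROCm containers):

```
# Sketch, not the actual manifest: ROCm image with the idle-power workaround.
# GPU_MAX_HW_QUEUES=1 is an HSA/ROCm environment variable, not an ollama flag.
docker run -d --name ollama \
  --device /dev/kfd --device /dev/dri \
  -e GPU_MAX_HW_QUEUES=1 \
  -p 11434:11434 \
  ollama/ollama:0.1.37-rocm
```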


@dhiltgen commented on GitHub (May 21, 2024):

@artem-zinnatullin it seems plausible that this is related to that workaround; see https://github.com/ROCm/ROCm/issues/2625

If you don't use the workaround and let it run at full power, does it still hang?


@dhiltgen commented on GitHub (Jun 21, 2024):

If you're still seeing crashes, please try upgrading to 0.1.45, where we've updated ROCm to version 6.1.1, and try without `GPU_MAX_HW_QUEUES` to see if that resolves the problem.
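
A sketch of that test (the image tag is assumed to follow the same pattern as the `-rocm` tags above; adjust for your own deployment):

```
# Illustrative: pull the updated ROCm build and run it *without* the
# GPU_MAX_HW_QUEUES workaround, to isolate whether the workaround causes the hang.
docker pull ollama/ollama:0.1.45-rocm
docker run -d --name ollama \
  --device /dev/kfd --device /dev/dri \
  -p 11434:11434 \
  ollama/ollama:0.1.45-rocm
```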


@kristofer commented on GitHub (Jul 31, 2024):

I might still be seeing this over the network, using a front-end hosted on the same machine as the ollama engine (a base Mac M3 with 8 GB).

Is there any kind of "closing block" that must be sent to tell ollama that the input is complete? (I wonder if my front-end isn't doing something correctly here.)

Running ollama v0.3.0.


@dhiltgen commented on GitHub (Aug 1, 2024):

@kristofer I'm not quite following you, but this issue was tracking a ROCm x86 problem, not a Mac issue, so it seems unrelated. To your question: each prompt is a distinct request. While the system is processing a response the model stays loaded, but as soon as the response finishes the system becomes idle and can unload the model if needed.
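
Concretely, there is no closing block to send: a chat turn is a single HTTP request, and the streamed response itself signals completion with a final chunk whose `done` field is `true`. A minimal sketch (the model name is illustrative; `keep_alive` is the request parameter that controls how long the model stays loaded after the response finishes):

```
# One chat request; the response streams as newline-delimited JSON chunks,
# and the last chunk carries "done": true -- the client sends nothing further.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "hello"}],
  "keep_alive": "5m"
}'
```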

Reference: github-starred/ollama#2811