Error "timed out waiting for llama runner to start: " on larger models. #2570

Closed
opened 2025-11-12 11:04:36 -06:00 by GiteaMirror · 45 comments

Originally created by @CalvesGEH on GitHub (May 3, 2024).

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I just set up Ollama on a fresh machine and am running into an issue starting larger models.

I am running Ubuntu 22.04.4 LTS with two Nvidia Tesla P40 GPUs, driver version 535.161.08, and CUDA version 12.2.

Small 8B models work great, but when I try something like a 34B or 70B model, I get the error "timed out waiting for llama runner to start: ".

Here are the logs from the "ollama serve" process:

user@hostname:~$ ollama serve
time=2024-05-03T16:26:00.169Z level=INFO source=images.go:828 msg="total blobs: 0"
time=2024-05-03T16:26:00.169Z level=INFO source=images.go:835 msg="total unused blobs removed: 0"
time=2024-05-03T16:26:00.169Z level=INFO source=routes.go:1071 msg="Listening on 127.0.0.1:11434 (version 0.1.33)"
time=2024-05-03T16:26:00.170Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama837848792/runners
time=2024-05-03T16:26:04.596Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-03T16:26:04.596Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-03T16:26:05.377Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama837848792/runners/cuda_v11/libcudart.so.11.0 count=2
time=2024-05-03T16:26:05.377Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/05/03 - 16:26:15 | 200 |       66.48µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/05/03 - 16:26:15 | 404 |      207.93µs |       127.0.0.1 | POST     "/api/show"
time=2024-05-03T16:26:17.456Z level=INFO source=download.go:136 msg="downloading f36b668ebcd3 in 64 297 MB part(s)"
time=2024-05-03T16:27:30.886Z level=INFO source=download.go:178 msg="f36b668ebcd3 part 1 attempt 0 failed: unexpected EOF, retrying in 1s"
time=2024-05-03T16:27:46.457Z level=INFO source=download.go:251 msg="f36b668ebcd3 part 61 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-05-03T16:29:08.175Z level=INFO source=download.go:136 msg="downloading 2e0493f67d0c in 1 59 B part(s)"
time=2024-05-03T16:29:09.864Z level=INFO source=download.go:136 msg="downloading c60122cb2728 in 1 132 B part(s)"
time=2024-05-03T16:29:11.547Z level=INFO source=download.go:136 msg="downloading d5981b4f8e77 in 1 382 B part(s)"
[GIN] 2024/05/03 - 16:30:06 | 200 |         3m50s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/05/03 - 16:30:06 | 200 |    1.142112ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/05/03 - 16:30:06 | 200 |     291.938µs |       127.0.0.1 | POST     "/api/show"
time=2024-05-03T16:30:06.522Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
time=2024-05-03T16:30:06.525Z level=INFO source=gpu.go:101 msg="detected GPUs" library=/tmp/ollama837848792/runners/cuda_v11/libcudart.so.11.0 count=2
time=2024-05-03T16:30:06.525Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T16:30:07.346Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24297.6 MiB" memory.required.full="19193.1 MiB" memory.required.partial="19193.1 MiB" memory.required.kv="384.0 MiB" memory.weights.total="18028.1 MiB" memory.weights.repeating="17823.0 MiB" memory.weights.nonrepeating="205.1 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="348.0 MiB"
time=2024-05-03T16:30:07.347Z level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=49 memory.available="24297.6 MiB" memory.required.full="19193.1 MiB" memory.required.partial="19193.1 MiB" memory.required.kv="384.0 MiB" memory.weights.total="18028.1 MiB" memory.weights.repeating="17823.0 MiB" memory.weights.nonrepeating="205.1 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="348.0 MiB"
time=2024-05-03T16:30:07.347Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T16:30:07.347Z level=INFO source=server.go:289 msg="starting llama server" cmd="/tmp/ollama837848792/runners/cuda_v11/ollama_llama_server --model /home/ptmoraski/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 49 --parallel 1 --port 40909"
time=2024-05-03T16:30:07.348Z level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-03T16:30:07.348Z level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"139735583424512","timestamp":1714753807}
{"build":1,"commit":"952d03d","function":"main","level":"INFO","line":2822,"msg":"build info","tid":"139735583424512","timestamp":1714753807}
{"function":"main","level":"INFO","line":2825,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"139735583424512","timestamp":1714753807,"total_threads":32}
llama_model_loader: loaded meta data with 20 key-value pairs and 435 tensors from /home/ptmoraski/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = codellama
llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   4:                          llama.block_count u32              = 48
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 22016
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   97 tensors
llama_model_loader: - type q4_0:  337 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 48
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 22016
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 34B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 33.74 B
llm_load_print_meta: model size       = 17.74 GiB (4.52 BPW)
llm_load_print_meta: general.name     = codellama
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: PRE token        = 32007 '`*▒'
time=2024-05-03T16:40:07.352Z level=ERROR source=sched.go:346 msg="error loading llama server" error="timed out waiting for llama runner to start: "
[GIN] 2024/05/03 - 16:40:07 | 500 |         10m0s |       127.0.0.1 | POST     "/api/chat"
timed out waiting for llama runner to start: 

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.33

GiteaMirror added the bug label 2025-11-12 11:04:36 -06:00

@CalvesGEH commented on GitHub (May 3, 2024):

Running with OLLAMA_DEBUG=1, I get this log directly after the metadata:

time=2024-05-03T16:50:16.451Z level=DEBUG source=server.go:466 msg="server not yet available" error="health resp: Get \"http://127.0.0.1:38939/health\": read tcp 127.0.0.1:54468->127.0.
time=2024-05-03T17:00:16.258Z level=ERROR source=sched.go:346 msg="error loading llama server" error="timed out waiting for llama runner to start: "
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:349 msg="triggering expiration for failed load" model=/home/user/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:265 msg="runner expired event received" model=/home/user/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:280 msg="got lock to unload" model=/home/user/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc
time=2024-05-03T17:00:16.258Z level=DEBUG source=server.go:895 msg="stopping llama server"
[GIN] 2024/05/03 - 17:00:16 | 500 |         10m0s |       127.0.0.1 | POST     "/api/chat"
time=2024-05-03T17:00:16.258Z level=DEBUG source=server.go:902 msg="llama server stopped"
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:285 msg="runner released" model=/home/user/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:287 msg="sending an unloaded event" model=/home/user/.ollama/models/blobs/sha256-f36b668ebcd329357fac22db35f6414a1c9309307f33d08fe217bbf84b0496cc
time=2024-05-03T17:00:16.258Z level=DEBUG source=sched.go:215 msg="ignoring unload event with no pending requests"
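
For anyone trying to reproduce this, the debug output above comes from stopping any running service and launching the server in the foreground with debug logging enabled (`OLLAMA_DEBUG=1` is the variable used in the comment above):

```sh
# run the server in the foreground with verbose scheduler/runner logging
OLLAMA_DEBUG=1 ollama serve
```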

@asesidaa commented on GitHub (May 6, 2024):

Can confirm this is a regression introduced in v0.1.33; rolling back to v0.1.32 loads large models fine.


@asesidaa commented on GitHub (May 7, 2024):

This may also be caused by the CUDA version.
If I use a locally compiled version (with CUDA 12), it can load large models just fine.
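
For anyone who wants to try the same workaround, here is a sketch of a local build against the system CUDA toolkit, assuming the standard steps from the project's development docs at the time (Go, cmake, and a CUDA 12 toolkit installed; exact steps may vary by version):

```sh
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # builds the embedded llama.cpp runners against the local CUDA toolkit
go build .
./ollama serve      # run the locally built binary instead of the packaged one
```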


@anaser-fts commented on GitHub (May 9, 2024):

I get a similar error when trying to run a custom model using `ollama run`.


@sridhar25-codvo commented on GitHub (May 10, 2024):

I get a similar error too when trying to run the custom mixtral:8x7b model:
![image](https://github.com/ollama/ollama/assets/103447315/07d51ef7-c716-4713-bfd7-e08d4d88fa4d)

:~$ ollama run mixtral:8x7b
pulling manifest
pulling e9e56e8bb5f0... 100% ▕█████████████████████████████████████████████████████▏  26 GB
pulling 43070e2d4e53... 100% ▕█████████████████████████████████████████████████████▏  11 KB
pulling 79b7eca19f7a... 100% ▕█████████████████████████████████████████████████████▏   43 B
pulling ed11eda7790d... 100% ▕█████████████████████████████████████████████████████▏   30 B
pulling 9dec05e9b2db... 100% ▕█████████████████████████████████████████████████████▏  484 B
verifying sha256 digest
writing manifest
removing any unused layers
success
**Error: timed out waiting for llama runner to start:**

@asesidaa commented on GitHub (May 10, 2024):

Have you tried compiling Ollama locally with native CUDA libraries?
That fixes the issue on my end.


@sridhar25-codvo commented on GitHub (May 10, 2024):

> Have you tried compiling Ollama locally with native CUDA libraries? That fixes the issue on my end.

Actually, I run only on CPUs, not on any GPU.


@EthanRBoyle commented on GitHub (May 14, 2024):

$ uname -rsv
Linux 6.1.0-21-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (2024-05-03)

$ ollama -v
ollama version is 0.1.34

$ ollama list
llama3:8b-instruct-q8_0

Same problem here. I only have one model installed so far, so I have not tried it with other models yet.

$ sudo journalctl -u ollama|grep -i error
May 09 17:21:12 v14us1nf3cted ollama[22787]: time=2024-05-09T17:21:12.242-07:00 level=INFO source=gpu.go:193 msg="error looking up nvidia GPU memory" error="nvcuda failed to get primary device context 999"
May 09 18:46:26 v14us1nf3cted ollama[1742]: llama_init_from_gpt_params: error: failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-86599f26f5411350b51f28141e12efb430b1e3faa935901713ec6d32eebfe70a'
May 09 18:51:25 v14us1nf3cted ollama[1742]: time=2024-05-09T18:51:25.139-07:00 level=ERROR source=sched.go:332 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"

@oerlock commented on GitHub (May 15, 2024):

I get a similar error when trying to run `ollama run llama3:70b`:

time=2024-05-15T00:39:40.019Z level=INFO source=server.go:524 msg="waiting for server to become available" status="llm server loading model"
2024-05-15T00:39:40.172587575Z llm_load_vocab: missing pre-tokenizer type, using: 'default'
2024-05-15T00:39:40.172603134Z llm_load_vocab:                                             
2024-05-15T00:39:40.172610906Z llm_load_vocab: ************************************        
2024-05-15T00:39:40.172611981Z llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!        
2024-05-15T00:39:40.172612940Z llm_load_vocab: CONSIDER REGENERATING THE MODEL             
2024-05-15T00:39:40.172613922Z llm_load_vocab: ************************************        
2024-05-15T00:39:40.172614910Z llm_load_vocab:                                             
2024-05-15T00:39:40.284709855Z llm_load_vocab: special tokens definition check successful ( 256/128256 ).
2024-05-15T00:39:40.284724252Z llm_load_print_meta: format           = GGUF V3 (latest)
2024-05-15T00:39:40.284725830Z llm_load_print_meta: arch             = llama
2024-05-15T00:39:40.284726859Z llm_load_print_meta: vocab type       = BPE
2024-05-15T00:39:40.284727829Z llm_load_print_meta: n_vocab          = 128256
2024-05-15T00:39:40.284728777Z llm_load_print_meta: n_merges         = 280147
2024-05-15T00:39:40.284729726Z llm_load_print_meta: n_ctx_train      = 8192
2024-05-15T00:39:40.284730663Z llm_load_print_meta: n_embd           = 8192
2024-05-15T00:39:40.284736695Z llm_load_print_meta: n_head           = 64
2024-05-15T00:39:40.284737704Z llm_load_print_meta: n_head_kv        = 8
2024-05-15T00:39:40.284738646Z llm_load_print_meta: n_layer          = 80
2024-05-15T00:39:40.284739591Z llm_load_print_meta: n_rot            = 128
2024-05-15T00:39:40.284740523Z llm_load_print_meta: n_embd_head_k    = 128
2024-05-15T00:39:40.284741463Z llm_load_print_meta: n_embd_head_v    = 128
2024-05-15T00:39:40.284742398Z llm_load_print_meta: n_gqa            = 8
2024-05-15T00:39:40.284743368Z llm_load_print_meta: n_embd_k_gqa     = 1024
2024-05-15T00:39:40.284744350Z llm_load_print_meta: n_embd_v_gqa     = 1024
2024-05-15T00:39:40.284745337Z llm_load_print_meta: f_norm_eps       = 0.0e+00
2024-05-15T00:39:40.284746277Z llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
2024-05-15T00:39:40.284747201Z llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2024-05-15T00:39:40.284748136Z llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2024-05-15T00:39:40.284759015Z llm_load_print_meta: f_logit_scale    = 0.0e+00
2024-05-15T00:39:40.284759984Z llm_load_print_meta: n_ff             = 28672
2024-05-15T00:39:40.284760955Z llm_load_print_meta: n_expert         = 0
2024-05-15T00:39:40.284761905Z llm_load_print_meta: n_expert_used    = 0
2024-05-15T00:39:40.284762864Z llm_load_print_meta: causal attn      = 1
2024-05-15T00:39:40.284763792Z llm_load_print_meta: pooling type     = 0
2024-05-15T00:39:40.284764720Z llm_load_print_meta: rope type        = 0
2024-05-15T00:39:40.284765675Z llm_load_print_meta: rope scaling     = linear
2024-05-15T00:39:40.284766624Z llm_load_print_meta: freq_base_train  = 500000.0
2024-05-15T00:39:40.284767565Z llm_load_print_meta: freq_scale_train = 1
2024-05-15T00:39:40.284768499Z llm_load_print_meta: n_yarn_orig_ctx  = 8192
2024-05-15T00:39:40.284769440Z llm_load_print_meta: rope_finetuned   = unknown
2024-05-15T00:39:40.284770450Z llm_load_print_meta: ssm_d_conv       = 0
2024-05-15T00:39:40.284771433Z llm_load_print_meta: ssm_d_inner      = 0
2024-05-15T00:39:40.284772374Z llm_load_print_meta: ssm_d_state      = 0
2024-05-15T00:39:40.284773318Z llm_load_print_meta: ssm_dt_rank      = 0
2024-05-15T00:39:40.284774262Z llm_load_print_meta: model type       = 70B
2024-05-15T00:39:40.284775222Z llm_load_print_meta: model ftype      = Q4_0
2024-05-15T00:39:40.284776161Z llm_load_print_meta: model params     = 70.55 B
2024-05-15T00:39:40.284777109Z llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW) 
2024-05-15T00:39:40.284778129Z llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
2024-05-15T00:39:40.284779119Z llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
2024-05-15T00:39:40.284782296Z llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
2024-05-15T00:39:40.284790068Z llm_load_print_meta: LF token         = 128 'Ä'
2024-05-15T00:39:40.284791354Z llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
2024-05-15T00:39:40.312503799Z ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
2024-05-15T00:39:40.312534319Z ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
2024-05-15T00:39:40.312536124Z ggml_cuda_init: found 1 CUDA devices:
2024-05-15T00:39:40.312537204Z   Device 0: NVIDIA GeForce RTX 4070, compute capability 8.9, VMM: yes
2024-05-15T00:39:40.411625674Z llm_load_tensors: ggml ctx size =    0.74 MiB
2024-05-15T00:49:39.906926761Z time=2024-05-15T00:49:39.904Z level=ERROR source=sched.go:339 msg="error loading llama server" error="timed out waiting for llama runner to start: "
2024-05-15T00:49:39.906993800Z [GIN] 2024/05/15 - 00:49:39 | 500 |        10m58s |       127.0.0.1 | POST     "/api/chat"
2024-05-15T00:49:45.107716532Z time=2024-05-15T00:49:45.107Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.202689773
2024-05-15T00:49:45.358057520Z time=2024-05-15T00:49:45.357Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.453055792
2024-05-15T00:49:45.608436901Z time=2024-05-15T00:49:45.608Z level=WARN source=sched.go:507 msg="gpu VRAM usage didn't recover within timeout" seconds=5.703470885

@EthanRBoyle commented on GitHub (May 15, 2024):

Here is something interesting, at least in my case. I did the following:

$ sudo systemctl stop ollama

Followed by:

$ sudo systemctl start ollama

Now Ollama works for me:

$ ollama run llama3:8b-instruct-q8_0
>>> hello
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?

If anyone can explain that, I, a complete average IQ hobbyist, would really appreciate it. For now, I'm going to disable it from starting automagically at system startup to see how that works for me.

Before I finish here: today I installed the default q4 version of the model and it started with no problem; only the q8 gives me a hassle. Thank you.
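
For reference, the restart-and-disable sequence described above maps to standard systemd commands (the `ollama` unit name comes from the Linux installer):

```sh
sudo systemctl stop ollama
sudo systemctl start ollama     # or simply: sudo systemctl restart ollama
sudo systemctl disable ollama   # keep it from starting automatically at boot
```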


@UmutAlihan commented on GitHub (May 19, 2024):

I am having the same error here.

ollama version 0.1.38

$ ollama run llama3:70b-instruct-q8_0
Error: timed out waiting for llama runner to start:

@oerlock commented on GitHub (May 20, 2024):

The timeout for loading models is hard-coded in a few places in the source; you can check this PR: https://github.com/ollama/ollama/pull/4419
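
For context, the waiting logic being discussed is roughly of this shape. This is a minimal illustrative sketch in Go, not the actual ollama source; the fixed 10-minute deadline and the `/health` polling match what the logs in this thread show:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForRunner polls the runner's health endpoint until it answers or a
// fixed deadline expires; on expiry it returns the error seen in the logs.
func waitForRunner(healthURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(healthURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // runner is up and serving
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for llama runner to start")
}

func main() {
	// hypothetical port; the real runner is launched on a random port
	// (e.g. --port 40909 in the logs earlier in this thread)
	if err := waitForRunner("http://127.0.0.1:40909/health", 10*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

A model that genuinely needs longer than the deadline to load (slow disk, heavy CPU offload) will always trip this error, which is the failure mode reported throughout this thread.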


@LukeMauldin commented on GitHub (May 20, 2024):

I am having this same issue, on Ubuntu 24.04 LTS with an Nvidia RTX 4050 and driver 550.78.
Ollama version 0.1.38; earlier versions of Ollama worked as expected.

May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.941-05:00 level=INFO source=server.go:540 msg="waiting for server to become available" status="llm server error"
May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.941-05:00 level=INFO source=server.go:504 msg="waiting for llama runner to start responding"
May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.941-05:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.941-05:00 level=INFO source=server.go:320 msg="starting llama server" cmd="/tmp/ollama4175393976/runners/cpu_avx2/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-00e1317c>
May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.941-05:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=4 memory.available="1.3 GiB" memory.required.full="4.6 GiB" memory.required.partial="1.3 GiB" memory.>
May 19 16:07:02 luke-XPS-15-9530 ollama[2111]: time=2024-05-19T16:07:02.063-05:00 level=INFO source=gpu.go:197 msg="error looking up nvidia GPU memory" error="nvcuda failed to get primary device context 999"

@rpenha commented on GitHub (May 22, 2024):

I'm facing this issue even with small models, like tinyllama. Running `ollama run tinyllama` times out after the hard-coded 10-minute timeout. The GPU (RX 5700 XT, 8 GB, with ROCm 6.1 and HSA_OVERRIDE_GFX_VERSION="10.3.0") runs near 100% usage until the timeout. The model loads instantly on CPU (Intel Xeon E5-2696 v3, 18 cores/36 threads, 64 GB RAM).

I tried different versions of Ollama, building them locally from git, from v0.1.20 to the main branch (commit 955c317cabe1344c9f0ed7a71e33f6b4f0919e5e at the moment), but I got the same behavior with the Docker version (v0.1.38) and the Arch Linux extra package. My current kernel version is 6.9.1-zen1-1-zen.

I'll be glad to help with more information if necessary to address this issue.
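
If it helps others on ROCm, the override mentioned above can be applied when Ollama runs as a systemd service via a standard drop-in (the GFX version value is the one from this comment):

```sh
sudo systemctl edit ollama
# in the drop-in that opens, add:
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
sudo systemctl restart ollama
```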


@dhiltgen commented on GitHub (Jun 2, 2024):

This should be resolved in the latest release. Please upgrade and if you're still seeing timeouts loading large models on slower systems, share your server log and I'll re-open.
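
On Linux, the usual way to upgrade is to re-run the install script from the download page:

```sh
curl -fsSL https://ollama.com/install.sh | sh
```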


@UmutAlihan commented on GitHub (Jun 2, 2024):

Unfortunately, after upgrading to 0.1.41, the same issue continues :'/

$ ollama run llama3:70b-instruct-q8_0
Error: timed out waiting for llama runner to start - progress 0.00 -
$ ollama --version
ollama version is 0.1.41
Server Logs:

llm-api-ollama | 2024/06/02 18:43:45 routes.go:1007: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE:60s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:25 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS: OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
llm-api-ollama | time=2024-06-02T18:43:45.633Z level=INFO source=images.go:729 msg="total blobs: 47"
llm-api-ollama | time=2024-06-02T18:43:45.634Z level=INFO source=images.go:736 msg="total unused blobs removed: 0"
llm-api-ollama | time=2024-06-02T18:43:45.634Z level=INFO source=routes.go:1053 msg="Listening on [::]:33740 (version 0.1.41)"
llm-api-ollama | time=2024-06-02T18:43:45.634Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2612999421/runners
llm-api-ollama | time=2024-06-02T18:43:48.812Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
llm-api-ollama | time=2024-06-02T18:43:49.370Z level=INFO source=types.go:71 msg="inference compute" id=... library=cuda compute=8.6 driver=12.5 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="11.6 GiB"
llm-api-ollama | time=2024-06-02T18:43:49.370Z level=INFO source=types.go:71 msg="inference compute" id=... library=cuda compute=8.6 driver=12.5 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="11.6 GiB"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:14 | 200 | 82.831µs | 192.168.240.1 | GET "/api/version"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:17 | 200 | 22.011µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:17 | 200 | 1.881373ms | 192.168.240.1 | GET "/api/tags"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:22 | 200 | 32.52µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:22 | 200 | 69.875025ms | 192.168.240.1 | POST "/api/show"
llm-api-ollama | [GIN] 2024/06/02 - 18:44:22 | 200 | 547.193µs | 192.168.240.1 | POST "/api/show"
llm-api-ollama | time=2024-06-02T18:44:23.919Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=10 memory.available="11.6 GiB" memory.required.full="71.0 GiB" memory.required.partial="10.9 GiB" memory.required.kv="640.0 MiB" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
llm-api-ollama | time=2024-06-02T18:44:23.921Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=10 memory.available="11.6 GiB" memory.required.full="71.0 GiB" memory.required.partial="10.9 GiB" memory.required.kv="640.0 MiB" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
llm-api-ollama | time=2024-06-02T18:44:23.923Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=23 memory.available="23.1 GiB" memory.required.full="71.3 GiB" memory.required.partial="23.1 GiB" memory.required.kv="640.0 MiB" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="648.0 MiB" memory.graph.partial="2.2 GiB"
llm-api-ollama | time=2024-06-02T18:44:23.925Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=23 memory.available="23.1 GiB" memory.required.full="71.3 GiB" memory.required.partial="23.1 GiB" memory.required.kv="640.0 MiB" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="648.0 MiB" memory.graph.partial="2.2 GiB"
llm-api-ollama | time=2024-06-02T18:44:23.925Z level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama2612999421/runners/cuda_v11/ollama_llama_server --model /home/models/blobs/sha256-b6f248eff2d0c4f85d2f6369a27d99fc75686d67314a0b5d35a93c5aee5dcb14 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 23 --parallel 1 --port 35075"
llm-api-ollama | time=2024-06-02T18:44:23.926Z level=INFO source=sched.go:338 msg="loaded runners" count=1
llm-api-ollama | time=2024-06-02T18:44:23.926Z level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
llm-api-ollama | time=2024-06-02T18:44:23.926Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
llm-api-ollama | INFO [main] build info | build=1 commit="5921b8f" tid="139914827182080" timestamp=1717353863
llm-api-ollama | INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139914827182080" timestamp=1717353863 total_threads=12
llm-api-ollama | INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="35075" tid="139914827182080" timestamp=1717353863
llm-api-ollama | llama_model_loader: loaded meta data with 21 key-value pairs and 723 tensors from /home/models/blobs/sha256-b6f248eff2d0c4f85d2f6369a27d99fc75686d67314a0b5d35a93c5aee5dcb14 (version GGUF V3 (latest))
llm-api-ollama | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llm-api-ollama | llama_model_loader: - kv 0: general.architecture str = llama
llm-api-ollama | llama_model_loader: - kv 1: general.name str = Meta-Llama-3-70B-Instruct
llm-api-ollama | llama_model_loader: - kv 2: llama.block_count u32 = 80
llm-api-ollama | llama_model_loader: - kv 3: llama.context_length u32 = 8192
llm-api-ollama | llama_model_loader: - kv 4: llama.embedding_length u32 = 8192
llm-api-ollama | llama_model_loader: - kv 5: llama.feed_forward_length u32 = 28672
llm-api-ollama | llama_model_loader: - kv 6: llama.attention.head_count u32 = 64
llm-api-ollama | llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llm-api-ollama | llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llm-api-ollama | llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llm-api-ollama | llama_model_loader: - kv 10: general.file_type u32 = 7
llm-api-ollama | llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llm-api-ollama | llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llm-api-ollama | llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llm-api-ollama | llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llm-api-ollama | llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llm-api-ollama | llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llm-api-ollama | llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llm-api-ollama | llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llm-api-ollama | llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llm-api-ollama | llama_model_loader: - kv 20: general.quantization_version u32 = 2
llm-api-ollama | llama_model_loader: - type f32: 161 tensors
llm-api-ollama | llama_model_loader: - type q8_0: 562 tensors
llm-api-ollama | time=2024-06-02T18:44:24.178Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server loading model"
llm-api-ollama | llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm-api-ollama | llm_load_vocab: special tokens cache size = 256
llm-api-ollama | llm_load_vocab: token to piece cache size = 1.5928 MB
llm-api-ollama | llm_load_print_meta: format = GGUF V3 (latest)
llm-api-ollama | llm_load_print_meta: arch = llama
llm-api-ollama | llm_load_print_meta: vocab type = BPE
llm-api-ollama | llm_load_print_meta: n_vocab = 128256
llm-api-ollama | llm_load_print_meta: n_merges = 280147
llm-api-ollama | llm_load_print_meta: n_ctx_train = 8192
llm-api-ollama | llm_load_print_meta: n_embd = 8192
llm-api-ollama | llm_load_print_meta: n_head = 64
llm-api-ollama | llm_load_print_meta: n_head_kv = 8
llm-api-ollama | llm_load_print_meta: n_layer = 80
llm-api-ollama | llm_load_print_meta: n_rot = 128
llm-api-ollama | llm_load_print_meta: n_embd_head_k = 128
llm-api-ollama | llm_load_print_meta: n_embd_head_v = 128
llm-api-ollama | llm_load_print_meta: n_gqa = 8
llm-api-ollama | llm_load_print_meta: n_embd_k_gqa = 1024
llm-api-ollama | llm_load_print_meta: n_embd_v_gqa = 1024
llm-api-ollama | llm_load_print_meta: f_norm_eps = 0.0e+00
llm-api-ollama | llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm-api-ollama | llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm-api-ollama | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm-api-ollama | llm_load_print_meta: f_logit_scale = 0.0e+00
llm-api-ollama | llm_load_print_meta: n_ff = 28672
llm-api-ollama | llm_load_print_meta: n_expert = 0
llm-api-ollama | llm_load_print_meta: n_expert_used = 0
llm-api-ollama | llm_load_print_meta: causal attn = 1
llm-api-ollama | llm_load_print_meta: pooling type = 0
llm-api-ollama | llm_load_print_meta: rope type = 0
llm-api-ollama | llm_load_print_meta: rope scaling = linear
llm-api-ollama | llm_load_print_meta: freq_base_train = 500000.0
llm-api-ollama | llm_load_print_meta: freq_scale_train = 1
llm-api-ollama | llm_load_print_meta: n_yarn_orig_ctx = 8192
llm-api-ollama | llm_load_print_meta: rope_finetuned = unknown
llm-api-ollama | llm_load_print_meta: ssm_d_conv = 0
llm-api-ollama | llm_load_print_meta: ssm_d_inner = 0
llm-api-ollama | llm_load_print_meta: ssm_d_state = 0
llm-api-ollama | llm_load_print_meta: ssm_dt_rank = 0
llm-api-ollama | llm_load_print_meta: model type = 70B
llm-api-ollama | llm_load_print_meta: model ftype = Q8_0
llm-api-ollama | llm_load_print_meta: model params = 70.55 B
llm-api-ollama | llm_load_print_meta: model size = 69.82 GiB (8.50 BPW)
llm-api-ollama | llm_load_print_meta: general.name = Meta-Llama-3-70B-Instruct
llm-api-ollama | llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm-api-ollama | llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm-api-ollama | llm_load_print_meta: LF token = 128 'Ä'
llm-api-ollama | llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm-api-ollama | ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
llm-api-ollama | ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
llm-api-ollama | ggml_cuda_init: found 2 CUDA devices:
llm-api-ollama |   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
llm-api-ollama |   Device 1: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
llm-api-ollama | llm_load_tensors: ggml ctx size = 1.10 MiB
llm-api-ollama | [GIN] 2024/06/02 - 18:46:02 | 200 | 33.241µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:02 | 200 | 157.856186ms | 192.168.240.1 | GET "/api/ps"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:03 | 200 | 26.37µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:03 | 200 | 40.97µs | 192.168.240.1 | GET "/api/ps"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:05 | 200 | 19.6µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:05 | 200 | 24.41µs | 192.168.240.1 | GET "/api/ps"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:06 | 200 | 31.53µs | 192.168.240.1 | HEAD "/"
llm-api-ollama | [GIN] 2024/06/02 - 18:46:06 | 200 | 35.571µs | 192.168.240.1 | GET "/api/ps"
llm-api-ollama | time=2024-06-02T18:49:24.158Z level=ERROR source=sched.go:344 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
llm-api-ollama | [GIN] 2024/06/02 - 18:49:24 | 500 | 5m2s | 192.168.240.1 | POST "/api/chat"
llm-api-ollama | time=2024-06-02T18:49:29.391Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.171164079
llm-api-ollama | time=2024-06-02T18:49:29.756Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.535684461
llm-api-ollama | time=2024-06-02T18:49:30.074Z level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=5.853850574


@rpenha commented on GitHub (Jun 3, 2024):

> This should be resolved in the latest release. Please upgrade and if you're still seeing timeouts loading large models on slower systems, share your server log and I'll re-open.

Same issue, even with tinyllama.

$ ollama run tinyllama
Error: timed out waiting for llama runner to start - progress 1.00 -
$ ollama ps
NAME                    ID              SIZE    PROCESSOR       UNTIL
tinyllama:latest        2644915ede35    1.3 GB  100% GPU        4 minutes from now
$ ollama --version
ollama version is 0.1.41

[ollama.log](https://github.com/user-attachments/files/15540443/ollama.log)


@dhiltgen commented on GitHub (Jun 4, 2024):

@UmutAlihan it looks like we made zero progress loading in 5 minutes and gave up. Can you share some more information about your setup? I see you have dual 3060's. Are you running on a bare-metal OS, within a hypervisor, or in a container? Is there anything interesting/unusual about your storage I/O where the models are stored that could lead to very slow model loading? What sort of CPU and RAM? While the model is loading, do you see any load on the system in tools like `top`, `iostat -dmx 5`, or `free`? (Is it thrashing, paging, etc.?)
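
For anyone who wants a record of this rather than eyeballing `top` by hand: a minimal, Linux-only sketch that polls `/proc/meminfo` while a model loads. The field names are standard kernel ones; nothing here is Ollama-specific, and the 5-second interval is arbitrary.

```go
// memwatch.go - poll /proc/meminfo while a model loads to spot paging/thrashing.
// Run alongside `ollama run <model>` in another terminal; a steadily falling
// SwapFree here suggests the system is paging during the load.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// readMeminfo returns the raw values of the requested /proc/meminfo fields.
func readMeminfo(keys ...string) map[string]string {
	out := map[string]string{}
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return out
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		for _, k := range keys {
			if strings.HasPrefix(sc.Text(), k+":") {
				out[k] = strings.TrimSpace(strings.TrimPrefix(sc.Text(), k+":"))
			}
		}
	}
	return out
}

func main() {
	for {
		m := readMeminfo("MemAvailable", "SwapFree")
		fmt.Printf("%s MemAvailable=%s SwapFree=%s\n",
			time.Now().Format("15:04:05"), m["MemAvailable"], m["SwapFree"])
		time.Sleep(5 * time.Second)
	}
}
```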


@dhiltgen commented on GitHub (Jun 4, 2024):

@rpenha your log is quite short. Let's try another approach.

sudo systemctl stop ollama
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log

Then try to `ollama run tinyllama` in another terminal, and assuming it still fails with a timeout, share your `server.log` so I can see where it's getting stuck.


@rpenha commented on GitHub (Jun 4, 2024):

> @rpenha your log is quite short. Let's try another approach.
>
> sudo systemctl stop ollama
> OLLAMA_DEBUG=1 ollama serve 2>&1 | tee server.log
>
> Then try to `ollama run tinyllama` in another terminal, and assuming it still fails with a timeout, share your server.log so I can see where it's getting stuck.

Sorry, @dhiltgen! My fault on the last comment.

$ OLLAMA_DEBUG=1 HSA_OVERRIDE_GFX_VERSION="10.3.0" ollama serve 2>&1| tee ./server.log
$ ollama run tinyllama
Error: timed out waiting for llama runner to start - progress 1.00 -

[server.log](https://github.com/user-attachments/files/15571048/server.log)


@dhiltgen commented on GitHub (Jun 6, 2024):

@rpenha are you installing our pre-built binaries, or building from source or installing from some other source? The set of loaded libraries doesn't match our official builds. (although this might not have any impact on the defect)

time=2024-06-04T19:25:12.920-03:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu rocm]"

How much system memory do you have? Is your system paging/thrashing when the model is loading?

Can you try loading with mmap disabled to see if that changes behavior?

curl http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Why is the sky blue?",
  "stream": false, "options": {"use_mmap": false}
}'
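
The same request from Go, for anyone driving Ollama programmatically: a minimal sketch that assumes only the documented `/api/generate` endpoint and the `use_mmap` option shown in the curl above.

```go
// nommap.go - sketch: call Ollama's /api/generate with mmap disabled,
// mirroring the curl command above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "tinyllama",
		"prompt": "Why is the sky blue?",
		"stream": false,
		// use_mmap=false forces a plain read of the weights instead of mmap
		"options": map[string]any{"use_mmap": false},
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```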

@rpenha commented on GitHub (Jun 7, 2024):

@dhiltgen, here is my system info:

OS: Arch Linux x86_64
Kernel: 6.9.3-zen1-1-zen
CPU: Intel Xeon E5-2696 v3 (36) @ 3.800GHz
GPU: AMD ATI Radeon RX 5700 XT
Memory: 64195MiB

I tried some approaches:

  • Install using Arch Linux bin packages (ollama and rocm bin packages from extra - this does not work because rocm is on version 6.0)
  • Install using Arch Linux bin packages (ollama and opencl amd rocm bin packages - necessary to use rocm 6.1)
  • Build from ollama source
  • Build from ollama source with rocm support from Arch Linux AUR git packages

All these approaches ran into the timeout issue.

My system has 64GB of RAM and there was no memory paging. GPU usage was around 100%.

Since the last Arch system update, OpenCL and ROCm were upgraded, and I am now getting a segmentation fault error when trying to run the model.

HW Exception by GPU node-1 (Agent handle: 0x44793a10) reason :GPU Hang
time=2024-06-06T22:44:33.763-03:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server not responding"
time=2024-06-06T22:44:39.426-03:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) "
time=2024-06-06T22:44:39.426-03:00 level=DEBUG source=sched.go:347 msg="triggering expiration for failed load" model=/home/rodolfo/.ollama/models/blobs/sha256-2af3b81862c6be03c769683af18efdadb2c33f60ff32ab6f83e42c043d6c7816

I'll try to restore the previous versions and run with memory-mapped files disabled, as you asked.

Thanks!


@UmutAlihan commented on GitHub (Jun 8, 2024):

Interestingly, while trying to load a smaller model with layers distributed across CPU/GPU, I got this error:

**ollama serve logs**

llm_load_tensors: offloading 18 repeating layers to GPU
llm_load_tensors: offloaded 18/33 layers to GPU
llm_load_tensors:        CPU buffer size = 27649.02 MiB
llm_load_tensors:      CUDA0 buffer size =  8320.31 MiB
llm_load_tensors:      CUDA1 buffer size =  6656.25 MiB
llama_new_context_with_model: n_ctx      = 32000
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =  1750.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =  1250.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =  1000.00 MiB
llama_new_context_with_model: KV self size  = 4000.00 MiB, K (f16): 2000.00 MiB, V (f16): 2000.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.14 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2267.50 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 2377648128
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/media/raid/llms/ollama-pulled/blobs/sha256-cf49fe1559d48ab2f68a0add81c3742a2cad7cdfc764fcd215ac942a6bb56ca9'
ERROR [load_model] unable to load model | model="/media/raid/llms/ollama-pulled/blobs/sha256-cf49fe1559d48ab2f68a0add81c3742a2cad7cdfc764fcd215ac942a6bb56ca9" tid="139766788939776" timestamp=1717846167
terminate called without an active exception
time=2024-06-08T14:29:28.134+03:00 level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
time=2024-06-08T14:29:28.385+03:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/media/raid/llms/ollama-pulled/blobs/sha256-cf49fe1559d48ab2f68a0add81c3742a2cad7cdfc764fcd215ac942a6bb56ca9'"

**ollama run logs**

ollama run mistral:7b-instruct-v0.3-fp32
Error: llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/media/raid/llms/ollama-pulled/blobs/sha256-cf49fe1559d48ab2f68a0add81c3742a2cad7cdfc764fcd215ac942a6bb56ca9'
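
Worth spelling out what this log actually says: the weights loaded fine (the CPU, CUDA0, and CUDA1 tensor buffers all allocated), and it's the 32000-token context that sinks it, since the 4000 MiB KV cache plus a ~2.3 GiB compute buffer must still fit on device 0 next to the weights. Requesting a smaller context sidesteps the `cudaMalloc failed: out of memory` without changing the model. A minimal sketch using the documented `num_ctx` option (the 8192 value is just an example, not a recommendation):

```go
// smallctx.go - sketch: retry the same model with a smaller context window,
// since the log above shows the 32000-token KV cache and compute buffers,
// not the weights, are what exhaust VRAM on device 0.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := []byte(`{
		"model": "mistral:7b-instruct-v0.3-fp32",
		"prompt": "hello",
		"stream": false,
		"options": {"num_ctx": 8192}
	}`)
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```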

@UmutAlihan commented on GitHub (Jun 15, 2024):

Considering my setup successfully loads an fp16 llama3 8B model in 187 seconds, fully 100% into the GPU,

I assume I need a larger llama.cpp timeout delay (since a 70B will largely load to CPU, which is even slower).

Is there any way I can provide an argument so that this timeout delay is, say, 30 minutes for testing?


@githublihaha commented on GitHub (Jun 18, 2024):

@UmutAlihan Same question, same idea.


@Talnex commented on GitHub (Jun 18, 2024):

I found that this error may come from here:
https://github.com/ollama/ollama/blob/c9c8c98bf64313154b58a0d75780b351309df4b7/llm/server.go#L539-L543
https://github.com/ollama/ollama/blob/c9c8c98bf64313154b58a0d75780b351309df4b7/llm/server.go#L562-L569
The `stallDuration` is hard-coded to 5m. I set it to 50m, then built from source following https://github.com/ollama/ollama/blob/main/docs/development.md
In my case, I use a network disk on the server cluster, so I need more time to load a 72B model.

It took 11 minutes to load, and then it works :)
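
For anyone who wants to see the shape of the workaround before patching: the runner wait loop keeps a stall timer that is pushed out whenever load progress advances, and aborts with the familiar error once it expires. A rough sketch of that pattern follows; the `stallDuration` name comes from the thread, but the rest is illustrative and the real code in `llm/server.go` differs in detail (the fake loader below just stands in for a slow disk).

```go
// stallwatch.go - sketch of the stall-timer pattern behind the timeout:
// the deadline is pushed out whenever loading makes progress, and the
// load is abandoned once no progress happens for stallDuration.
// Talnex's workaround is simply raising stallDuration before building.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForRunner(progress func() float64) error {
	stallDuration := 5 * time.Minute // the hard-coded value; raise to e.g. 50m for slow storage
	stallTimer := time.Now().Add(stallDuration)
	lastProgress := 0.0
	for {
		if time.Now().After(stallTimer) {
			return errors.New("timed out waiting for llama runner to start")
		}
		if p := progress(); p > lastProgress {
			lastProgress = p
			stallTimer = time.Now().Add(stallDuration) // progress made: push the deadline out
		}
		if lastProgress >= 1.0 {
			return nil // model fully loaded
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	start := time.Now()
	// fake loader: progress goes 0 -> 1 over 30 seconds
	err := waitForRunner(func() float64 { return time.Since(start).Seconds() / 30 })
	fmt.Println("load finished, err =", err)
}
```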


@dhiltgen commented on GitHub (Jun 18, 2024):

For folks seeing out of memory, please give 0.1.45-rc2 a try and see if that improves the behavior. If not, let us know.

For folks seeing timeouts during model loading, please try disabling mmap. We just merged a change that should greatly improve load performance on CUDA + Windows (it will be in the 0.1.45 final release later this week), but Linux still needs work; in the meantime, disabling mmap manually may be a viable workaround, depending on what is causing the slow model loading on your system.


@UmutAlihan commented on GitHub (Jun 18, 2024):

> I found that this error may come from here:
>
> https://github.com/ollama/ollama/blob/c9c8c98bf64313154b58a0d75780b351309df4b7/llm/server.go#L539-L543
>
> https://github.com/ollama/ollama/blob/c9c8c98bf64313154b58a0d75780b351309df4b7/llm/server.go#L562-L569
>
> The `stallDuration` is hard-coded to 5m. I set it to 50m, then built from source following https://github.com/ollama/ollama/blob/main/docs/development.md
> In my case, I need more time to load a 72B model.
> It works!

This fixed the issue flawlessly, thank you very much. It seems all that's required is to allow some older hardware to take its time to load and finish.

I changed the static "5" values to "50" as well and built from source. Now I can load any model that fits my GPU+CPU setup, with a little patience.


@Talnex you just made my day, cheers!
@dhiltgen issue seems solved


@dhiltgen commented on GitHub (Jul 3, 2024):

I believe the issues here are resolved. If anyone is still having troubles, please upgrade to the latest release, and if that doesn't clear it up, share the server log and your scenario and I'll reopen the issue.


@wathuta commented on GitHub (Jul 11, 2024):

I'm using Ollama version 0.2.1 in Docker on a VM (CPU only) with the specs below.
[Screenshot from 2024-07-11 09-18-46]

I'm facing the same problem: Ollama restarts when a prompt is submitted. This is what the error looks like:
[error screenshot]

Here are the logs generated during this process:

logs

llama_new_context_with_model: KV self size = 3072.00 MiB, K (f16): 1536.00 MiB, V (f16): 1536.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.54 MiB
llama_new_context_with_model: CPU compute buffer size = 552.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="126962295940992" timestamp=1720632834
time=2024-07-10T17:33:54.900Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-10T17:33:55.333Z level=INFO source=server.go:609 msg="llama runner started in 22.02 seconds"
[GIN] 2024/07/10 - 17:33:55 | 200 | 22.255653701s | 127.0.0.1 | POST "/api/generate"
Waiting for Ollama server to be active...
[GIN] 2024/07/10 - 17:33:55 | 200 | 19.125742597s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/10 - 17:33:55 | 200 | 31.775µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/10 - 17:33:55 | 200 | 1.257983ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/10 - 17:34:46 | 200 | 23.176178ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/10 - 17:35:46 | 200 | 35.045859ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/10 - 17:39:08 | 200 | 2m40s | 172.20.0.9 | POST "/api/generate"
time=2024-07-10T17:39:09.955Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.7 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="3.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="5.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="552.0 MiB" memory.graph.partial="641.1 MiB"
time=2024-07-10T17:39:09.958Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1597338378/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 43237"
[GIN] 2024/07/10 - 17:39:09 | 499 | 2m22s | 127.0.0.1 | POST "/api/generate"
time=2024-07-10T17:39:09.960Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-10T17:39:09.960Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-10T17:39:09.960Z level=WARN source=server.go:570 msg="client connection closed before server finished loading, aborting load"
time=2024-07-10T17:39:09.960Z level=ERROR source=sched.go:480 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
time=2024-07-10T17:39:09.977Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.7 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="3.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="5.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="552.0 MiB" memory.graph.partial="641.1 MiB"
time=2024-07-10T17:39:09.979Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1597338378/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 42085"
time=2024-07-10T17:39:10.004Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-10T17:39:10.004Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-10T17:39:10.005Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="131619913340800" timestamp=1720633150
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="131619913340800" timestamp=1720633150 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="6" port="42085" tid="131619913340800" timestamp=1720633150
llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 4096
llama_model_loader: - kv 4: llama.embedding_length u32 = 3072
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 32064
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 96
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.pre str = default
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32064] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32064] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32064] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32009
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% for message in messages %}{% if me...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens cache size = 323
llm_load_vocab: token to piece cache size = 0.1690 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32064
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 96
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 96
llm_load_print_meta: n_embd_head_v = 96
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 3072
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 3.82 B
llm_load_print_meta: model size = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 32009 '<|placeholder6|>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOT token = 32007 '<|end|>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.14 MiB
time=2024-07-10T17:39:10.258Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: CPU buffer size = 2210.78 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-10T17:39:18.326Z level=WARN source=server.go:570 msg="client connection closed before server finished loading, aborting load"
time=2024-07-10T17:39:18.326Z level=ERROR source=sched.go:480 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2024/07/10 - 17:39:18 | 499 | 10.022229893s | 127.0.0.1 | POST "/api/generate"
time=2024-07-10T17:39:28.299Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.9 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="3.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="5.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="552.0 MiB" memory.graph.partial="641.1 MiB"
time=2024-07-10T17:39:28.302Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1597338378/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 44127"
time=2024-07-10T17:39:28.307Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-10T17:39:28.307Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-10T17:39:28.309Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="140651068561280" timestamp=1720633168
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="140651068561280" timestamp=1720633168 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="6" port="44127" tid="140651068561280" timestamp=1720633168
llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 4096
llama_model_loader: - kv 4: llama.embedding_length u32 = 3072
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 32064
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 96
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.pre str = default
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32064] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32064] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32064] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32009
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {% for message in messages %}{% if me...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens cache size = 323
llm_load_vocab: token to piece cache size = 0.1690 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32064
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 96
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 96
llm_load_print_meta: n_embd_head_v = 96
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 3072
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 3.82 B
llm_load_print_meta: model size = 2.16 GiB (4.85 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 32009 '<|placeholder6|>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOT token = 32007 '<|end|>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 2210.78 MiB
time=2024-07-10T17:39:28.565Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 3072.00 MiB
llama_new_context_with_model: KV self size = 3072.00 MiB, K (f16): 1536.00 MiB, V (f16): 1536.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.54 MiB
llama_new_context_with_model: CPU compute buffer size = 552.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="140651068561280" timestamp=1720633187
time=2024-07-10T17:39:47.855Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-10T17:39:48.596Z level=INFO source=server.go:609 msg="llama runner started in 20.29 seconds"
[GIN] 2024/07/10 - 17:42:55 | 200 | 3m27s | 172.20.0.9 | POST "/api/generate"
time=2024-07-10T17:42:56.347Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.8 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="3.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="5.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="552.0 MiB" memory.graph.partial="641.1 MiB"
time=2024-07-10T17:42:56.348Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1597338378/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 38229"
time=2024-07-10T17:42:56.349Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-10T17:42:56.349Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-10T17:42:56.349Z level=WARN source=server.go:570 msg="client connection closed before server finished loading, aborting load"
time=2024-07-10T17:42:56.349Z level=ERROR source=sched.go:480 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2024/07/10 - 17:42:56 | 499 | 2m37s | 127.0.0.1 | POST "/api/generate"
time=2024-07-10T17:43:08.992Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.7 GiB]" memory.required.full="5.8 GiB" memory.required.partial="0 B" memory.required.kv="3.0 GiB" memory.required.allocations="[5.8 GiB]" memory.weights.total="5.0 GiB" memory.weights.repeating="5.0 GiB" memory.weights.nonrepeating="77.1 MiB" memory.graph.full="552.0 MiB" memory.graph.partial="641.1 MiB"
time=2024-07-10T17:43:08.994Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama1597338378/runners/cpu_avx2/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 35307"
INFO [main] build info | build=1 commit="a8db2a9" tid="126275201763200" timestamp=1720633389
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="126275201763200" timestamp=1720633389 total_threads=6
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="6" port="35307" tid="126275201763200" timestamp=1720633389
time=2024-07-10T17:43:09.009Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-10T17:43:09.009Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-10T17:43:09.015Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-a2191836aeba86ef910d42a13ca2017facc68217ced42630939507211c2e6dbe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 4096
llama_model_loader: - kv 4: llama.embedding_length u32 = 3072
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 32064
llama_model_loader: - kv 12: llama.rope.dimension_count u32
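
One detail in these logs that isn't a server-side timeout at all: the `499` status codes and the `timed out waiting for llama runner to start: context canceled` lines mean the client hung up before the load finished (the load itself completes in ~20-22 seconds when it's allowed to run), and every retry then restarts the load from scratch. If you're calling the API yourself, give the request a deadline longer than a cold load. A minimal sketch (the model name is a placeholder; the 10-minute deadline is arbitrary):

```go
// patientclient.go - sketch: give the generate request a generous deadline so
// a slow CPU-only first load can finish instead of being cancelled mid-load.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Bound the whole call with a context deadline rather than a short
	// client timeout; cancelling early is what produces the 499s above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	body := []byte(`{"model": "phi3", "prompt": "hello", "stream": false}`)
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"http://localhost:11434/api/generate", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req) // DefaultClient has no Timeout; only ctx bounds the call
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```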


@davidbuzz commented on GitHub (Jul 19, 2024):

Saw this symptom on a machine that didn't have enough system RAM (8 GB), despite having an A40 and lots of video RAM.


@dhiltgen commented on GitHub (Jul 22, 2024):

@wathuta in your logs I see `client connection closed before server finished loading, aborting load` - if the client cancels the connection before the model finishes loading, we abort the load.


@juangon commented on GitHub (Jul 22, 2024):

@dhiltgen is there a way to keep loading the model even after that connection closes? This would be useful for large models that need several minutes on the first load.


@dhiltgen commented on GitHub (Jul 22, 2024):

On Linux/macOS something like this should be sufficient:

```
ollama run llama3 "" > /dev/null &
```

@juangon commented on GitHub (Jul 22, 2024):

Thanks @dhiltgen, is there a way to do this when running through the official Docker container image?


@dhiltgen commented on GitHub (Jul 22, 2024):

@juangon you could `docker exec` into the container, or if you exposed the ports on the host, you could run the ollama CLI on the host, or you could use `curl` from your host to access the API.

https://github.com/ollama/ollama/blob/main/docs/api.md#request-1
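For instance, a minimal preload sketch from the host (assuming the container publishes port 11434 on localhost; per the API docs linked above, a generate request with no prompt just loads the model and returns once it is ready):

```
# Warm up the model without generating anything; returns when the runner is up.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "keep_alive": "10m"
}'
```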


@woxiangbo commented on GitHub (Aug 16, 2024):

I got the same error, could you help fix it? Thanks a lot!

![image](https://github.com/user-attachments/assets/63fe5731-1dad-4698-b0f3-9f32ba65f1c6)
![image](https://github.com/user-attachments/assets/c8c84085-f772-4771-af1b-d05749dbfeec)
![image](https://github.com/user-attachments/assets/150f797d-7be0-4d67-8018-911db06ac56c)
![image](https://github.com/user-attachments/assets/334158ef-51f0-4a6c-93a6-4207a0a990b6)

[error.log](https://github.com/user-attachments/files/16634797/error.log)


@edmundronald commented on GitHub (Aug 26, 2024):

Same error on Mac (M3, 128GB RAM).

% ollama run llama3.1:405b-instruct-q2_K
pulling manifest
pulling e7e1972e5b13... 100% ▕████████████████▏ 149 GB
pulling f000eeb056ec... 100% ▕████████████████▏ 1.4 KB
pulling 0ba8f0e314b4... 100% ▕████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕████████████████▏ 96 B
pulling 20fa4f8f2831... 100% ▕████████████████▏ 487 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Error: timed out waiting for llama runner to start - progress 1.00 -
(base) edmundronald@Edmunds-MBP ~ %


@dhiltgen commented on GitHub (Sep 3, 2024):

@edmundronald you're trying to load a ~150G model into 128G of RAM, so your system is likely paging heavily and stalled. Try loading a smaller model.

@woxiangbo you're loading a very small model, so it seems unrelated to this resolved issue. If you're still having trouble loading qwen2 0.5b please open a new issue and include your server logs.


@pauljasperdev commented on GitHub (Sep 5, 2024):

> @wathuta in your logs I see `client connection closed before server finished loading, aborting load` - if the client cancels the connection before the model finishes loading, we abort the load.

Had the same error using Ollama through LangChain. I first tried solving it via the timeout parameter of the LangChain Ollama wrapper, but that had no effect for me: if model loading and answer generation took more than 60s, it timed out.

I ran the Ollama Docker image on AWS ECS on a g4dn.xlarge EC2 machine. This machine has 16 GB RAM and 16 GB VRAM. I mount an EBS volume with multiple pre-downloaded models into the models directory of the Ollama container so they are already available. I didn't face any issues with small models, but issues came with bigger models. Here's what I discovered:

  1. Not enough memory:

    Even though I had 16 GB VRAM, I could not fit 16 GB models. They should fit easily, as not all layers are offloaded to the GPU. For me, this didn't work because my ECS capacity was provisioned with only 14 of the 16 GB RAM. Apparently, the whole model has to fit into the container's RAM before it can be partly loaded into VRAM.

  2. Not enough free host memory:

    While loading models >14 GB did not work, as described above, I also noticed that the RAM of my container did not recover when it was holding multiple smaller models (~7 GB each). I did not clarify this 100%, but I assume the free RAM on the host machine was too small to free up space, so I was stuck with full RAM.

Tip

I solved both issues by moving to g4dn.2xlarge, which has 32 GB RAM and 16 GB VRAM, and provisioned my capacity with up to 28 GB RAM. This allows loading bigger models, while memory free-up works properly.

Note

Running Ollama through API Gateway leaves a response window of 29s. Loading a model from an EBS GP2 SSD into RAM takes about 1 s/GB, which can lead to timeouts in API Gateway. Pre-loading the model with an empty request body can fix this.

In the end, I still was not able to find a LangChain parameter that would effectively increase the timeout beyond the default ~60s...


@dhiltgen commented on GitHub (Sep 5, 2024):

@pauljasperdev you may want to look at creating your own custom container image based on our official image, or use a custom entrypoint and inline script, so you can add some startup logic to preload your model and wait for that to complete before connecting the client.
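As a rough sketch of such an entrypoint (not an official image; the script name, the model, and the availability of curl inside the container are all assumptions here):

```
#!/bin/sh
# entrypoint.sh (hypothetical): start the server, preload a model, then wait.
ollama serve &

# Block until the API answers.
until curl -sf http://localhost:11434/ > /dev/null; do sleep 1; done

# Empty-prompt generate: loads the model and returns once the runner is ready.
curl -s http://localhost:11434/api/generate -d '{"model": "llama3"}' > /dev/null

# Keep the server process in the foreground.
wait
```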


@JorgeAlberto91MS commented on GitHub (Sep 6, 2024):

Command:

ollama run moondream

Error code:

Error: timed out waiting for llama runner to start - progress 1.00 -

Ollama version

ollama version is 0.3.9

System information

Machine: aarch64
System: Linux
Distribution: Ubuntu 22.04 Jammy Jellyfish
Release: 5.15.136-tegra
Python: 3.10.12

Hardware
Model: NVIDIA Jetson Orin NX Engineering Reference Developer Kit
699-level Part Number: 699-13767-0000-300
P-Number: p3767-0000
Module: NVIDIA Jetson Orin NX (16 GB RAM)
SoC: tegra234
CUDA Arch BIN: 8.7

Libraries
CUDA: 12.2.140
cuDNN: 8.9.4.25
TensorRT: 8.6.2.3
VPI: 3.1.5
Vulkan: 1.3.204
OpenCV: 4.8.0 with CUDA: NO
L4T: 36.3.0
Jetpack: 6.0

Hostname: bcpgrpAI

Logs

journalctl -e -u ollama
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_vocab: special tokens cache size = 944
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_vocab: token to piece cache size = 0.3151 MB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: format = GGUF V3 (latest)
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: arch = phi2
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: vocab type = BPE
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_vocab = 51200
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_merges = 50000
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: vocab_only = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_ctx_train = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_embd = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_layer = 24
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_head = 32
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_head_kv = 32
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_rot = 32
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_swa = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_embd_head_k = 64
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_embd_head_v = 64
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_gqa = 1
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_embd_k_gqa = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_embd_v_gqa = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: f_norm_eps = 1.0e-05
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: f_logit_scale = 0.0e+00
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_ff = 8192
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_expert = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_expert_used = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: causal attn = 1
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: pooling type = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: rope type = 2
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: rope scaling = linear
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: freq_base_train = 10000.0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: freq_scale_train = 1
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: n_ctx_orig_yarn = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: rope_finetuned = unknown
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: ssm_d_conv = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: ssm_d_inner = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: ssm_d_state = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: ssm_dt_rank = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: model type = 1B
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: model ftype = Q4_0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: model params = 1.42 B
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: model size = 788.55 MiB (4.66 BPW)
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: general.name = moondream2
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: BOS token = 50256 '<|endoftext|>'
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: EOS token = 50256 '<|endoftext|>'
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: UNK token = 50256 '<|endoftext|>'
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: LF token = 128 'Ä'
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: EOT token = 50256 '<|endoftext|>'
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_print_meta: max token length = 256
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: ggml ctx size = 0.22 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: offloading 24 repeating layers to GPU
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: offloading non-repeating layers to GPU
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: offloaded 25/25 layers to GPU
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: CPU buffer size = 56.25 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llm_load_tensors: CUDA0 buffer size = 732.30 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: n_ctx = 2048
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: n_batch = 512
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: n_ubatch = 512
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: flash_attn = 0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: freq_base = 10000.0
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: freq_scale = 1
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_kv_cache_init: CUDA0 KV buffer size = 384.00 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: KV self size = 384.00 MiB, K (f16): 192.00 MiB, V (f16): 192.00 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: CUDA_Host output buffer size = 0.20 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: CUDA0 compute buffer size = 160.00 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: CUDA_Host compute buffer size = 8.01 MiB
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: graph nodes = 921
sep 06 15:49:06 bcpgrpAI ollama[11339]: llama_new_context_with_model: graph splits = 2
sep 06 15:54:07 bcpgrpAI ollama[11339]: time=2024-09-06T15:54:07.203-05:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
sep 06 15:54:07 bcpgrpAI ollama[11339]: [GIN] 2024/09/06 - 15:54:07 | 500 | 8m10s | 127.0.0.1 | POST "/api/chat"
sep 06 15:54:12 bcpgrpAI ollama[11339]: time=2024-09-06T15:54:12.307-05:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.104093918 model=/usr/share/ollama/.ollama/models/blobs/sha256-e554c6b9de016673fd2c732e0342967727e9659c>
sep 06 15:54:12 bcpgrpAI ollama[11339]: time=2024-09-06T15:54:12.557-05:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.354000061 model=/usr/share/ollama/.ollama/models/blobs/sha256-e554c6b9de016673fd2c732e0342967727e9659c>
sep 06 15:54:12 bcpgrpAI ollama[11339]: time=2024-09-06T15:54:12.807-05:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.603623609 model=/usr/share/ollama/.ollama/models/blobs/sha256-e554c6b9de016673fd2c732e0342967727e9659c>


@Stef1519 commented on GitHub (Dec 13, 2024):

As I was facing the same error with v0.5.1 until now, on a machine with a rather slow classic HDD, dual Xeons, and 128 GB RAM (plus 2x6 GB NVIDIA mining accelerators), trying to run deepseek-coder:33b and dolphin-mixtral:47b, I found out that setting --keepalive to "10m" solved the issue. I think the "watchdog" that unloads the model after a certain time already starts counting when the model starts to load, not when it is ready. So maybe the watchdog is killing the loading process.
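For reference, the keep-alive duration can be set in a few places (flag, variable, and parameter names as found in recent Ollama releases; verify against your version):

```
# Per session, on the CLI:
ollama run deepseek-coder:33b --keepalive 10m

# Server-wide, via the environment:
OLLAMA_KEEP_ALIVE=10m ollama serve

# Per request, via the API:
curl http://localhost:11434/api/generate -d '{"model": "deepseek-coder:33b", "keep_alive": "10m"}'
```

Newer releases also expose an OLLAMA_LOAD_TIMEOUT environment variable to extend the load deadline itself, which may be the more direct fix if the watchdog really is cutting off slow loads.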


@xhero05 commented on GitHub (Dec 18, 2024):

I'm also on CUDA 12.0 and I'm seeing the same situation.


@xhero05 commented on GitHub (Dec 18, 2024):

> I got the same error, could you help fix it? Thanks a lot!

I'm also on CUDA 12.0 and seeing the same situation.

Reference: github-starred/ollama-ollama#2570