timed out waiting for llama runner to start - progress 0.00 - #3431

Closed
opened 2025-11-11 15:31:35 -06:00 by GiteaMirror · 0 comments

Originally created by @Mugane on GitHub (Jan 25, 2025).

Bug Report

Installation Method

Docker Desktop version 4.37.1
Docker version 27.4.0 (build bde2b89)
image: ghcr.io/open-webui/open-webui:ollama
"Image": "sha256:a249ccd60c90abd71f930663a0d260eeead1b0426ecdc7ca67528bbd989539e6"
"Created": "2025-01-23T02:03:14.801474831Z"

Docker compose used for build. No errors.
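
The compose file itself is not reproduced here. As a rough single-container equivalent of the setup (a sketch only; volume names and the host port are illustrative, not the actual compose contents):

# Hypothetical docker run equivalent of the compose setup: open-webui:ollama with GPU passthrough.
docker run -d --gpus all -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:ollama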

Environment

  • Open WebUI Version: ghcr.io/open-webui/open-webui:ollama
PS #> docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
ghcr.io/open-webui/open-webui   ollama    a249ccd60c90   2 days ago    11.3GB
  • Ollama: v0.5.7

  • Windows 11 Pro

    • WSL version: 2.3.26.0
    • Kernel version: 5.15.167.4-1
    • WSLg version: 1.0.65
    • MSRDC version: 1.2.5620
    • Direct3D version: 1.611.1-81528511
    • DXCore version: 10.0.26100.1-240331-1435.ge-release
    • Windows version: 10.0.26100.2894
  • Hardware:

    • NVIDIA Quadro RTX 5000, 16 GB VRAM
    • 128 GB system RAM
    • 2 TB NVMe PCIe SSD
    • Intel Core i7 vPro
  • CUDA:

    • nvcc: NVIDIA (R) Cuda compiler driver
    • Copyright (c) 2005-2024 NVIDIA Corporation
    • Built on Wed_Oct_30_01:18:48_Pacific_Daylight_Time_2024
    • Cuda compilation tools, release 12.6, V12.6.85
    • Build cuda_12.6.r12.6/compiler.35059454_0
  • Browser (if applicable): not applicable; the error also occurs from a shell inside the container.

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

The model loads within seconds instead of timing out after 5 minutes.

Actual Behavior:

Can't load anything but the smallest models; a load attempt hangs for 5 minutes and then times out.

Description

Bug Summary:

root@4dfcb71dbd9c:/app/backend# ollama list
NAME                                    ID              SIZE      MODIFIED       
deepseek-r1:1.5b-qwen-distill-q4_K_M    a42b25d8c10a    1.1 GB    27 minutes ago     
deepseek-r1:14b                         ea35dfe18182    9.0 GB    2 days ago        
root@4dfcb71dbd9c:/app/backend# ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M
>>> /bye
root@4dfcb71dbd9c:/app/backend# ollama run deepseek-r1:14b
Error: timed out waiting for llama runner to start - progress 0.00 - 
root@4dfcb71dbd9c:/app/backend# 
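
For completeness, a sketch of extra diagnostics that could be captured from a second shell while the 14b load hangs (assuming nvidia-smi is available inside this container via the NVIDIA runtime; the polling interval is arbitrary):

# In a second shell inside the container while `ollama run deepseek-r1:14b` is stuck:
ollama ps                 # does the scheduler report the model as loading?
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 5    # is VRAM actually filling up?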

Reproduction Details

Steps to Reproduce:

See the description above. Concretely:

  1. Start the open-webui:ollama container with GPU support.
  2. Open a shell in the container and run ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M (loads fine).
  3. Run ollama run deepseek-r1:14b.
  4. The load hangs at progress 0.00 and fails after 5 minutes with "timed out waiting for llama runner to start".

Logs and Screenshots

Browser Console Logs:

N/A (error happens in terminal too)

Docker Container Logs:

2025-01-25 12:27:04 time=2025-01-25T17:27:04.757Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e gpu=GPU-7330fd38-ea59-1617-e285-fe61a2e676b4 parallel=4 available=16006512640 required="10.8 GiB"
2025-01-25 12:27:04 time=2025-01-25T17:27:04.944Z level=INFO source=server.go:104 msg="system memory" total="62.6 GiB" free="59.8 GiB" free_swap="16.0 GiB"
2025-01-25 12:27:04 time=2025-01-25T17:27:04.945Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[14.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="8.9 GiB" memory.weights.repeating="8.3 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
2025-01-25 12:27:04 time=2025-01-25T17:27:04.947Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 6 --parallel 4 --port 40871"
2025-01-25 12:27:04 time=2025-01-25T17:27:04.947Z level=INFO source=sched.go:449 msg="loaded runners" count=1
2025-01-25 12:27:04 time=2025-01-25T17:27:04.947Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
2025-01-25 12:27:04 time=2025-01-25T17:27:04.948Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
2025-01-25 12:27:05 time=2025-01-25T17:27:05.021Z level=INFO source=runner.go:936 msg="starting go runner"
2025-01-25 12:27:05 ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
2025-01-25 12:27:05 ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2025-01-25 12:27:05 ggml_cuda_init: found 1 CUDA devices:
2025-01-25 12:27:05   Device 0: Quadro RTX 5000 with Max-Q Design, compute capability 7.5, VMM: yes
2025-01-25 12:27:05 time=2025-01-25T17:27:05.091Z level=INFO source=runner.go:937 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
2025-01-25 12:27:05 time=2025-01-25T17:27:05.092Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40871"
2025-01-25 12:27:05 time=2025-01-25T17:27:05.201Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
2025-01-25 12:27:05 llama_load_model_from_file: using device CUDA0 (Quadro RTX 5000 with Max-Q Design) - 15265 MiB free
2025-01-25 12:27:07 llama_model_loader: loaded meta data with 26 key-value pairs and 579 tensors from /root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e (version GGUF V3 (latest))
2025-01-25 12:27:07 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2025-01-25 12:27:07 llama_model_loader: - kv   0:                       general.architecture str              = qwen2
2025-01-25 12:27:07 llama_model_loader: - kv   1:                               general.type str              = model
2025-01-25 12:27:07 llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 14B
2025-01-25 12:27:07 llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen
2025-01-25 12:27:07 llama_model_loader: - kv   4:                         general.size_label str              = 14B
2025-01-25 12:27:07 llama_model_loader: - kv   5:                          qwen2.block_count u32              = 48
2025-01-25 12:27:07 llama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072
2025-01-25 12:27:07 llama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 5120
2025-01-25 12:27:07 llama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 13824
2025-01-25 12:27:07 llama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 40
2025-01-25 12:27:07 llama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 8
2025-01-25 12:27:07 llama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 1000000.000000
2025-01-25 12:27:07 llama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000010
2025-01-25 12:27:07 llama_model_loader: - kv  13:                          general.file_type u32              = 15
2025-01-25 12:27:07 llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2
2025-01-25 12:27:07 llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2
2025-01-25 12:27:07 llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
2025-01-25 12:27:07 llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2025-01-25 12:27:07 llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2025-01-25 12:27:07 llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646
2025-01-25 12:27:07 llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643
2025-01-25 12:27:07 llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
2025-01-25 12:27:07 llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
2025-01-25 12:27:07 llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
2025-01-25 12:27:07 llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
2025-01-25 12:27:07 llama_model_loader: - kv  25:               general.quantization_version u32              = 2
2025-01-25 12:27:07 llama_model_loader: - type  f32:  241 tensors
2025-01-25 12:27:07 llama_model_loader: - type q4_K:  289 tensors
2025-01-25 12:27:07 llama_model_loader: - type q6_K:   49 tensors
2025-01-25 12:27:07 llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2025-01-25 12:27:07 llm_load_vocab: special tokens cache size = 22
2025-01-25 12:27:07 llm_load_vocab: token to piece cache size = 0.9310 MB
2025-01-25 12:27:07 llm_load_print_meta: format           = GGUF V3 (latest)
2025-01-25 12:27:07 llm_load_print_meta: arch             = qwen2
2025-01-25 12:27:07 llm_load_print_meta: vocab type       = BPE
2025-01-25 12:27:07 llm_load_print_meta: n_vocab          = 152064
2025-01-25 12:27:07 llm_load_print_meta: n_merges         = 151387
2025-01-25 12:27:07 llm_load_print_meta: vocab_only       = 0
2025-01-25 12:27:07 llm_load_print_meta: n_ctx_train      = 131072
2025-01-25 12:27:07 llm_load_print_meta: n_embd           = 5120
2025-01-25 12:27:07 llm_load_print_meta: n_layer          = 48
2025-01-25 12:27:07 llm_load_print_meta: n_head           = 40
2025-01-25 12:27:07 llm_load_print_meta: n_head_kv        = 8
2025-01-25 12:27:07 llm_load_print_meta: n_rot            = 128
2025-01-25 12:27:07 llm_load_print_meta: n_swa            = 0
2025-01-25 12:27:07 llm_load_print_meta: n_embd_head_k    = 128
2025-01-25 12:27:07 llm_load_print_meta: n_embd_head_v    = 128
2025-01-25 12:27:07 llm_load_print_meta: n_gqa            = 5
2025-01-25 12:27:07 llm_load_print_meta: n_embd_k_gqa     = 1024
2025-01-25 12:27:07 llm_load_print_meta: n_embd_v_gqa     = 1024
2025-01-25 12:27:07 llm_load_print_meta: f_norm_eps       = 0.0e+00
2025-01-25 12:27:07 llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
2025-01-25 12:27:07 llm_load_print_meta: f_clamp_kqv      = 0.0e+00
2025-01-25 12:27:07 llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2025-01-25 12:27:07 llm_load_print_meta: f_logit_scale    = 0.0e+00
2025-01-25 12:27:07 llm_load_print_meta: n_ff             = 13824
2025-01-25 12:27:07 llm_load_print_meta: n_expert         = 0
2025-01-25 12:27:07 llm_load_print_meta: n_expert_used    = 0
2025-01-25 12:27:07 llm_load_print_meta: causal attn      = 1
2025-01-25 12:27:07 llm_load_print_meta: pooling type     = 0
2025-01-25 12:27:07 llm_load_print_meta: rope type        = 2
2025-01-25 12:27:07 llm_load_print_meta: rope scaling     = linear
2025-01-25 12:27:07 llm_load_print_meta: freq_base_train  = 1000000.0
2025-01-25 12:27:07 llm_load_print_meta: freq_scale_train = 1
2025-01-25 12:27:07 llm_load_print_meta: n_ctx_orig_yarn  = 131072
2025-01-25 12:27:07 llm_load_print_meta: rope_finetuned   = unknown
2025-01-25 12:27:07 llm_load_print_meta: ssm_d_conv       = 0
2025-01-25 12:27:07 llm_load_print_meta: ssm_d_inner      = 0
2025-01-25 12:27:07 llm_load_print_meta: ssm_d_state      = 0
2025-01-25 12:27:07 llm_load_print_meta: ssm_dt_rank      = 0
2025-01-25 12:27:07 llm_load_print_meta: ssm_dt_b_c_rms   = 0
2025-01-25 12:27:07 llm_load_print_meta: model type       = 14B
2025-01-25 12:27:07 llm_load_print_meta: model ftype      = Q4_K - Medium
2025-01-25 12:27:07 llm_load_print_meta: model params     = 14.77 B
2025-01-25 12:27:07 llm_load_print_meta: model size       = 8.37 GiB (4.87 BPW) 
2025-01-25 12:27:07 llm_load_print_meta: general.name     = DeepSeek R1 Distill Qwen 14B
2025-01-25 12:27:07 llm_load_print_meta: BOS token        = 151646 '<|begin▁of▁sentence|>'
2025-01-25 12:27:07 llm_load_print_meta: EOS token        = 151643 '<|end▁of▁sentence|>'
2025-01-25 12:27:07 llm_load_print_meta: EOT token        = 151643 '<|end▁of▁sentence|>'
2025-01-25 12:27:07 llm_load_print_meta: PAD token        = 151643 '<|end▁of▁sentence|>'
2025-01-25 12:27:07 llm_load_print_meta: LF token         = 148848 'ÄĬ'
2025-01-25 12:27:07 llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
2025-01-25 12:27:07 llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
2025-01-25 12:27:07 llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
2025-01-25 12:27:07 llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
2025-01-25 12:27:07 llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
2025-01-25 12:27:07 llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
2025-01-25 12:27:07 llm_load_print_meta: EOG token        = 151643 '<|end▁of▁sentence|>'
2025-01-25 12:27:07 llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
2025-01-25 12:27:07 llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
2025-01-25 12:27:07 llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
2025-01-25 12:27:07 llm_load_print_meta: max token length = 256
2025-01-25 12:32:04 [GIN] 2025/01/25 - 17:32:04 | 500 |          5m0s |       127.0.0.1 | POST     "/api/generate"
2025-01-25 12:32:04 time=2025-01-25T17:32:04.998Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
2025-01-25 12:32:10 time=2025-01-25T17:32:10.027Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.027894398 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
2025-01-25 12:32:10 time=2025-01-25T17:32:10.276Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.277679896 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e
2025-01-25 12:32:10 time=2025-01-25T17:32:10.526Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.527272585 model=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e

Screenshots/Screen Recordings (if applicable):

N/A.

Additional Information

It should not take 5 minutes to load ~10 GB from an NVMe SSD into VRAM; at typical NVMe read speeds it should take only a few seconds.
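
As a rough sanity check of that claim, the raw read throughput of the 14b blob can be measured from inside the container (the blob path is taken from the logs above; dd is a coarse measure and a warm page cache will inflate the number):

# Coarse read-throughput test on the blob referenced in the logs.
# ~9 GB at a typical NVMe rate of 2-3 GB/s should finish in a few seconds.
time dd if=/root/.ollama/models/blobs/sha256-6e9f90f02bb3b39b59e81916e8cfce9deb45aeaeb9a54a5be4414486b907dc1e of=/dev/null bs=1M status=progress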

  1. How do I increase the timeout so I can get these models loaded, first of all? (One possible approach is sketched after these questions.)
  2. What can be done to fix the load times?
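
A note on question 1: recent Ollama releases expose an OLLAMA_LOAD_TIMEOUT environment variable that controls how long the server waits for a runner to start (the default corresponds to the 5 minutes seen here). Whether the Ollama bundled in this image picks the variable up from the container environment is an assumption, not something verified here. A shell sketch of passing it through:

# Hypothetical: raise the runner-start timeout to 15 minutes for the bundled Ollama.
# OLLAMA_LOAD_TIMEOUT support and env pass-through by this image are assumptions, not verified.
docker run -d --gpus all -p 3000:8080 \
  -e OLLAMA_LOAD_TIMEOUT=15m \
  -v ollama:/root/.ollama -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:ollama

With docker compose, the same variable would go under the service's environment: block. This only extends the deadline; it does not explain why loading stalls at progress 0.00 in the first place.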

Many thanks!
