[GH-ISSUE #9414] Error "timed out waiting for llama runner to start: " on deepseek-r1:671b #6138

Closed
opened 2026-04-12 17:29:08 -05:00 by GiteaMirror · 3 comments

Originally created by @moluzhui on GitHub (Feb 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9414

What is the issue?

I have two GPU servers running deepseek-r1:671b, and one of them has the following error.

CentOS 7, NVIDIA Tesla H200 GPUs (the runner logs below identify the devices as NVIDIA H20), driver version 550.127.08, CUDA version 12.4.

Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope_yarn_log_mul    = 0.1000
Feb 28 17:05:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:26.794+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
Feb 28 17:05:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:05:26 | 500 |         5m16s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:05:32 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:32.057+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.262454675 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:05:35 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:35.242+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=8.447635778 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9

systemd settings

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=root
Group=root
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_MODELS=/data/ollama/models"
Environment="OLLAMA_HOST=10.2.3.4:11434"
Environment="OLLAMA_SCHED_SPREAD=1"
Environment="OLLAMA_KEEP_ALIVE=-1"


[Install]
WantedBy=default.target

Relevant log output

Feb 28 16:55:12 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: expert_gating_func   = sigmoid
Feb 28 16:55:12 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope_yarn_log_mul    = 0.1000
Feb 28 16:55:53 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:55:53 | 200 |      25.516µs |  10.2.3.4 | HEAD     "/"
Feb 28 16:55:53 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:55:53 | 200 |      47.167µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 16:58:22 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:58:22 | 200 |      20.305µs |  10.2.3.4 | HEAD     "/"
Feb 28 16:58:22 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:58:22 | 200 |      29.775µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 16:59:54 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:59:54 | 200 |      24.903µs |    10.210.105.9 | GET      "/"
Feb 28 16:59:56 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 16:59:56 | 200 |      28.055µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:01 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:01 | 200 |      18.166µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:03 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:03 | 200 |      14.657µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:05 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:05 | 200 |     124.889µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:07 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:07 | 200 |      37.205µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:08 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:08 | 200 |       18.13µs |    10.210.105.9 | GET      "/"
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:10.058+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:10 | 500 |          5m5s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:10 | 200 |          4m5s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:10 | 200 |          3m4s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:10 | 200 |          2m1s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:00:10 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:00:10 | 200 |          1m0s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:00:15 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:15.342+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.272926743 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:00:18 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:18.471+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=8.401820121 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:00:21 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:21.076+08:00 level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 library=cuda parallel=4 required="449.4 GiB"
Feb 28 17:00:23 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:23.755+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=13.686477145 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.583+08:00 level=INFO source=server.go:97 msg="system memory" total="1007.3 GiB" free="991.9 GiB" free_swap="0 B"
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.584+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=62 layers.offload=62 layers.split=8,8,8,8,8,8,7,7 memory.available="[94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB 94.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="449.4 GiB" memory.required.partial="449.4 GiB" memory.required.kv="38.1 GiB" memory.required.allocations="[54.7 GiB 54.7 GiB 54.7 GiB 61.3 GiB 61.3 GiB 55.3 GiB 53.7 GiB 53.7 GiB]" memory.weights.total="413.6 GiB" memory.weights.repeating="412.9 GiB" memory.weights.nonrepeating="725.0 MiB" memory.graph.full="3.0 GiB" memory.graph.partial="3.0 GiB"
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.585+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/bin/ollama runner --model /data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 --ctx-size 8192 --batch-size 512 --n-gpu-layers 62 --threads 96 --parallel 4 --tensor-split 8,8,8,8,8,8,7,7 --port 7793"
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.585+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.585+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.585+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Feb 28 17:00:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:26.601+08:00 level=INFO source=runner.go:932 msg="starting go runner"
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: ggml_cuda_init: found 8 CUDA devices:
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 0: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 1: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 2: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 3: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 4: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 5: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 6: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:27 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: Device 7: NVIDIA H20, compute capability 9.0, VMM: yes
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v12/libggml-cuda.so
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:28.743+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=96
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:28.744+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:7793"
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:00:28.879+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA0 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA1 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA2 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA3 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA4 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA5 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA6 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:28 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_load_model_from_file: using device CUDA7 (NVIDIA H20) - 96943 MiB free
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: loaded meta data with 42 key-value pairs and 1025 tensors from /data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 (version GGUF V3 (latest))
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   1:                               general.type str              = model
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   2:                         general.size_label str              = 256x20B
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   3:                      deepseek2.block_count u32              = 61
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   4:                   deepseek2.context_length u32              = 163840
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   5:                 deepseek2.embedding_length u32              = 7168
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   6:              deepseek2.feed_forward_length u32              = 18432
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   7:             deepseek2.attention.head_count u32              = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   8:          deepseek2.attention.head_count_kv u32              = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv   9:                   deepseek2.rope.freq_base f32              = 10000.000000
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  10: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  11:                deepseek2.expert_used_count u32              = 8
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  12:        deepseek2.leading_dense_block_count u32              = 3
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  13:                       deepseek2.vocab_size u32              = 129280
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  14:            deepseek2.attention.q_lora_rank u32              = 1536
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  15:           deepseek2.attention.kv_lora_rank u32              = 512
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  16:             deepseek2.attention.key_length u32              = 192
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  17:           deepseek2.attention.value_length u32              = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  18:       deepseek2.expert_feed_forward_length u32              = 2048
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  19:                     deepseek2.expert_count u32              = 256
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  20:              deepseek2.expert_shared_count u32              = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  21:             deepseek2.expert_weights_scale f32              = 2.500000
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  22:              deepseek2.expert_weights_norm bool             = true
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  23:               deepseek2.expert_gating_func u32              = 2
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  24:             deepseek2.rope.dimension_count u32              = 64
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  25:                deepseek2.rope.scaling.type str              = yarn
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  26:              deepseek2.rope.scaling.factor f32              = 40.000000
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  27: deepseek2.rope.scaling.original_context_length u32              = 4096
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  28: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = deepseek-v3
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [132B blob data]
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  36:            tokenizer.ggml.padding_token_id u32              = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  37:               tokenizer.ggml.add_bos_token bool             = true
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  38:               tokenizer.ggml.add_eos_token bool             = false
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  40:               general.quantization_version u32              = 2
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - kv  41:                          general.file_type u32              = 15
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - type  f32:  361 tensors
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - type q4_K:  606 tensors
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llama_model_loader: - type q6_K:   58 tensors
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_vocab: special tokens cache size = 818
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_vocab: token to piece cache size = 0.8223 MB
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: format           = GGUF V3 (latest)
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: arch             = deepseek2
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: vocab type       = BPE
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_vocab          = 129280
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_merges         = 127741
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: vocab_only       = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_ctx_train      = 163840
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_embd           = 7168
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_layer          = 61
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_head           = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_head_kv        = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_rot            = 64
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_swa            = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_embd_head_k    = 192
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_embd_head_v    = 128
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_gqa            = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_embd_k_gqa     = 24576
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_embd_v_gqa     = 16384
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_ff             = 18432
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_expert         = 256
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_expert_used    = 8
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: causal attn      = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: pooling type     = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope type        = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope scaling     = yarn
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: freq_base_train  = 10000.0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: freq_scale_train = 0.025
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope_finetuned   = unknown
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: ssm_d_conv       = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: ssm_d_inner      = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: ssm_d_state      = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: ssm_dt_rank      = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: model type       = 671B
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: model ftype      = Q4_K - Medium
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: model params     = 671.03 B
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: model size       = 376.65 GiB (4.82 BPW)
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: general.name     = n/a
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: BOS token        = 0 '<|begin▁of▁sentence|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: EOS token        = 1 '<|end▁of▁sentence|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: EOT token        = 1 '<|end▁of▁sentence|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: PAD token        = 1 '<|end▁of▁sentence|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: LF token         = 131 'Ä'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: FIM PRE token    = 128801 '<|fim▁begin|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: FIM SUF token    = 128800 '<|fim▁hole|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: FIM MID token    = 128802 '<|fim▁end|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: EOG token        = 1 '<|end▁of▁sentence|>'
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: max token length = 256
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_layer_dense_lead   = 3
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_lora_q             = 1536
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_lora_kv            = 512
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_ff_exp             = 2048
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: n_expert_shared      = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: expert_weights_scale = 2.5
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: expert_weights_norm  = 1
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: expert_gating_func   = sigmoid
Feb 28 17:00:29 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: llm_load_print_meta: rope_yarn_log_mul    = 0.1000
Feb 28 17:05:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:26.794+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 0.00 - "
Feb 28 17:05:26 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:05:26 | 500 |         5m16s |    10.219.32.13 | POST     "/api/chat"
Feb 28 17:05:32 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:32.057+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.262454675 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:05:35 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:35.242+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=8.447635778 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:05:37 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: time=2025-02-28T17:05:37.680+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=10.885460460000001 model=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9
Feb 28 17:08:51 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:08:51 | 200 |      35.641µs |  10.2.3.4 | HEAD     "/"
Feb 28 17:08:51 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:08:51 | 200 |      26.664µs |  10.2.3.4 | GET      "/api/ps"
Feb 28 17:09:30 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:09:30 | 200 |      41.362µs |  10.2.3.4 | HEAD     "/"
Feb 28 17:09:30 iZ0jl5att67k7fqmbzp4j2Z ollama[22199]: [GIN] 2025/02/28 - 17:09:30 | 200 |    1.291793ms |  10.2.3.4 | GET      "/api/tags"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-12 17:29:08 -05:00

@rick-github commented on GitHub (Feb 28, 2025):

Set OLLAMA_LOAD_TIMEOUT=30m in the server environment (see https://github.com/ollama/ollama/blob/98d44fa39d22c9c6f86fb964dd3bb13a38356371/envconfig/config.go#L248).

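The 5m5s and 5m16s request durations in the logs line up with Ollama's default load timeout of five minutes, which a ~377 GiB model can easily exceed on a cold load. A minimal sketch of applying this fix as a systemd drop-in (the drop-in path and override mechanism are standard systemd, not anything Ollama-specific; ollama.service is assumed to be the unit shown above):

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_LOAD_TIMEOUT=30m"

Then reload and restart: systemctl daemon-reload && systemctl restart ollama.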

@moluzhui commented on GitHub (Feb 28, 2025):

> Set OLLAMA_LOAD_TIMEOUT=30m in the server environment.

Is this an intermittent issue? Another server of the same type doesn't have this problem.


@rick-github commented on GitHub (Feb 28, 2025):

Depends on disk speed, PCI bandwidth, VRAM writes, other processes, block caching, etc.

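One way to narrow this down is to compare cold-read throughput of the model blob on the two servers. A rough sketch using standard tools (the blob path is copied from the logs above; dropping the page cache requires root and forces the next read to come from disk rather than RAM):

sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/data/ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 of=/dev/null bs=1M count=8192

dd prints the transfer rate when it finishes; if the failing server reads at a fraction of the healthy one's rate, slow storage (or the longer timeout above) is the likely answer, otherwise look at the other factors listed.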

Reference: github-starred/ollama#6138