[GH-ISSUE #9436] could not run phi4-mini:3.8b-fp16 #6151

Closed
opened 2026-04-12 17:30:04 -05:00 by GiteaMirror · 2 comments

Originally created by @husy8 on GitHub (Mar 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9436

What is the issue?

I got a runtime error with the following:

(base) user@localhost:~$ ollama run phi4-mini:3.8b-fp16
Error: llama runner process has terminated: error loading model: missing tensor 'output.weight'
llama_load_model_from_file: failed to load model

The model was downloaded from https://ollama.com/library/phi4-mini:3.8b-fp16; its last update time should have been around 2025-03-01 04:00 UTC.
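Since a truncated or corrupt download can also surface as a missing-tensor error, one quick sanity check is to re-hash the blob and compare it against the digest embedded in its filename (a sketch, not part of the original report; the blob path is the one from the log below, and it assumes Ollama's content-addressed `sha256-<digest>` naming):

```shell
# Ollama stores model blobs under content-addressed filenames (sha256-<digest>),
# so re-hashing the file and comparing to the name rules out a corrupt download.
blob=/usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97
actual="$(sha256sum "$blob" | cut -d' ' -f1)"
expected="${blob##*sha256-}"
if [ "$actual" = "$expected" ]; then
  echo "blob intact: file matches its digest"
else
  echo "blob corrupt: re-pull with 'ollama pull phi4-mini:3.8b-fp16'"
fi
```

If the digest matches (as it presumably did here), the file itself is fine and the failure lies in how the runner interprets it.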

Relevant log output

Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.132+08:00 level=WARN source=ggml.go:132 msg="key not found" key=phi3.attention.key_length default=128
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.132+08:00 level=WARN source=ggml.go:132 msg="key not found" key=phi3.attention.value_length default=128
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.132+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97 gpu=GPU-09638d0b-28bf-bec6-5473-76a11a69be3a parallel=4 available=12346589184 required="9.3 GiB"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.272+08:00 level=INFO source=server.go:97 msg="system memory" total="31.2 GiB" free="29.0 GiB" free_swap="0 B"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=WARN source=ggml.go:132 msg="key not found" key=phi3.attention.key_length default=128
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=WARN source=ggml.go:132 msg="key not found" key=phi3.attention.value_length default=128
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=INFO source=server.go:130 msg=offload library=cuda layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[11.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.3 GiB" memory.required.partial="9.3 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[9.3 GiB]" memory.weights.total="7.0 GiB" memory.weights.repeating="5.9 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="512.0 MiB" memory.graph.partial="512.0 MiB"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=INFO source=server.go:380 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 6 --parallel 4 --port 43815"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=INFO source=server.go:557 msg="waiting for llama runner to start responding"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.273+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server error"
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.283+08:00 level=INFO source=runner.go:932 msg="starting go runner"
Mar 01 12:45:30 localhost ollama[3824]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Mar 01 12:45:30 localhost ollama[3824]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Mar 01 12:45:30 localhost ollama[3824]: ggml_cuda_init: found 1 CUDA devices:
Mar 01 12:45:30 localhost ollama[3824]:   Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
Mar 01 12:45:30 localhost ollama[3824]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Mar 01 12:45:30 localhost ollama[3824]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.314+08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=6
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.315+08:00 level=INFO source=runner.go:993 msg="Server listening on 127.0.0.1:43815"
Mar 01 12:45:30 localhost ollama[3824]: llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3060) - 11774 MiB free
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: loaded meta data with 36 key-value pairs and 196 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97 (version GGUF V3 (latest))
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   0:                       general.architecture str              = phi3
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   1:              phi3.rope.scaling.attn_factor f32              = 1.190238
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   2:                               general.type str              = model
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   3:                               general.name str              = Phi 4 Mini Instruct
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   4:                           general.finetune str              = instruct
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   5:                           general.basename str              = Phi-4
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   6:                         general.size_label str              = mini
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   7:                            general.license str              = mit
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/microsoft/Phi-...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv   9:                               general.tags arr[str,3]       = ["nlp", "code", "text-generation"]
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  10:                          general.languages arr[str,24]      = ["multilingual", "ar", "zh", "cs", "d...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  11:                        phi3.context_length u32              = 131072
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  12:  phi3.rope.scaling.original_context_length u32              = 4096
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  13:                      phi3.embedding_length u32              = 3072
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  14:                   phi3.feed_forward_length u32              = 8192
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  15:                           phi3.block_count u32              = 32
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  16:                  phi3.attention.head_count u32              = 24
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  17:               phi3.attention.head_count_kv u32              = 8
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  18:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  19:                  phi3.rope.dimension_count u32              = 96
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  20:                        phi3.rope.freq_base f32              = 10000.000000
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  21:                          general.file_type u32              = 1
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  22:              phi3.attention.sliding_window u32              = 262144
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = gpt-4o
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,200064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,200064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,199742]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "e r", ...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 199999
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 199999
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 199999
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 199999
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  33:               tokenizer.ggml.add_eos_token bool             = false
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  34:                    tokenizer.chat_template str              = {% for message in messages %}{% if me...
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - kv  35:               general.quantization_version u32              = 2
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - type  f32:   67 tensors
Mar 01 12:45:30 localhost ollama[3824]: llama_model_loader: - type  f16:  129 tensors
Mar 01 12:45:30 localhost ollama[3824]: time=2025-03-01T12:45:30.525+08:00 level=INFO source=server.go:591 msg="waiting for server to become available" status="llm server loading model"
Mar 01 12:45:30 localhost ollama[3824]: llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_vocab: special tokens cache size = 12
Mar 01 12:45:30 localhost ollama[3824]: llm_load_vocab: token to piece cache size = 1.3333 MB
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: format           = GGUF V3 (latest)
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: arch             = phi3
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: vocab type       = BPE
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_vocab          = 200064
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_merges         = 199742
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: vocab_only       = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_ctx_train      = 131072
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_embd           = 3072
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_layer          = 32
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_head           = 24
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_head_kv        = 8
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_rot            = 96
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_swa            = 262144
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_embd_head_k    = 128
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_embd_head_v    = 128
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_gqa            = 3
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_embd_k_gqa     = 1024
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_embd_v_gqa     = 1024
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_ff             = 8192
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_expert         = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_expert_used    = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: causal attn      = 1
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: pooling type     = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: rope type        = 2
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: rope scaling     = linear
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: freq_base_train  = 10000.0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: freq_scale_train = 1
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: n_ctx_orig_yarn  = 4096
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: rope_finetuned   = unknown
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: ssm_d_conv       = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: ssm_d_inner      = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: ssm_d_state      = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: ssm_dt_rank      = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: ssm_dt_b_c_rms   = 0
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: model type       = 3B
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: model ftype      = F16
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: model params     = 3.84 B
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: model size       = 7.15 GiB (16.00 BPW)
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: general.name     = Phi 4 Mini Instruct
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: BOS token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: EOS token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: EOT token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: UNK token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: PAD token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: LF token         = 128 'Ä'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: EOG token        = 199999 '<|endoftext|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: EOG token        = 200020 '<|end|>'
Mar 01 12:45:30 localhost ollama[3824]: llm_load_print_meta: max token length = 256
Mar 01 12:45:31 localhost ollama[3824]: llama_model_load: error loading model: missing tensor 'output.weight'
Mar 01 12:45:31 localhost ollama[3824]: llama_load_model_from_file: failed to load model
Mar 01 12:45:31 localhost ollama[3824]: panic: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97
Mar 01 12:45:31 localhost ollama[3824]: goroutine 50 [running]:
Mar 01 12:45:31 localhost ollama[3824]: github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0005c8000, {0x21, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0xc000116020, 0x0}, ...)
Mar 01 12:45:31 localhost ollama[3824]:         github.com/ollama/ollama/runner/llamarunner/runner.go:851 +0x38d
Mar 01 12:45:31 localhost ollama[3824]: created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
Mar 01 12:45:31 localhost ollama[3824]:         github.com/ollama/ollama/runner/llamarunner/runner.go:968 +0xcd5
Mar 01 12:45:31 localhost ollama[3824]: time=2025-03-01T12:45:31.149+08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
Mar 01 12:45:31 localhost ollama[3824]: time=2025-03-01T12:45:31.279+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: missing tensor 'output.weight'\nllama_load_model_from_file: failed to load model"
Mar 01 12:45:31 localhost ollama[3824]: [GIN] 2025/03/01 - 12:45:31 | 500 |  1.351642665s |       127.0.0.1 | POST     "/api/generate"
Mar 01 12:45:36 localhost ollama[3824]: time=2025-03-01T12:45:36.428+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.149761061 model=/usr/share/ollama/.ollama/models/blobs/sha256-e7bb32183dad1cc57730edf523bd6ac18716005bb579384d279e029e63828f97

OS

Linux localhost 6.1.0-31-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.128-1 (2025-02-07) x86_64 GNU/Linux

GPU

NVIDIA RTX 3060 x1

CPU

AMD Ryzen 5 5500

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-12 17:30:04 -05:00

@mswcap commented on GitHub (Mar 1, 2025):

You need to download the rc version of Ollama, and then it will run just fine. The Phi4-mini model page states it; see the screenshot.

![Image](https://github.com/user-attachments/assets/8ed9c723-cf1b-41d0-9da0-79efb0214d6e)
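For anyone checking whether their install is affected, the installed version can be compared against the minimum the model page names before re-pulling (a sketch; `0.5.13` is used here as a stand-in for the required version — substitute whatever the phi4-mini page actually states):

```shell
# Compare the installed Ollama version to a required minimum using sort -V.
# 0.5.13 is a stand-in; substitute the version the model page names.
required="0.5.13"
installed="$(ollama -v 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"
oldest="$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$oldest" != "$required" ]; then
  echo "Ollama $installed predates $required; upgrade with:"
  echo "  curl -fsSL https://ollama.com/install.sh | sh"
fi
```

The reporter's 0.5.12 sorts before 0.5.13 under `sort -V`, so this check would have flagged the mismatch before the model was run.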


@husy8 commented on GitHub (Mar 1, 2025):

> You need to download the rc version of Ollama, and then it will run just fine. The Phi4-mini model page states it; see the screenshot.
>
> ![Image](https://github.com/user-attachments/assets/8ed9c723-cf1b-41d0-9da0-79efb0214d6e)

Thank you! I totally missed that note

Reference: github-starred/ollama#6151