[GH-ISSUE #9429] phi4-mini error loading model: missing tensor 'output.weight' #68204

Closed
opened 2026-05-04 12:50:23 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @nvillar on GitHub (Feb 28, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9429

What is the issue?

pulling manifest
pulling 3c168af1dea0... 100% ▕███████████████████████████▏ 2.5 GB
pulling 813f53fdc6e5... 100% ▕███████████████████████████▏ 655 B
pulling fa8235e5b48f... 100% ▕███████████████████████████▏ 1.1 KB
pulling 8c2539a423c4... 100% ▕███████████████████████████▏ 411 B
verifying sha256 digest
writing manifest
success
Error: llama runner process has terminated: error loading model: missing tensor 'output.weight'

Relevant log output

```shell
goroutine 14 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0x140001bd560, {0x21, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0x140004965f0, 0x0}, ...)
	/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:851 +0x2ec
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
	/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:968 +0xa94
time=2025-02-28T13:08:37.862-08:00 level=ERROR source=server.go:421 msg="llama runner terminated" error="exit status 2"
time=2025-02-28T13:08:38.052-08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error loading model: missing tensor 'output.weight'"
```

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.12

GiteaMirror added the bug label 2026-05-04 12:50:23 -05:00
Author
Owner

@nvillar commented on GitHub (Feb 28, 2025):

Apologies, I now see that this requires 0.5.13 pre-release.
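For anyone hitting the same error, a minimal sketch of the version check implied by the comment above: compare the installed Ollama version against the 0.5.13 minimum using `sort -V`. The `installed` value below is a placeholder for the reporter's version; in practice you would substitute the output of `ollama -v`.

```shell
# Minimum Ollama version required for phi4-mini, per the comment above.
required="0.5.13"
# Placeholder: substitute the version string reported by `ollama -v`.
installed="0.5.12"

# sort -V orders version strings numerically; if the installed version
# sorts before the required one, it is too old to load this model.
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$required" ]; then
  echo "too old: upgrade Ollama to $required or newer"
else
  echo "ok: $installed satisfies >= $required"
fi
```

With the reporter's 0.5.12 this prints the "too old" branch; after upgrading to the 0.5.13 pre-release the model loads as expected.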


Reference: github-starred/ollama#68204