[GH-ISSUE #12675] glm4.6 does not work #8407

Closed
opened 2026-04-12 21:04:16 -05:00 by GiteaMirror · 3 comments

Originally created by @bxdx on GitHub (Oct 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12675

What is the issue?

llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /data/ollama/blobs/sha256-579108167db626905e0b57bbba41aacefd34767a007841fb88638ddfd28e652f
goroutine 10 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0006394a0, {0x0, 0x0, 0x0, {0x5a20a53cd840, 0x0, 0x0}, 0xc0006280e0, 0x0}, {0x7ffdd3d794cd, ..}, ...)
	github.com/ollama/ollama/runner/llamarunner/runner.go:747 +0x35f
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 7
	github.com/ollama/ollama/runner/llamarunner/runner.go:833 +0x7ce
time=2025-10-14T23:33:13.540+08:00 level=ERROR source=server.go:426 msg="llamarunner terminated" error="exit status 2"
time=2025-10-14T23:33:13.591+08:00 level=INFO source=sched.go:449 msg="Load failed" model=/data/ollama/blobs/sha256-579108167db626905e0b57bbba41aacefd34767a007841fb88638ddfd28e652f error="llama runner process has terminated: error loading model: missing tensor 'blk.92.nextn.embed_tokens.weight'\nllama_model_load_from_file_impl: failed to load model"
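The decisive line is the missing tensor `blk.92.nextn.embed_tokens.weight`, which appears to be one of GLM's NextN (multi-token prediction) tensors. Since GGUF stores tensor names as plain UTF-8 byte strings inside the file, a raw substring scan of the blob is a quick way to check whether that tensor is present at all. The sketch below is illustrative only — the helper name `blob_contains_tensor` and the chunked-scan approach are not part of Ollama:

```python
import os


def blob_contains_tensor(path: str, tensor_name: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the raw GGUF file contains tensor_name as a byte string.

    Reads the file in chunks, keeping a small overlap so a name that
    straddles a chunk boundary is still found.
    """
    needle = tensor_name.encode("utf-8")
    overlap = len(needle) - 1
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            if needle in tail + chunk:
                return True
            # Carry the last len(needle)-1 bytes into the next iteration.
            tail = (tail + chunk)[-overlap:] if overlap else b""


if __name__ == "__main__":
    # Blob path taken from the log above; adjust for your installation.
    blob = "/data/ollama/blobs/sha256-579108167db626905e0b57bbba41aacefd34767a007841fb88638ddfd28e652f"
    if os.path.exists(blob):
        print(blob_contains_tensor(blob, "blk.92.nextn.embed_tokens.weight"))
```

If the scan returns True, the tensor exists in the file and the failure is in how the runner enumerates tensors (which would be consistent with the 0.12.6 fix noted below); if False, the GGUF itself lacks the NextN layer.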

Relevant log output


OS

Ubuntu 24.04

GPU

No response

CPU

No response

Ollama version

0.12.5

GiteaMirror added the bug label 2026-04-12 21:04:16 -05:00

@bxdx commented on GitHub (Oct 17, 2025):

llama.cpp works well


@rick-github commented on GitHub (Oct 17, 2025):

Post the full log.


@bxdx commented on GitHub (Oct 18, 2025):

> Post the full log.

Fixed in 0.12.6, thanks.

Reference: github-starred/ollama#8407