[GH-ISSUE #11127] llama4 error on MSTY #7338

Closed
opened 2026-04-12 19:23:20 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @fedesantamarina on GitHub (Jun 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11127

What is the issue?

llama4 error on MSTY
llama runner process has terminated: error loading
model: done_getting_tensors: wrong number of tensors;
expected 1182, got 628
On Mac M3 96 GB

Relevant log output

```shell
llama runner process has terminated: error loading
model: done_getting_tensors: wrong number of tensors;
expected 1182, got 628
```

OS

MACOS

GPU

METAL

CPU

M3

Ollama version

ollama version is 0.9.2

GiteaMirror added the bug label 2026-04-12 19:23:20 -05:00
Author
Owner

@jmorganca commented on GitHub (Jun 19, 2025):

Hi @fedesantamarina, how was Llama 4 downloaded? Would it be possible to try re-downloading it with `ollama pull llama4`? Let me know if the error persists, and sorry you hit the issue.
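A "wrong number of tensors" error at load time usually indicates an incomplete or corrupted local copy of the model. A minimal recovery sketch of the suggestion above (assuming the model was pulled as `llama4`; `ollama rm` deletes the local copy so the pull fetches fresh blobs):

```shell
# Remove the local llama4 model, then re-download it.
# A fresh pull re-fetches the model blobs, replacing any
# partially downloaded or corrupted tensor data.
ollama rm llama4
ollama pull llama4

# Verify the model loads before retrying from MSTY.
ollama run llama4 "hello"
```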


Reference: github-starred/ollama#7338