[GH-ISSUE #15227] About v0.20.0 pre-release #71796

Closed
opened 2026-05-05 02:31:31 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @methaqualon on GitHub (Apr 2, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15227

What is the issue?

```shell
root@ollama:~# ollama run gemma4:e4b-it-q8_0
Error: 500 Internal Server Error: llama runner process has terminated: %!w(<nil>)
```

but gemma4:26b-a4b-it-q4_K_M works perfectly.
May be some problem in audio things.

ask for any further info if needed

Relevant log output

```shell
root@ollama:~# ollama run gemma4:e4b-it-q8_0
Error: 500 Internal Server Error: llama runner process has terminated: %!w(<nil>)
root@ollama:~# journalctl -u ollama -f
Apr 02 17:25:11 ollama ollama[577]: time=2026-04-02T17:25:11.416Z level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server loading model"
Apr 02 17:25:11 ollama ollama[577]: panic: failed to load model: unassigned tensor: model.audio_tower.layers.10.feed_forward1.ffw_layer_1.output_max
Apr 02 17:25:11 ollama ollama[577]: goroutine 83 [running]:
Apr 02 17:25:11 ollama ollama[577]: github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0005e43c0)
Apr 02 17:25:11 ollama ollama[577]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:1258 +0x1d4
Apr 02 17:25:11 ollama ollama[577]: created by github.com/ollama/ollama/runner/ollamarunner.(*Server).load in goroutine 10
Apr 02 17:25:11 ollama ollama[577]:         github.com/ollama/ollama/runner/ollamarunner/runner.go:1347 +0x625
Apr 02 17:25:11 ollama ollama[577]: time=2026-04-02T17:25:11.998Z level=INFO source=server.go:1386 msg="waiting for server to become available" status="llm server not responding"
Apr 02 17:25:12 ollama ollama[577]: time=2026-04-02T17:25:12.249Z level=ERROR source=sched.go:567 msg="error loading llama server" error="llama runner process has terminated: %!w(<nil>)"
Apr 02 17:25:12 ollama ollama[577]: [GIN] 2026/04/02 - 17:25:12 | 500 |  1.607744997s |       127.0.0.1 | POST     "/api/generate"
```

OS

Linux

GPU

Intel

CPU

Intel

Ollama version

0.20.0-rc0

GiteaMirror added the bug label 2026-05-05 02:31:31 -05:00
Author
Owner

@methaqualon commented on GitHub (Apr 3, 2026):

Same in the 0.20.0 release version.

<!-- gh-comment-id:4182705633 -->
Author
Owner

@methaqualon commented on GitHub (Apr 6, 2026):

In 0.20.2, re-pulling the model (new id 9dcc35808b42) resolves the problem.

<!-- gh-comment-id:4192141021 -->

Reference: github-starred/ollama#71796