[GH-ISSUE #9042] ollama create suddenly stops #5885

Closed
opened 2026-04-12 17:13:15 -05:00 by GiteaMirror · 1 comment

Originally created by @cicapatak on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9042

Originally assigned to: @pdevine on GitHub.

What is the issue?

I ran a full fine-tune on Llama 3.2 using MLX.
When it finished, I tried to run `ollama create` using the Modelfile I had created earlier. This Modelfile worked wonderfully for LoRA fine-tunes of the same base model.
Now, after the full fine-tune, the creation process starts normally, prints "converting model", then "converting adapter", and then nothing. It just stops.

Relevant log output

```shell
panic: runtime error: index out of range [1] with length 1

goroutine 49 [running]:
github.com/ollama/ollama/convert.(*llamaAdapter).repack(0x140003b2360, {0x140003b2480, 0x1c}, {0x14000e16000, 0xc00, 0xc00}, {0x1400012f0e0, 0x1, 0x801?})
	github.com/ollama/ollama/convert/convert_llama_adapter.go:74 +0x430
github.com/ollama/ollama/convert.safetensor.WriteTo({{0x103bb4060, 0x140005021a0}, {0x1037dcbe4, 0x14}, {0x1400012f0c8, 0x4}, 0x1476521fe, 0x1800, 0x140004ac9c0}, {0x103bb3d88, ...})
	github.com/ollama/ollama/convert/reader_safetensors.go:144 +0x4b4
github.com/ollama/ollama/llm.ggufWriteTensor({0x103bb8368, 0x14000444030}, {{0x140003b2480, 0x1c}, 0x0, 0x0, {0x1400012f0e0, 0x1, 0x1}, {0x12e374898, ...}}, ...)
	github.com/ollama/ollama/llm/gguf.go:656 +0x14c
github.com/ollama/ollama/llm.WriteGGUF({0x103bb8368, 0x14000444030}, 0x14000ddc9c0, {0x140006e7508, 0xfc, 0x12e})
	github.com/ollama/ollama/llm/gguf.go:556 +0x548
github.com/ollama/ollama/convert.AdapterParameters.writeFile(...)
	github.com/ollama/ollama/convert/convert.go:87
github.com/ollama/ollama/convert.ConvertAdapter({0x103bb4060, 0x140005021a0}, {0x103bb8368, 0x14000444030}, 0x14000459830)
	github.com/ollama/ollama/convert/convert.go:156 +0x2ec
github.com/ollama/ollama/server.convertFromSafetensors(0x14000389380, {0x14000526058, 0x1, 0x1}, 0x1, 0x14000231900)
	github.com/ollama/ollama/server/create.go:255 +0x2e0
github.com/ollama/ollama/server.convertModelFromFiles(0x14000389380, {0x14000526058, 0x1, 0x1}, 0x1, 0x14000231900)
	github.com/ollama/ollama/server/create.go:151 +0x178
github.com/ollama/ollama/server.(*Server).CreateHandler.func1()
	github.com/ollama/ollama/server/create.go:105 +0x564
created by github.com/ollama/ollama/server.(*Server).CreateHandler in goroutine 43
	github.com/ollama/ollama/server/create.go:62 +0x5a8
```
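The top frame points at `(*llamaAdapter).repack` in `convert_llama_adapter.go:74` indexing element `[1]` of a slice of length 1 — consistent with the converter reading a tensor whose shape slice has only one dimension (a 1-D tensor, e.g. a bias or norm weight from a full fine-tune, where a LoRA adapter would only contain 2-D low-rank matrices). A minimal sketch of how that class of panic arises in Go (`rowsCols` is a hypothetical helper for illustration, not Ollama's actual code):

```go
package main

import "fmt"

// rowsCols is a hypothetical stand-in for repack-style code that assumes
// every tensor is at least 2-D. Indexing shape[1] panics when the shape
// slice has length 1, producing exactly the error in the log above.
func rowsCols(shape []uint64) (uint64, uint64) {
	return shape[0], shape[1]
}

func main() {
	// A 2-D weight matrix works as expected.
	r, c := rowsCols([]uint64{4096, 4096})
	fmt.Println(r, c) // 4096 4096

	// A 1-D tensor triggers the runtime panic; recover so we can print it.
	defer func() {
		if err := recover(); err != nil {
			fmt.Println("recovered:", err)
			// recovered: runtime error: index out of range [1] with length 1
		}
	}()
	rowsCols([]uint64{4096})
}
```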

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:13:15 -05:00

@pdevine commented on GitHub (Mar 16, 2025):

@cicapatak sorry for the late reply. Do you still have the Modelfile so I can try to debug this? From your stack trace it's definitely getting caught trying to read your LoRA adapter.

Reference: github-starred/ollama#5885