[GH-ISSUE #9984] Add support for array for head count GGUF KV #32301

Closed
opened 2026-04-22 13:25:41 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @ngxson on GitHub (Mar 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9984

Originally assigned to: @drifkin on GitHub.

What is the issue?

Ref bug: https://huggingface.co/bartowski/nvidia_Llama-3_3-Nemotron-Super-49B-v1-GGUF/discussions/3

Some architectures store head_count and head_count_kv as an array of integers, because each layer of the model can have a different number of heads.

(Screenshot attached to the original GitHub issue.)

Link to relevant LOC: https://github.com/ollama/ollama/blob/131f0355a59f4840b057fb8f3c2e59e456f91041/fs/ggml/ggml.go#L55

However, since ollama only supports Uint for these KV entries, EstimateGPULayers currently panics on such models. Please see the attached log.
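One way to tolerate both encodings is to branch on the concrete type behind the KV value. The sketch below is purely illustrative (headCountForLayer is a hypothetical helper, not ollama's actual API, and the real KV holds a *ggml.array rather than a plain slice), but it shows the shape of a fix: accept either a scalar applied to every layer or a per-layer array.

```go
package main

import "fmt"

// headCountForLayer is a hypothetical sketch of a KV accessor that tolerates
// both encodings of attention.head_count found in GGUF files: a single
// uint32 shared by all layers, or one uint32 per layer.
func headCountForLayer(v any, layer int) (uint32, error) {
	switch x := v.(type) {
	case uint32:
		// Scalar form: every layer shares the same head count.
		return x, nil
	case []uint32:
		// Array form: one head count per layer.
		if layer < 0 || layer >= len(x) {
			return 0, fmt.Errorf("layer %d out of range (%d layers)", layer, len(x))
		}
		return x[layer], nil
	default:
		return 0, fmt.Errorf("unsupported head_count type %T", v)
	}
}

func main() {
	// Scalar encoding: all layers report the same value.
	hc, _ := headCountForLayer(uint32(32), 5)
	fmt.Println(hc)

	// Per-layer encoding, as in Nemotron-style architectures.
	hc, _ = headCountForLayer([]uint32{8, 8, 4, 4}, 2)
	fmt.Println(hc)
}
```

Callers like GraphSize would then ask for the head count of a specific layer instead of a single model-wide value.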

Relevant log output

panic: interface conversion: interface {} is *ggml.array, not uint32

goroutine 27 [running]:
github.com/ollama/ollama/fs/ggml.keyValue[...](0xc00010a570, {0x7ff67b6bf1a3, 0x14}, {0xc000624548, 0x1, 0x7ff67a55e960})
C:/a/ollama/ollama/fs/ggml/ggml.go:146 +0x2de
github.com/ollama/ollama/fs/ggml.KV.Uint(...)
C:/a/ollama/ollama/fs/ggml/ggml.go:96
github.com/ollama/ollama/fs/ggml.KV.HeadCount(...)
C:/a/ollama/ollama/fs/ggml/ggml.go:56
github.com/ollama/ollama/fs/ggml.GGML.GraphSize({{0x7ff67b874828?, 0xc000726000?}, {0x7ff67b8747d8?, 0xc00018d808?}}, 0x20000, 0x200, {0x0, 0x0})
C:/a/ollama/ollama/fs/ggml/ggml.go:418 +0x137
github.com/ollama/ollama/llm.EstimateGPULayers({_, _, _}, , {, _, _}, {{0x20000, 0x200, 0xffffffffffffffff, ...}, ...})
C:/a/ollama/ollama/llm/memory.go:140 +0x659
github.com/ollama/ollama/llm.PredictServerFit({0xc00004bba8?, 0x7ff67a540f2e?, 0xc00004b8c0?}, 0xc000350060, {0xc00004b908?, _, _}, {0x0, 0x0, 0x0}, ...)
C:/a/ollama/ollama/llm/memory.go:23 +0xbd
github.com/ollama/ollama/server.pickBestFullFitByLibrary(0xc000570000, 0xc000350060, {0xc000160600?, 0x2?, 0x2?}, 0xc00004bcf8)
C:/a/ollama/ollama/server/sched.go:714 +0x6f3
github.com/ollama/ollama/server.(*Scheduler).processPending(0xc00009a8a0, {0x7ff67b878800, 0xc000726ff0})
C:/a/ollama/ollama/server/sched.go:226 +0xe6b
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
C:/a/ollama/ollama/server/sched.go:108 +0x1f
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
C:/a/ollama/ollama/server/sched.go:107 +0xb1
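The panic at the top of the trace is the standard Go failure mode for a bare type assertion: asserting an interface value to a concrete type it does not hold panics, whereas the comma-ok form reports failure gracefully. This minimal, illustrative reproduction (not ollama's code; a []uint32 stands in for the *ggml.array the KV actually holds) shows the same class of error:

```go
package main

import "fmt"

func main() {
	var v any = []uint32{8, 4} // stand-in for the array-typed KV value

	// Comma-ok assertion: no panic, ok is simply false.
	if n, ok := v.(uint32); ok {
		fmt.Println("scalar:", n)
	} else {
		fmt.Printf("not a uint32, got %T\n", v)
	}

	// A bare assertion such as `n := v.(uint32)` here would panic with
	// "interface conversion: interface {} is []uint32, not uint32",
	// mirroring the stack trace above.
}
```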

OS

Windows

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 13:25:41 -05:00
Author
Owner

@dpk-it commented on GitHub (Mar 25, 2025):

related to https://github.com/ollama/ollama/issues/8460

Author
Owner

@rick-github commented on GitHub (Apr 12, 2025):

Also needed for OpenELM (#3910).

Reference: github-starred/ollama#32301