[PR #15744] [CLOSED] x/mlxrunner: apply config.json per-tensor quant overrides for mixed-precision MoE #77582

Closed
opened 2026-05-05 10:15:03 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15744
Author: @jodagreyhame
Created: 4/22/2026
Status: Closed

Base: main ← Head: pr2/mlxrunner-quant-config-overrides


📝 Commits (2)

  • 3d9e540 x/mlxrunner: recognise mlx-lm plural aux naming at load time
  • 0b94171 x/mlxrunner: apply config.json per-tensor quant overrides

📊 Changes

10 files changed (+616 additions, -28 deletions)

View changed files

x/mlxrunner/model/config_quant.go (+80 -0)
x/mlxrunner/model/config_quant_test.go (+212 -0)
📝 x/mlxrunner/model/embedding.go (+14 -3)
📝 x/mlxrunner/model/embedding_test.go (+32 -0)
📝 x/mlxrunner/model/linear.go (+18 -3)
x/mlxrunner/model/linear_test.go (+44 -0)
📝 x/mlxrunner/model/quant.go (+6 -0)
📝 x/mlxrunner/model/root.go (+30 -1)
📝 x/mlxrunner/runner.go (+68 -21)
x/mlxrunner/runner_test.go (+112 -0)

📄 Description

Summary

Fixes #15746 — panic: runtime error: index out of range [0] with length 0 in SparseMoE.Forward when running mlx-lm mixed-precision NVFP4 MoE models imported via ollama create --experimental. Concretely: Qwen 3.6 35B-A3B NVFP4 crashes on the first token without this change.

Root cause: mlx-lm stores per-path quantisation overrides in config.json's quantization block, not in the tensor blob __metadata__. Ollama's MLX runner only reads blob metadata, which ollama create fills from the global quant params. The MoE router gate (stored as affine 8-bit, BF16 scales+biases, group_size 64) is therefore fed to the NVFP4 dequant kernel at the global group_size 16, producing a zero-shape output; Argpartition on the zero-shape tensor panics.

This PR makes the runner read config.json's quant block, apply per-path overrides to tensorQuant, and route each linear/embedding layer through the correct dequant kernel.

Stacking and prior art

Depends on the naming-recognition PR (load-time plural aux acceptance). Reviewers who want to look at this PR in isolation can use the base branch; the CI run and the regression guard below both require the naming PR's changes to be present first.

Depends on: #15743

Extends #15409 ("mlx: mixed-precision quant and capability detection improvements"). That PR put the per-tensor-quant-metadata machinery in place on the assumption that overrides live in the blob __metadata__. For mlx-lm-produced imports, overrides actually live in config.json's quantization block and are not round-tripped into blob __metadata__, so the override path introduced by #15409 is never populated for those models. This PR plugs config.json in as the second source.

This PR does not fix #15632 (qwen3.6:35b-a3b-nvfp4 fails to load: layer 0 missing linear attention projections) — that failure is on a different code path (attention-tensor loading, before the quant-metadata code runs).

What changed

TensorQuantInfo carries explicit bits and mode

x/mlxrunner/model/root.go

type TensorQuantInfo struct {
    QuantType string
    GroupSize int
    Bits      int    // NEW — 0 = inherit from QuantType lookup
    Mode      string // NEW — "" = inherit from QuantType lookup
}

The zero value of the new fields preserves existing behaviour: callers that set only QuantType/GroupSize get the same result as before.

Resolver honours the new fields

x/mlxrunner/model/quant.go — three lines added to TensorQuantParams. When tq.Bits != 0, use it instead of QuantizationParams(tq.QuantType).bits. Same for tq.Mode.
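The resolver change can be sketched as follows. This is a minimal, self-contained illustration, not the actual Ollama code: quantParams and quantTypeDefaults are hypothetical stand-ins for the real QuantizationParams lookup in x/mlxrunner/model/quant.go.

```go
package main

import "fmt"

// quantParams is a stand-in for the result of the QuantizationParams lookup.
type quantParams struct {
	bits      int
	groupSize int
	mode      string
}

// Illustrative defaults: NVFP4 globals vs affine 8-bit (the MoE gate case).
var quantTypeDefaults = map[string]quantParams{
	"nvfp4":  {bits: 4, groupSize: 16, mode: "nvfp4"},
	"affine": {bits: 8, groupSize: 64, mode: "affine"},
}

type TensorQuantInfo struct {
	QuantType string
	GroupSize int
	Bits      int    // 0 = inherit from QuantType lookup
	Mode      string // "" = inherit from QuantType lookup
}

// resolve mirrors the described TensorQuantParams change: explicit Bits/Mode
// on the per-tensor entry win over the QuantType lookup; zero values inherit.
func resolve(tq TensorQuantInfo) quantParams {
	p := quantTypeDefaults[tq.QuantType]
	if tq.GroupSize != 0 {
		p.groupSize = tq.GroupSize
	}
	if tq.Bits != 0 { // NEW: explicit bits override the lookup
		p.bits = tq.Bits
	}
	if tq.Mode != "" { // NEW: explicit mode overrides the lookup
		p.mode = tq.Mode
	}
	return p
}

func main() {
	// Zero-value new fields: behaviour unchanged.
	fmt.Println(resolve(TensorQuantInfo{QuantType: "nvfp4"}))
	// Gate override: affine 8-bit at group_size 64, despite NVFP4 globals.
	fmt.Println(resolve(TensorQuantInfo{QuantType: "nvfp4", GroupSize: 64, Bits: 8, Mode: "affine"}))
}
```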

New: config.json quant override parser

x/mlxrunner/model/config_quant.go (new)

func readConfigQuantOverrides(m *manifest.ModelManifest) (
    TensorQuantInfo,             // globals
    map[string]*TensorQuantInfo, // per-path overrides, keyed by <path>.weight
    error,
)
  • Reads config.json via manifest.ReadConfig. Returns zero values + nil error on missing/malformed config (silent fallback).
  • Accepts both "quantization" and "quantization_config" as the top-level key — mlx-lm writes both, depending on version.
  • Scalar children of the block → globals.
  • Object children whose keys are dotted module paths → per-path overrides keyed as <path>.weight.
  • Mode rule: if override specifies mode, use it; if omitted, use "affine". This matches mlx.nn.Linear.to_quantized's default (verified by reading the MLX source).
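The parsing rules above can be sketched roughly as below. This is a hedged, simplified illustration: parseQuantBlock is a hypothetical name, it takes raw JSON rather than going through manifest.ReadConfig, and it keeps values untyped instead of building TensorQuantInfo.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseQuantBlock illustrates the described rules: accept both top-level
// keys, treat scalar children as globals, treat object children (dotted
// module paths) as per-path overrides keyed <path>.weight, and default an
// omitted mode to "affine". Malformed input falls back silently.
func parseQuantBlock(raw []byte) (globals map[string]any, overrides map[string]map[string]any) {
	globals = map[string]any{}
	overrides = map[string]map[string]any{}
	var cfg map[string]json.RawMessage
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return // silent fallback on malformed config
	}
	block, ok := cfg["quantization"]
	if !ok {
		block, ok = cfg["quantization_config"] // mlx-lm alias
	}
	if !ok {
		return // no quantization block: zero values
	}
	var kv map[string]json.RawMessage
	if json.Unmarshal(block, &kv) != nil {
		return
	}
	for k, v := range kv {
		var obj map[string]any
		if json.Unmarshal(v, &obj) == nil {
			if _, ok := obj["mode"]; !ok {
				obj["mode"] = "affine" // omitted mode coerces to affine
			}
			overrides[k+".weight"] = obj // keyed as <path>.weight
			continue
		}
		var scalar any
		json.Unmarshal(v, &scalar) // scalar child → global
		globals[k] = scalar
	}
	return
}

func main() {
	cfg := []byte(`{"quantization": {"group_size": 16, "bits": 4,
		"model.layers.0.mlp.gate": {"group_size": 64, "bits": 8}}}`)
	g, o := parseQuantBlock(cfg)
	fmt.Println(g["group_size"], g["bits"])
	fmt.Println(o["model.layers.0.mlp.gate.weight"]["mode"])
}
```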

Root.Open merges overrides over blob metadata

x/mlxrunner/model/root.go

The blob scan populates tensorQuant from __metadata__ as before. Afterwards, readConfigQuantOverrides runs and the per-path entries overwrite blob entries for the paths config.json specifies.

This direction is deliberate: ollama create fills every blob's __metadata__ from the config.json globals, not from per-path overrides. So for any path config.json overrides, the blob metadata entry is definitionally wrong. Paths not overridden by config.json still come from blob metadata (which is correct for those paths). Ollama-registry-published models don't ship a quantization block, so the override map is empty for them and their behaviour is untouched.
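The merge direction reads as a one-way overwrite; a minimal sketch (illustrative paths and a simplified TensorQuantInfo, not the actual Root.Open code):

```go
package main

import "fmt"

type TensorQuantInfo struct {
	QuantType string
	GroupSize int
	Bits      int
	Mode      string
}

func main() {
	// The blob scan fills every entry from the global params (NVFP4, gs=16),
	// because ollama create wrote __metadata__ from the config.json globals.
	tensorQuant := map[string]*TensorQuantInfo{
		"model.layers.0.mlp.gate.weight":      {QuantType: "nvfp4", GroupSize: 16},
		"model.layers.0.mlp.experts.0.weight": {QuantType: "nvfp4", GroupSize: 16},
	}
	// config.json per-path overrides (here: the affine-g64-b8 gate).
	overrides := map[string]*TensorQuantInfo{
		"model.layers.0.mlp.gate.weight": {GroupSize: 64, Bits: 8, Mode: "affine"},
	}
	// Overrides overwrite blob entries for the paths config.json specifies;
	// all other paths keep their (correct) blob-metadata values.
	for path, info := range overrides {
		tensorQuant[path] = info
	}
	fmt.Println(*tensorQuant["model.layers.0.mlp.gate.weight"])
	fmt.Println(*tensorQuant["model.layers.0.mlp.experts.0.weight"])
}
```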

Open(modelName) keeps its existing public signature; internal work moves to openFromManifest(m) so tests can inject a fake manifest without touching the filesystem model store.

A note on model vs architecture names

The repro model is Qwen 3.6 35B-A3B (the user-facing release name). Its Python/HF architecture class is still Qwen3_5MoeForConditionalGeneration, inherited unchanged from Qwen 3.5 — which is why Ollama's source for it sits in x/models/qwen3_5/. The crash in SparseMoE.Forward at x/models/qwen3_5/qwen3_5.go:~1320 is the symptom; this PR fixes it upstream in x/mlxrunner/model so no x/models/qwen3_5/ code is touched.

Tests

All new tests live in x/mlxrunner/model/:

In config_quant_test.go (new file):

  • TestReadConfigQuantOverrides_NoConfig — manifest without config.json → zero values.
  • TestReadConfigQuantOverrides_NoQuantizationBlock — config.json without quantization block → zero values.
  • TestReadConfigQuantOverrides_FlatQuantization — globals populated, no per-path.
  • TestReadConfigQuantOverrides_PerPathOverrideWithExplicitMode — explicit mode respected.
  • TestReadConfigQuantOverrides_PerPathOverrideOmittedModeIsAffine — the Qwen 3.6 path; override without mode coerces to "affine".
  • TestReadConfigQuantOverrides_QuantizationConfigAliasAccepted — both top-level keys recognised.
  • TestReadConfigQuantOverrides_MultipleOverrides — several paths captured.
  • TestReadConfigQuantOverrides_MalformedJSON — silent fallback.
  • TestRoot_OpenPopulatesFromConfigAndBlobs — regression guard for the metadata/override path that caused the panic: fake manifest with a global NVFP4 default plus a per-path affine-g64-b8 gate override; root.TensorQuant("…mlp.gate.weight") returns the override, not the global. (End-to-end "model runs without panic" verification is the manual step in the test plan.)

In quant_test.go:

  • TestTensorQuantParams_ExplicitBitsMode — new fields override QuantizationParams lookup.
  • TestResolveLinearQuantParams_PerTensorOverridesGlobalViaBitsMode — end-to-end resolver with the new fields.
  • TestResolveLinearQuantParams_InferenceSkippedWhenAffineFromTensorWithValidParams — guards against shape inference overriding a valid per-tensor entry.

Test plan

  • go test ./x/mlxrunner/model/... on macOS arm64 — all green (targeted + regression).
  • go test ./x/mlxrunner/model/... on Linux / CI — MLX cases skip, pure-Go cases pass.
  • Manual: ollama run on a vanilla Ollama-registry-published NVFP4 model — no regression.
  • Manual: ollama create --experimental + ollama run on an mlx-lm mixed-precision NVFP4 MoE model (Qwen 3.6 35B-A3B NVFP4 is a straightforward repro) — tokens generated without panic.

Files touched

x/mlxrunner/model/config_quant.go         new
x/mlxrunner/model/config_quant_test.go    new
x/mlxrunner/model/quant.go                +6 lines
x/mlxrunner/model/quant_test.go           +~100 lines (3 new tests)
x/mlxrunner/model/root.go                 +~30 lines (struct fields, Open refactor, merge loop)

Risk

  • Ollama-registry-published models — readConfigQuantOverrides returns empty when config.json has no quantization block. Their behaviour is unchanged. Covered by TestReadConfigQuantOverrides_NoQuantizationBlock.
  • Merge direction — overrides win for the paths they specify, which is the intended correction. Non-overridden paths still come from blob metadata.
  • MLX kernel support — mlx.QuantizedMatmul(mode="affine", gs=64, b=8) with BF16 scales+biases was verified against the MLX version linked in the build with a fabricated tensor of the exact on-disk shape; no C-side changes needed.

Known follow-ups

  • Vision tower wiring for Qwen3_5MoeForConditionalGeneration is still absent in the runner (images go through mlx-vlm only). Separate, larger work.
  • The override parser currently only supports dotted-path string keys at the top of the quantisation block. Future mlx-lm versions that emit true-nested dict structures would need a parser extension; trivially forward-compatible.

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 10:15:03 -05:00

Reference: github-starred/ollama#77582