[GH-ISSUE #15775] ollama crashes when running with qwen3.6:35b-a3b-coding-nvfp4 or qwen3.6:27b-coding-mxfp8 #56564

Closed
opened 2026-04-29 11:02:00 -05:00 by GiteaMirror · 8 comments

Originally created by @PaoloSupino on GitHub (Apr 23, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15775

What is the issue?

Launching ollama with qwen3.6:35b-a3b-coding-nvfp4 or qwen3.6:27b-coding-mxfp8 ends with (client side): Error: 500 Internal Server Error: mlx runner failed: golang.org/x/sync@v0.17.0/errgroup/errgroup.go:78 +0x90

On the server side the following appears in the output (~/.ollama/logs/server.log doesn't get updated, so I have to run it in the foreground):
panic: mlx: There is no Stream(gpu, 1) in current thread. at /private/tmp/mlx-c-20260422-17910-5s6jxl/mlx-c-0.6.0/mlx/c/transforms.cpp:73
panic: mlx: There is no Stream(gpu, 1) in current thread. at /private/tmp/mlx-c-20260422-17910-5s6jxl/mlx-c-0.6.0/mlx/c/transforms.cpp:15

goroutine 50 [running]:
github.com/ollama/ollama/x/mlxrunner/mlx.mlxCheck({0x10156c4c6, 0xb}, 0x148e67a172c8)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:68 +0xb8
github.com/ollama/ollama/x/mlxrunner/mlx.doEval({0x148e6717cb08, 0x50, 0x10048c544?}, 0x1)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:86 +0x11c
github.com/ollama/ollama/x/mlxrunner/mlx.AsyncEval(...)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:95
github.com/ollama/ollama/x/mlxrunner.(*cacheSession).close(0x148e66f4e300)
github.com/ollama/ollama/x/mlxrunner/cache.go:446 +0x198
panic({0x102213940?, 0x148e67978300?})
runtime/panic.go:860 +0x12c
github.com/ollama/ollama/x/mlxrunner/mlx.mlxCheck({0x10156c4c6, 0xb}, 0x148e67a17538)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:68 +0xb8
github.com/ollama/ollama/x/mlxrunner/mlx.doEval({0x148e6717c848, 0x50, 0x148e6717c588?}, 0x0)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:86 +0x11c
github.com/ollama/ollama/x/mlxrunner/mlx.Eval(...)
github.com/ollama/ollama/x/mlxrunner/mlx/mlx.go:99
github.com/ollama/ollama/x/mlxrunner.(*Runner).TextGenerationPipeline.func2(...)
github.com/ollama/ollama/x/mlxrunner/pipeline.go:103
github.com/ollama/ollama/x/mlxrunner.(*Runner).TextGenerationPipeline(_, {_, _}, {{{0x148e67919100, 0x3e}, {{0x8000, 0x200, 0xffffffffffffffff, 0x0, 0x0, ...}, ...}, ...}, ...})
github.com/ollama/ollama/x/mlxrunner/pipeline.go:127 +0x414
github.com/ollama/ollama/x/mlxrunner.(*Runner).Run.func1()
github.com/ollama/ollama/x/mlxrunner/runner.go:138 +0x138
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync@v0.17.0/errgroup/errgroup.go:93 +0x4c
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1
golang.org/x/sync@v0.17.0/errgroup/errgroup.go:78 +0x90

Runtime environment: MacBook Pro 16. CPU: M4 Max. RAM: 48GB. OS: macOS 15.7.5. ollama installed via homebrew and is up to date (ollama version is 0.21.1).

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 11:02:00 -05:00

@ohjakobsen commented on GitHub (Apr 23, 2026):

Same bug as reported in #15770


@milenkovicm commented on GitHub (Apr 23, 2026):

The same issue here; brew updated ollama and mlx-*:

mlx 0.31.1 -> 0.31.2
mlx-c 0.6.0 -> 0.6.0_1
ollama 0.21.0_1 -> 0.21.1

@andreinknv commented on GitHub (Apr 24, 2026):

Another reproduction, different hardware:

  • Mac Studio M4 Max, 36 GB unified (existing reports are MacBook Pro 48 GB)
  • Ollama 0.21.1 (brew) / MLX 0.31.2 / mlx-c 0.6.0

Both Qwen 3.6 nvfp4 variants panic through the identical path:

  • qwen3.6:27b-coding-nvfp4 (dense Qwen3_5ForConditionalGeneration runner, 2026-04-23)
  • qwen3.6:35b-a3b-coding-nvfp4 (MoE A3B runner, 2026-04-24)

Same stack on both: mlxCheck → doEval → TextGenerationPipeline ending in panic: mlx: There is no Stream(gpu, 1) in current thread. The dense and MoE paths share nothing model-specific — reinforces the thesis in #15793 that this is a Go↔MLX wrapper issue, not model code.

Rules out OOM. ollama ps right after the 35B-A3B pull, before any prompt:

qwen3.6:35b-a3b-coding-nvfp4  20 GB  100% GPU  262144 ctx

Loads cleanly with the full 256 K context on a 36 GB box; panic fires on first Eval(), not at load — consistent with the thread-local stream regression in MLX 0.31 described in #15793.

Side note on the mxfp8 variant named in this issue's title: on this 36 GB box it's a separate failure (OOM on load — weights ~31 GB, needs 29 GiB Metal VRAM, Metal budget 27.6 GiB), not the Stream(gpu) panic. The stream panic only reproduces on quants that fit in VRAM.


@PaoloSupino commented on GitHub (Apr 24, 2026):

I'm no software engineer (actually a somewhat dated sysadmin), but after listening to Lenny's podcast with Boris Cherny (Claude Code lead) I decided to try Claude Code. I worked with it this afternoon (free plan, so it took three broken sessions), and now I have a build of ollama that works with both qwen3.6:35b-a3b-coding-nvfp4 and 35b-a3b-coding-mxfp8 :-). Because it's late at night where I am, I'll publish a patch tomorrow morning that will hopefully fix the issue for others who have hit the same problem. Here's a summary of the issues (obviously written by Claude Code):

1 - Thread safety bug (x/mlxrunner/cache.go) — MLX GPU streams are OS-thread-local, but cacheSession.close() was calling AsyncEval() without holding the OS thread lock. The Go scheduler would move the goroutine to a different thread where the stream didn't exist → panic. Fixed by switching to Eval() which handles thread locking internally.

2 - Xcode 26 SDK / macOS 15 mismatch (mlx/backend/metal/device.cpp) — MLX's Metal language version detection checks __builtin_available(macOS 26) which evaluates true when building with Xcode 26 SDK even on macOS 15, because the check tests the SDK version not the running OS. This caused Metal 4.0 to be requested at runtime, which macOS 15 doesn't support, silently falling back to an ancient version without bfloat16_t → panic. Fixed by removing the macOS 26 branch.

3 - Space in repo path (environment) — CMake's file-embedding tool splits paths at spaces, so having the repo under artificial inteligence/ollama caused bf16.h to never be embedded into the JIT Metal kernel source, making bfloat16_t undefined → same panic as bug 2. Fixed by moving the repo to a space-free path.


@andreinknv commented on GitHub (Apr 25, 2026):

Confirming #15793 fixes this on my 36 GB Mac Studio M4 Max. After building from that PR, qwen3.6:27b-coding-nvfp4 runs cleanly on Metal at ~21 tok/s — no panic on first Eval(). Full environment details, before/after logs, and throughput numbers in my comment on the PR.

Big thanks to @pd95 for the diagnosis and fix.


@andreinknv commented on GitHub (Apr 25, 2026):

@PaoloSupino — before you publish tomorrow: #15793 already has the thread-safety fix (persistent LockOSThread + per-thread DefaultStream), and I just verified it resolves the panic on Mac Studio M4 Max 36 GB. Worth reading that PR first; it's a more structural fix than a single AsyncEval → Eval swap.

Your space-in-path finding is separately a real upstream MLX bug worth its own small PR. It traces to mlx/backend/metal/make_compiled_preamble.sh:57 — declare -a HDRS_LIST=($HDRS) is unquoted, so bash IFS-splits on spaces and breaks the depth:path pairing. A line-by-line parse is the clean fix. It's a different failure mode from this issue (an MSL compile error, not a stream lookup), so filing it as its own PR against ml-explore/mlx keeps it easy for maintainers to review.

Thanks for digging in — more people reasoning through runner-level issues is a good thing.

(Disclosure: PR verification on my hardware and the preamble-script line trace were done with Claude Code.)


@paolo-supino-mm commented on GitHub (Apr 25, 2026):

@andreinknv, sure thing (if it works for me too😀)


@PaoloSupino commented on GitHub (Apr 25, 2026):

@andreinknv tested... working for me too :-)

For everyone: follow #15793


Reference: github-starred/ollama#56564