[GH-ISSUE #14049] Qwen3-Coder-Next (Local Model Request) #55694

Closed
opened 2026-04-29 09:35:33 -05:00 by GiteaMirror · 22 comments

Originally created by @asitwere on GitHub (Feb 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14049

HF: https://huggingface.co/collections/Qwen/qwen3-coder-next

Qwen3-Coder-Next is an 80B MoE model (3B active parameters) with 256K context for fast agentic coding and local use. It delivers performance comparable to LLMs with 10-20× more active parameters, and its architecture enables fast inference. It excels at long-horizon reasoning and complex tool use.

Works with 46GB of RAM/VRAM/unified memory (85GB for 8-bit).

Runs locally via Dynamic GGUFs (including MXFP4).
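For reference, pulling a Hugging Face GGUF directly uses ollama's `hf.co/...` syntax; the repo and tag below are illustrative (see the compatibility caveats discussed in the comments):

```console
# Illustrative pull of a community quant; the specific repo/tag are assumptions.
ollama run hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M
```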


@rick-github commented on GitHub (Feb 3, 2026):

This is qwen3next architecture and so can be [imported](https://github.com/ollama/ollama/blob/main/docs/import.mdx) from the provided [GGUFs](https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF). Due to their size, the GGUFs are split, so merging with `llama-gguf-split` from `llama.cpp` is required.
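A minimal sketch of that merge-and-import flow (the shard file names are hypothetical; `llama-gguf-split --merge` and `ollama create -f` are the documented tools):

```console
# Merge the split GGUF shards into one file; pass the first shard and the output path.
llama-gguf-split --merge Qwen3-Coder-Next-Q4_K_M-00001-of-00002.gguf qwen3-coder-next-q4_K_M.gguf

# Import the merged GGUF into ollama via a one-line Modelfile, then run it.
echo 'FROM ./qwen3-coder-next-q4_K_M.gguf' > Modelfile
ollama create qwen3-coder-next -f Modelfile
ollama run qwen3-coder-next
```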


@Orbiter commented on GitHub (Feb 3, 2026):

I tried both `hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M` and `hf.co/lmstudio-community/Qwen3-Coder-Next-GGUF:Q4_K_M`, which are single-file GGUFs, and both returned an `error loading model: missing tensor 'blk.0.ssm_in.weight'`:

```
% ollama --version
ollama version is 0.15.4
% ollama run hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M
pulling manifest
pulling e83904064313: 100% ▕██████████████████████████████████████████████████████▏  48 GB
pulling 62fbfd9ed093: 100% ▕██████████████████████████████████████████████████████▏  182 B
pulling 6db07cd2f395: 100% ▕██████████████████████████████████████████████████████▏   57 B
pulling bb61354e916f: 100% ▕██████████████████████████████████████████████████████▏  564 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'

% ollama run hf.co/lmstudio-community/Qwen3-Coder-Next-GGUF:Q4_K_M
pulling manifest
pulling b76696cae4f3: 100% ▕██████████████████████████████████████████████████████▏  48 GB
pulling 62fbfd9ed093: 100% ▕██████████████████████████████████████████████████████▏  182 B
pulling 6db07cd2f395: 100% ▕██████████████████████████████████████████████████████▏   57 B
pulling ac42a302f10f: 100% ▕██████████████████████████████████████████████████████▏  564 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
```

@rick-github commented on GitHub (Feb 3, 2026):

I used the GGUFs that Qwen supplied and had no problems:

````console
$ ollama run frob/qwen3-coder-next:80b-a3b-q4_K_M
>>> write a golang program that prints 'hello world'
Here is a simple Go program that prints "hello world":

```go
package main

import "fmt"

func main() {
    fmt.Println("hello world")
}
```

To run this program:

1. Save the code in a file named `main.go`
2. Open a terminal in the same directory
3. Run: `go run main.go`

You should see the output: `hello world`

*(Note: By convention, Go developers typically capitalize the "W" in "Hello World", but I've kept it lowercase exactly as requested.)*
````

@Orbiter commented on GitHub (Feb 3, 2026):

Great! I can confirm that `ollama run frob/qwen3-coder-next:80b-a3b-q4_K_M` is working. I will now run benchmarks.


@snapo commented on GitHub (Feb 3, 2026):

> I tried both `hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M` and `hf.co/lmstudio-community/Qwen3-Coder-Next-GGUF:Q4_K_M`, which are single-file GGUFs, and both returned an `error loading model: missing tensor 'blk.0.ssm_in.weight'`:
>
> ```
> % ollama --version
> ollama version is 0.15.4
> % ollama run hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_M
> pulling manifest
> pulling e83904064313: 100% ▕██████████████████████████████████████████████████████▏  48 GB
> pulling 62fbfd9ed093: 100% ▕██████████████████████████████████████████████████████▏  182 B
> pulling 6db07cd2f395: 100% ▕██████████████████████████████████████████████████████▏   57 B
> pulling bb61354e916f: 100% ▕██████████████████████████████████████████████████████▏  564 B
> verifying sha256 digest
> writing manifest
> success
> Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
>
> % ollama run hf.co/lmstudio-community/Qwen3-Coder-Next-GGUF:Q4_K_M
> pulling manifest
> pulling b76696cae4f3: 100% ▕██████████████████████████████████████████████████████▏  48 GB
> pulling 62fbfd9ed093: 100% ▕██████████████████████████████████████████████████████▏  182 B
> pulling 6db07cd2f395: 100% ▕██████████████████████████████████████████████████████▏   57 B
> pulling ac42a302f10f: 100% ▕██████████████████████████████████████████████████████▏  564 B
> verifying sha256 digest
> writing manifest
> success
> Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
> ```

I have exactly the same issue with the unsloth quants.


@Orbiter commented on GitHub (Feb 3, 2026):

> I will now run benchmarks.

I tested this with 200 problems from the Project Euler problem set, using Python coding only, and it performs 10% better than Qwen3-Next and better overall than any other free/open model I have tested so far. I will continue testing with other languages.


@rick-github commented on GitHub (Feb 3, 2026):

Do you have any comparisons with qwen3-coder:{30b,480b}?


@shuasimodo commented on GitHub (Feb 4, 2026):

I'm pretty sure the 500 error with qwen3-next models from other (non-ollama-library) sources like unsloth is due to [llama.cpp updates](https://github.com/ggml-org/llama.cpp/releases?q=qwen3-next&expanded=true) from around late December. When llama.cpp is used to quantize the next-series models, the result doesn't work with ollama. There are similar issues with glm-4.7-flash, but that's not really relevant to my point.

I've tried to quantize my own qwen3-next models using llama.cpp and get the same error as when I download quants from any of the sources mentioned. The ollama models, however, work fine.

On December 3rd, ollama shipped qwen3-next support in [v0.13.2](https://github.com/ollama/ollama/releases/tag/v0.13.2); ollama library models and the Qwen GGUF models work.

On December 16th, llama.cpp made some [improvements for qwen3-next](https://github.com/ggml-org/llama.cpp/releases/tag/b7432).

And then there was one more update after that, 3 weeks ago.

Around early January is when unsloth updated their [qwen3-next models](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF/discussions/2), and the dates correlate with the qwen3-next updates in llama.cpp ([the models were updated 23 days ago](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF/tree/main), which lines up with the latest llama.cpp update for qwen3-next).

- These models no longer work in ollama.

Around the same time as the latest unsloth update, [noctrex created an MXFP4 quant from that model](https://huggingface.co/noctrex/Qwen3-Next-80B-A3B-Thinking-MXFP4_MOE-GGUF), and specifically mentioned that you need to download the latest llama.cpp to run it.

Before these new updated releases, the old models worked well in ollama, both the unsloth quants and the old noctrex MXFP4. After these releases, their qwen3-next models no longer work in ollama and all give the same error:

```
500: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
llama_model_load_from_file_impl: failed to load model
```

I could be wrong; I'm just trying to share my experience and issues that seem relevant to that update.


@rick-github commented on GitHub (Feb 4, 2026):

0.15.5-rc1 contains a vendor sync which pulls in the latest llama.cpp code, and the unsloth and lmstudio-community models work with that version. There are some problems (#14044, #14045) with 0.15.5-rc1 related to the vendor sync, so it may not make it into 0.15.5, but it will eventually be merged.

```console
$ ollama -v
ollama version is 0.15.5-rc1
$ ollama run hf.co/lmstudio-community/Qwen3-Coder-Next-GGUF:Q4_K_M hello
Hello! How can I help you today? 😊
```

@Orbiter commented on GitHub (Feb 4, 2026):

> Do you have any comparisons with qwen3-coder:{30b,480b}?

frob/qwen3-coder-next:80b-a3b-q4_K_M is now leading the [Project Euler benchmark](https://github.com/Orbiter/project-euler-llm-benchmark) for instruct models: it is about 4% above qwen3-next (I test with 4-bit quantization only) and 193% above qwen3-coder-30b (it scores almost 3 times better!). For qwen3-coder-480b I only have an older benchmark, in which qwen3-coder-480b is comparable to qwen3:235b-a22b-instruct-2507; frob/qwen3-coder-next:80b-a3b-q4_K_M is 22% above qwen3:235b-a22b-instruct-2507.

The main point for me is whether qwen3-coder-next:80b is good at agentic tasks, so I did a tiny manual test with opencode, where it worked pretty well at first glance. But I don't want to hijack the topic here; fortunately there is a fix in 0.15.5 now. Thank you all for this great work!


@jmorganca commented on GitHub (Feb 4, 2026):

Hi all, it's available in the library here: https://ollama.com/library/qwen3-coder-next

Note: you'll need the 0.15.5-rc2 or later pre-release: https://github.com/ollama/ollama/releases/
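If it helps, the Linux install script supports pinning a version via the documented `OLLAMA_VERSION` variable; the rc string below mirrors the release mentioned above:

```console
# Install a specific pre-release on Linux via the official install script.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.15.5-rc2 sh
ollama -v   # should report 0.15.5-rc2
```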


@ppulwey commented on GitHub (Feb 4, 2026):

After pulling the model with ollama 0.15.5-rc2 I get this:

```
ollama run qwen3-coder-next:q4_K_M
pulling manifest
pulling 30e51a7cb1cf: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  51 GB
pulling 7339fa418c9a: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB
pulling d3cd8304aca0: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏   42 B
pulling 5d55cac51f30: 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  490 B
verifying sha256 digest
writing manifest
success
>>> hi
HelloError: an error was encountered while running the model: error:tensor ' (view)' buffer is nil
```

Is this an error with the model or with my system?


@rick-github commented on GitHub (Feb 4, 2026):

[Server logs](https://docs.ollama.com/troubleshooting) may aid in debugging.
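For reference, typical ways to collect those logs (per the linked troubleshooting docs; exact paths can vary by install):

```console
# Linux (systemd service): view recent server logs.
journalctl -e -u ollama

# macOS: the server log is written under the ollama home directory.
cat ~/.ollama/logs/server.log

# For more detail, stop the service and run the server with debug logging.
OLLAMA_DEBUG=1 ollama serve
```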


@c0008 commented on GitHub (Feb 4, 2026):

Qwen3-Coder-Next is also not working for me with 0.15.5-rc2. I tried different quants:

hf.co/unsloth/Qwen3-Coder-Next-GGUF:Q4_K_XL:

```
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
llama_model_load_from_file_impl: failed to load model
```

hf.co/unsloth/Qwen3-Coder-Next-GGUF:MXFP4_MOE:

```
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
```

qwen3-coder-next:latest:
Here the model loads but fails at token generation.

```
Error: 500 Internal Server Error: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details
```

The older Qwen3-Next model loads and works fine, though.


@rick-github commented on GitHub (Feb 4, 2026):

@ppulwey #14076
@c0008 The vendor sync was rolled back, so the unsloth and lmstudio models will not load. Logs for the failure of qwen3-coder-next:latest would aid in debugging.


@lukek commented on GitHub (Feb 5, 2026):

I only have 8GB VRAM and 64GB RAM, so for me this is more about trying the model out; it's not fast enough to use effectively.

`hf.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF:Q4_K_M` did not work for me; I got this error (on ollama version 0.15.5-rc3):

```
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ssm_in.weight'
```

The frob model worked perfectly: `ollama run frob/qwen3-coder-next:80b-a3b-q4_K_M`

Link: https://ollama.com/frob/qwen3-coder-next


@rick-github commented on GitHub (Feb 5, 2026):

The unsloth and lmstudio models (and probably most on HF) were quantized with llama.cpp and need llama.cpp support to run in ollama. That support was briefly present in -rc1, which had a vendor sync, but it was rolled back in -rc2 because of performance issues with glm-4.7-flash. The frob model was imported from the GGUFs supplied by Qwen, so it does not need llama.cpp support to run in ollama.


@snapo commented on GitHub (Feb 6, 2026):

> The unsloth and lmstudio models (and probably most on HF) were quantized with llama.cpp and need llama.cpp support to run in ollama. That support was briefly present in -rc1, which had a vendor sync, but it was rolled back in -rc2 because of performance issues with glm-4.7-flash. The frob model was imported from the GGUFs supplied by Qwen, so it does not need llama.cpp support to run in ollama.

Does that mean it won't be fixed?


@Orbiter commented on GitHub (Feb 6, 2026):

When I compare the frob and non-frob qwen3-coder-next models, both work, but the new `qwen3-coder-next:Q4_K_M` is 3GB larger than the `frob/qwen3-coder-next:80b-a3b-q4_K_M` model:

```
% ollama ls
NAME                                          ID              SIZE      MODIFIED
qwen3-coder-next:Q4_K_M                       ca06e9e4087c    51 GB     3 hours ago
frob/qwen3-coder-next:80b-a3b-q4_K_M          a4a1e21bdeb6    48 GB     2 days ago
```

Is that correct? Was there something missing in the frob model?


@rick-github commented on GitHub (Feb 6, 2026):

> Does that mean it won't be fixed?

It means that the unsloth and lmstudio models will be supported at the next vendor sync.

> Was there something missing in the frob model?

The frob model is created from the GGUFs supplied by Qwen, the model authors, so it's unlikely to be missing something. Comparing the tensors, it looks like ollama has chosen to break some tensors up in some layers and to use Q6_K instead of Q4_K in others. Whether or not this makes a qualitative difference in model performance is unknown. If they both work, choosing one over the other depends on the execution environment; e.g. frob/qwen3-coder-next:80b-a3b-q4_K_M will run faster than qwen3-coder-next:Q4_K_M on GPUs with 48G VRAM.
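One rough way to do that kind of tensor comparison yourself, as a sketch: the file paths below are hypothetical, and `gguf-dump` ships with the `gguf` Python package from llama.cpp's gguf-py (its default output includes the per-tensor name and quant type, which `grep`/`diff` can compare).

```console
# Dump per-tensor metadata for each model and diff the block tensors.
pip install gguf
gguf-dump frob-q4_K_M.gguf    | grep 'blk\.' > frob-tensors.txt
gguf-dump library-q4_K_M.gguf | grep 'blk\.' > library-tensors.txt
diff frob-tensors.txt library-tensors.txt
```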


@snapo commented on GitHub (Feb 6, 2026):

any idea "when" this "vendor sync" will happen? (still dont understand what it means, always tought ollama just uses llama.cpp in the background)


@rick-github commented on GitHub (Feb 6, 2026):

The vendor sync is when the bits of the llama.cpp repo that ollama uses get merged into the ollama repo. It's not just a matter of copying the files over and committing; checks need to be done that the code is properly integrated. For example, the most recent vendor sync in 0.15.5-rc1 adversely affected the performance of glm-4.7-flash, so it was reverted to allow the release of 0.15.5. The developers will now start another branch for the vendor sync, testing the code and making changes where necessary. When the branch presents no issues, it will be merged into main.

The problem with glm-4.7-flash is being investigated in #14045 and a fix has been proposed.
