[GH-ISSUE #14503] qwen35/qwen35moe models downloaded from HuggingFace are unsupported. #55922

Closed
opened 2026-04-29 09:57:12 -05:00 by GiteaMirror · 31 comments

Originally created by @rick-github on GitHub (Feb 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14503

Originally assigned to: @jmorganca on GitHub.

What is the issue?

ollama 0.17.1+ introduced support for the qwen35/qwen35moe architectures in the ollama Go-based engine, along with models based on these architectures in the ollama library. However, qwen3.5 models from HuggingFace have a single scalar for `qwen35moe.attention.head_count_kv`, whereas qwen3.5 models from the ollama library have an array:

| source | value |
| -- | -- |
| ollama library | [0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2,0,0,0,2] |
| HuggingFace | 2 |

This causes `NewTextProcessor()` to fail during `fsggml.Decode()` with `qwen3next: invalid attention.head_count_kv array; expected mix of zero and non-zero values`. The ollama server tries to fall back to the llama.cpp engine, but because support for qwen35 has not been merged there yet (#14134), the model fails to load.
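For illustration, the two representations encode the same thing: the per-layer array marks which blocks use full attention (non-zero KV heads) and which use linear attention (zero). Below is a minimal Go sketch of the normalization that would reconcile them, assuming the scalar applies to every `full_attention_interval`-th layer; this is a hypothetical helper, not ollama's actual code:

```go
package main

import "fmt"

// expandHeadCountKV expands a scalar head_count_kv (as written by the
// HuggingFace/llama.cpp converter) into the per-layer array the ollama
// engine expects: zero KV heads on linear-attention layers, the scalar
// value on every full_attention_interval-th layer.
func expandHeadCountKV(scalar, blockCount, fullAttnInterval uint32) []uint32 {
	out := make([]uint32, blockCount)
	for i := uint32(0); i < blockCount; i++ {
		if (i+1)%fullAttnInterval == 0 {
			out[i] = scalar
		}
	}
	return out
}

func main() {
	// block_count=40 and full_attention_interval=4 (the values visible in
	// the GGUF metadata dumps below) reproduce the ollama library array exactly.
	fmt.Println(expandHeadCountKV(2, 40, 4))
}
```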

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 09:57:12 -05:00

@rick-github commented on GitHub (Feb 27, 2026):

Note that this includes finetunes and modified models like Heretic, and any models quantized using the llama.cpp quantizer.


@alexliu2008 commented on GitHub (Feb 28, 2026):

Yes, me too!

```shell
llama_model_loader: - type q8_0: 338 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 39.09 GiB (9.69 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-28T14:15:30.988+08:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=C:\Users\admin\.ollama\models\blobs\sha256-9a334b4433cf66369cde4238d9ce882a4275edac317254fa475e2368443521ed error="unable to load model: C:\\Users\\admin\\.ollama\\models\\blobs\\sha256-9a334b4433cf66369cde4238d9ce882a4275edac317254fa475e2368443521ed"
```
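As a quick diagnostic, you can confirm which architecture string a downloaded GGUF declares without loading it. A minimal sketch, assuming a little-endian GGUF and that `general.architecture` is the first metadata key (the usual convention; this is not a general-purpose parser):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// readArchitecture reads the first metadata key/value of a GGUF file,
// which by convention is general.architecture (a string).
func readArchitecture(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var hdr struct {
		Magic   [4]byte
		Version uint32
		Tensors uint64
		KVs     uint64
	}
	if err := binary.Read(f, binary.LittleEndian, &hdr); err != nil {
		return "", err
	}
	if string(hdr.Magic[:]) != "GGUF" {
		return "", fmt.Errorf("not a GGUF file")
	}

	readString := func() (string, error) {
		var n uint64
		if err := binary.Read(f, binary.LittleEndian, &n); err != nil {
			return "", err
		}
		buf := make([]byte, n)
		if _, err := io.ReadFull(f, buf); err != nil {
			return "", err
		}
		return string(buf), nil
	}

	key, err := readString()
	if err != nil {
		return "", err
	}
	var vtype uint32 // 8 == string in the GGUF value-type enum
	if err := binary.Read(f, binary.LittleEndian, &vtype); err != nil {
		return "", err
	}
	if key != "general.architecture" || vtype != 8 {
		return "", fmt.Errorf("first key is %q (type %d), not general.architecture", key, vtype)
	}
	return readString()
}

func main() {
	arch, err := readArchitecture(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(arch) // e.g. "qwen35moe"
}
```

Run against a blob like the one above, it should print `qwen35moe`, matching the string in the llama.cpp error.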


@partizanna commented on GitHub (Feb 28, 2026):

Same issue for me even at ollama 0.17.4:

**ollama run --verbose hf.co/unsloth/Qwen3.5-27B-GGUF:Q3_K_M:**
`llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'`

**ollama run --verbose hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-IQ3_XXS:**
`llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'`


@chigkim commented on GitHub (Feb 28, 2026):

Same here.
I tried to import a quant from [bartowski/Qwen_Qwen3.5-35B-A3B-GGUF](https://huggingface.co/bartowski/Qwen_Qwen3.5-35B-A3B-GGUF).
However, I get this: `llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'`

Ollama version

v0.17.4

Relevant log output

```shell
[GIN] 2026/02/28 - 10:21:32 | 200 |      51.375µs |       127.0.0.1 | HEAD     "/"
time=2026-02-28T10:21:32.624-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.625-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/02/28 - 10:21:32 | 200 |  148.247375ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-28T10:21:32.731-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.732-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/02/28 - 10:21:32 | 200 |  101.791625ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-28T10:21:32.857-05:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=458ns
time=2026-02-28T10:21:32.869-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.869-05:00 level=DEBUG source=sched.go:258 msg="loading first model" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a
time=2026-02-28T10:21:32.901-05:00 level=DEBUG source=server.go:156 msg="model not yet supported by Ollama engine, switching to compatibility mode" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a error="split vision models aren't supported"
llama_model_load_from_file_impl: using device Metal (Apple M3 Max) (unknown id) - 59390 MiB free
llama_model_loader: loaded meta data with 47 key-value pairs and 733 tensors from /Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Qwen3.5 35B A3B
llama_model_loader: - kv   6:                           general.basename str              = Qwen3.5
llama_model_loader: - kv   7:                         general.size_label str              = 35B-A3B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  10:                               general.tags arr[str,1]       = ["image-text-to-text"]
llama_model_loader: - kv  11:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  12:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  13:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  14:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv  15:          qwen35moe.attention.head_count_kv u32              = 2
llama_model_loader: - kv  16:          qwen35moe.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
llama_model_loader: - kv  17:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  18: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  19:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  20:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  21:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv  22:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv  23:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  24: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  25:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  26:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  27:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  28:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  29:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  30:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  31:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  37:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  38:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  39:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  40:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  41:               general.quantization_version u32              = 2
llama_model_loader: - kv  42:                          general.file_type u32              = 18
llama_model_loader: - kv  43:                      quantize.imatrix.file str              = /models_out/Qwen3.5-35B-A3B-GGUF/Qwen...
llama_model_loader: - kv  44:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav5.txt
llama_model_loader: - kv  45:             quantize.imatrix.entries_count u32              = 510
llama_model_loader: - kv  46:              quantize.imatrix.chunks_count u32              = 802
llama_model_loader: - type  f32:  301 tensors
llama_model_loader: - type q8_0:  162 tensors
llama_model_loader: - type q6_K:  270 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 26.92 GiB (6.67 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-28T10:21:32.986-05:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a error="unable to load model: /Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a"
[GIN] 2026/02/28 - 10:21:32 | 500 |  249.903625ms |       127.0.0.1 | POST     "/api/generate"
```

@jadams777 commented on GitHub (Feb 28, 2026):

```shell
C:\Users\dude>ollama run hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_M
Error: 500 Internal Server Error: unable to load model: C:\Users\dude\.ollama\models\blobs\sha256-223138866b87b12e68ffb43a1d45afb572921e9cd4c594e6a736df94c5130466
```

My setup:
- Ollama 0.17.4 for Windows
- CPU backend
- 32GB RAM


@chr0n1x commented on GitHub (Mar 1, 2026):

@jmorganca hey, sorry but why did your PR for qwen3next close this issue? afaik, qwen35 and qwen35moe are not supported.

I just tried the 0.17.5 container, and assuming your changes are in that tag, I'm still getting the errors when trying to run the unsloth GGUF variants for qwen3.5.


@cipriancraciun commented on GitHub (Mar 1, 2026):

I've managed to get both Qwen 3.5 35B-A3B and 27B working on Ollama, in FP16 and Q8_0 quantizations, by using the following procedure:

  • clone the upstream Qwen 3.5 repository from HuggingFace (i.e. https://huggingface.co/Qwen/Qwen3.5-35B-A3B) by using `git clone https://huggingface.co/Qwen/Qwen3.5-35B-A3B`;
  • create an Ollama model file (see below);
  • import the model in FP16 by using `ollama create --file ./qwen-3.5-35b-a3b-fp16.modelfile qwen-3.5:35b-a3b-f16` (this imports from the safetensors checkout);
  • import (and quantize) the model in Q8_0 by using `ollama create --file ./qwen-3.5-35b-a3b-q8_0.modelfile --quantize q8_0 qwen-3.5:35b-a3b-q8_0`.

The `qwen-3.5-35b-a3b-fp16.modelfile` model file:

```
FROM /.../absolute-path/.../Qwen3.5-35B-A3B

PARAMETER num_ctx 262144
PARAMETER num_predict -1
PARAMETER temperature 0.70
PARAMETER top_k 20
PARAMETER top_p 0.80
PARAMETER min_p 0.00
PARAMETER presence_penalty 1.50
PARAMETER repeat_penalty 1.00
PARAMETER repeat_last_n 0
PARAMETER stop "<|im_end|>"

SYSTEM ""

TEMPLATE """
{{- "<|im_start|>system\n" -}}
{{ .System }}
{{- "<|im_end|>\n" -}}
{{- "<|im_start|>user\n" -}}
{{ .Prompt }}
{{- "<|im_end|>\n" -}}
{{- "<|im_start|>assistant\n" -}}
{{- "<think>\n\n</think>\n\n" -}}
"""
```

The `qwen-3.5-35b-a3b-q8_0.modelfile` model file (only the `FROM` differs):

```
FROM qwen-3.5:35b-a3b-f16

PARAMETER num_ctx 262144
PARAMETER num_predict -1
PARAMETER temperature 0.70
PARAMETER top_k 20
PARAMETER top_p 0.80
PARAMETER min_p 0.00
PARAMETER presence_penalty 1.50
PARAMETER repeat_penalty 1.00
PARAMETER repeat_last_n 0
PARAMETER stop "<|im_end|>"

SYSTEM ""

TEMPLATE """
{{- "<|im_start|>system\n" -}}
{{ .System }}
{{- "<|im_end|>\n" -}}
{{- "<|im_start|>user\n" -}}
{{ .Prompt }}
{{- "<|im_end|>\n" -}}
{{- "<|im_start|>assistant\n" -}}
{{- "<think>\n\n</think>\n\n" -}}
"""
```

Note that I've used an extremely simplified `TEMPLATE`. If you want the full capability, perhaps try the model file without the `TEMPLATE` and `SYSTEM`.

Hint: to experiment with `TEMPLATE`, you don't need to requantize everything. You can just use a `FROM` line with the same model name you are importing. That way Ollama reuses the layers and only changes the manifest, finishing in a matter of seconds.


@rick-github commented on GitHub (Mar 1, 2026):

@chr0n1x Which model? Models imported from HF usually come in split mode (separate text and vision GGUFs), which is not supported by the ollama engine, so ollama tries to fall back to the llama.cpp engine, which hasn't merged qwen35 support yet (#14134).

@cipriancraciun Use RENDERER/PARSER instead of TEMPLATE.


@chigkim commented on GitHub (Mar 1, 2026):

> I've managed to get both Qwen 3.5 35B-A3B and 27B working on Ollama, in FP16 and Q8_0 quantizations, by using the following procedure:

Yes, I ended up doing this too, but `ollama create` from safetensors has a lot of limitations:

  1. You have to download a much bigger model.
  2. You need a lot of memory even with `GOMAXPROCS=1`, so it might not even be possible on your machine. I have a Mac with 64GB, and I have to kill absolutely everything before running `ollama create`. Otherwise it goes OOM and crashes during model conversion.
  3. `ollama create -q` supports only a couple of quants when you import from safetensors.
  4. No imatrix quants.

@chigkim commented on GitHub (Mar 1, 2026):

> @chr0n1x Which model? Models imported from HF usually come in split mode (separate text and vision GGUFs), which is not supported by the ollama engine, so ollama tries to fall back to the llama.cpp engine, which hasn't merged qwen35 support yet (#14134).

Does that mean if you import from GGUF, it'll use llama.cpp instead of the new engine?
When importing from GGUF format, is it possible to make `ollama create` produce the format that the new Ollama engine needs?


@rick-github commented on GitHub (Mar 1, 2026):

> Does that mean if you import from GGUF, it'll use llama.cpp instead of the new engine?

If the model is split (separate text and vision GGUFs) ollama will try to run the model using the llama.cpp engine. If the llama.cpp engine doesn't support the model architecture, the model load will fail.

> When importing from GGUF format, is it possible to make `ollama create` produce the format that the new Ollama engine needs?

To use the new Ollama engine with multi-modal models, the import must be done from safetensors.
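One way to check whether a local model is split before trying to run it is to inspect its manifest layers. A sketch, assuming the media type `application/vnd.ollama.image.projector` marks the vision GGUF (my reading of ollama's manifest convention; verify against your own files):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// manifest models just the fields we need from an ollama model
// manifest (found under ~/.ollama/models/manifests/...).
type manifest struct {
	Layers []struct {
		MediaType string `json:"mediaType"`
		Digest    string `json:"digest"`
	} `json:"layers"`
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // path to a manifest file
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var m manifest
	if err := json.Unmarshal(data, &m); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	split := false
	for _, l := range m.Layers {
		// Assumed media type for the vision GGUF layer.
		if l.MediaType == "application/vnd.ollama.image.projector" {
			split = true
			fmt.Println("vision projector layer:", l.Digest)
		}
	}
	if !split {
		fmt.Println("no projector layer found: fused model, ollama engine eligible")
	}
}
```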


@chigkim commented on GitHub (Mar 1, 2026):

Thanks @rick-github for the info.
In addition to specifying `GOMAXPROCS=1`, are there more ways to reduce memory consumption when importing from safetensors?
On a 64GB Mac with 58 of 64GB allocated to the GPU, `ollama create` often crashes during model conversion.
Do all layers have to be converted at once? If not, it would be great if Ollama had an option to convert in chunks.
Also, can we have more quant formats when importing from safetensors? I think it only supports q4, q8, and f16 now?
Thanks so much!


@chr0n1x commented on GitHub (Mar 1, 2026):

@rick-github `hf.co/unsloth/Qwen3.5-27B-GGUF:UD-Q5_K_XL`


@rick-github commented on GitHub (Mar 1, 2026):

@chigkim As far as I know there's no mechanism for segmenting the model processing other than reducing the number of goroutines. Note that the conversion does not use the GPU, so the RAM used is the 6G of system RAM. I don't know how simple it is to repartition the GPU/system split; if it's not convenient then I think your only recourse is swap. The ollama quantizer requires an f16/f32 source and only targets q4_K_M and q8_0.

@chr0n1x This model contains a vision projector, so ollama tries to run it with llama.cpp, but the llama.cpp backend doesn't currently support qwen35.


@RangerMauve commented on GitHub (Mar 2, 2026):

@rick-github The QwenLM repo says that llama.cpp supports Qwen 3.5, though? https://github.com/QwenLM/Qwen3.5?tab=readme-ov-file#llamacpp

Is it just that the version within ollama is out of date?


@rick-github commented on GitHub (Mar 2, 2026):

Ollama has its own implementation of qwen35/qwen35moe. As of 0.17.5, ollama supports the text GGUF of versions of qwen3.5 quantized for llama.cpp. For models that have both text and vision GGUFs, support requires a vendor sync, #14134.


@chr0n1x commented on GitHub (Mar 2, 2026):

I'm still confused as to why this issue is closed, because all versions of qwen3.5 from HuggingFace (MoE or not) still do not work with ollama 0.17.5.

I'm still getting `unknown architecture qwen35` or `qwen35moe` for the HF models. I suggest that this issue be reopened and left for #14134 to resolve.


@rick-github commented on GitHub (Mar 2, 2026):

qwen35 models from HF will work if the vision GGUF is removed. Note that HF models usually don't have the correct template either, so re-use the one from the ollama library.

```console
$ ollama pull hf.co/unsloth/Qwen3.5-27B-GGUF:UD-Q5_K_XL
$ ollama show --modelfile hf.co/unsloth/Qwen3.5-27B-GGUF:UD-Q5_K_XL | grep 423ebe477 > Modelfile
$ ollama show --modelfile qwen3.5:27b | grep -v FROM >> Modelfile
$ ollama create hf.co/unsloth/Qwen3.5-27B-blind-GGUF:UD-Q5_K_XL
```

```console
$ ollama run hf.co/unsloth/Qwen3.5-27B-blind-GGUF:UD-Q5_K_XL
>>> hello
Thinking...
Thinking Process:

1.  **Analyze the Input:**
    *   Input: "hello"
    *   Language: English (implied)
    *   Intent: Greeting, starting a conversation.
    *   Tone: Casual, friendly.

2.  **Determine the Appropriate Response:**
    *   Acknowledge the greeting.
    *   Offer assistance.
    *   Keep it friendly and open-ended.
    *   Avoid being overly robotic or overly verbose.

3.  **Drafting Options:**
    *   Option 1: "Hello! How can I help you today?" (Standard, safe)
    *   Option 2: "Hi there! What's on your mind?" (Friendly)
    *   Option 3: "Hello! Welcome. Do you have any questions?" (Polite)
    *   Option 4: "Hey! How's it going?" (Casual)

4.  **Selecting the Best Option:**
    *   Option 1 or 2 is usually best for an AI assistant. I'll go with a blend of friendly and helpful.

5.  **Final Polish:**
    *   "Hello! How can I assist you today?" or "Hi there! How's it going? Is there anything I can help you with?"

6.  **Final Decision:** "Hello! How can I help you today?" (Simple, direct, helpful). Or maybe a bit warmer: "Hello! 👋 How's your day going? Is there 
anything I can help you with?"

    *   Let's keep it concise but friendly.
    *   "Hello! How can I help you today?"

    *   Wait, I should consider if there's any specific context. No context provided. Just a greeting.
    *   Let's add an emoji for warmth.

    *   Revised: "Hello! 👋 How's it going? Is there anything I can help you with today?"

7.  **Output Generation:** (Matching the selected draft)
    "Hello! 👋 How's it going? Is there anything I can help you with today?"

    *   Actually, let's keep it slightly more professional but still friendly.
    *   "Hello! How can I assist you today?"

    *   Let's go with a balanced approach.
    *   "Hello! 👋 How can I help you today?"

8.  **Final Check:** Does this violate any policies? No. Is it responsive? Yes.

    *   Okay, let's output.cw
...done thinking.

Hello! 👋 How's it going? Is there anything I can help you with today?


>>> describe this image: ./image1.jpg
Thinking...
Okay, the user is asking me to describe an image called "./image1.jpg". Let me think about how to handle this.

First, I need to check if the image is actually provided. Wait, in the current conversation, the user mentioned the image path, but I don't have the 
ability to access external files or images. My capabilities are text-based, and I can't process or view images directly. 

Hmm, maybe the user assumes I can see the image, but since I'm a text-based AI, I can't. I should inform them that I can't view images and ask them to 
describe it instead. Let me make sure I'm clear about my limitations here. 

Wait, but sometimes users might upload images in certain interfaces. However, in this case, since it's a text-based conversation, there's no image 
attached. The path "./image1.jpg" is a local file path, which I can't access. So I need to let the user know that I can't process the image and 
suggest they describe it or provide more details. 

I should respond politely, explaining that I can't view the image but would be happy to help if they describe it or give more information. That way, 
they know the limitation and can adjust their request accordingly. Let me phrase that clearly and helpfully.
...done thinking.

I can't view or process images directly, but I'd be happy to help if you can describe the image! For example:  
- What objects, people, or scenes are in it?  
- Any text, colors, or notable details?  
- What would you like to know about it?  

Let me know how I can assist! 🌟


>>> /set nothink
Set 'nothink' mode.
>>> hello
Hello again! 👋 Is there something specific you'd like to chat about, ask, or explore today?
```

```console
NAME                                               ID              SIZE     PROCESSOR    CONTEXT    UNTIL
hf.co/unsloth/Qwen3.5-27B-blind-GGUF:UD-Q5_K_XL    de2230820b82    42 GB    100% GPU     262144     Forever
```

@dexogen commented on GitHub (Mar 3, 2026):

The same for `hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-IQ4_NL`:

```bash
ollama show --modelfile hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-IQ4_NL | awk '/^FROM/ {print $2}' | xargs -r du -h
17G     /root/.ollama/models/blobs/sha256-6aec31ed654fbf12f9950c98bab0a09faea433ffdc4b3acfeb55a1f050e3e444
862M    /root/.ollama/models/blobs/sha256-abe81a7212be307a7723ab47a51a87e5c46d0622273ccb04a6a6feba18b21d63
```

Now take only the large file and substitute it into the original modelfile from qwen3.5:35b. It should look like this:

```bash
cat <<'EOF' > Modelfile
FROM /root/.ollama/models/blobs/sha256-6aec31ed654fbf12f9950c98bab0a09faea433ffdc4b3acfeb55a1f050e3e444

TEMPLATE {{ .Prompt }}
RENDERER qwen3.5
PARSER qwen3.5

PARAMETER temperature 1.0
PARAMETER top_p 0.95
PARAMETER top_k 20
PARAMETER min_p 0.0
PARAMETER presence_penalty 1.5
PARAMETER repeat_penalty 1.0
EOF
```

Build it and you are ready:

```bash
ollama create Qwen3.5-35B-A3B-GGUF:UD-IQ4_NL
```

@chigkim commented on GitHub (Mar 3, 2026):

@rick-github, just double checking...
Am I understanding correctly that you have to either use what's in the Ollama library or import from safetensors if you want vision capability for qwen3.5 models?
That also means you can only use q4_K_M and q8_0 for vision capability.
If this is true, that seems incredibly limiting! :(
Hope I'm wrong!


@rick-github commented on GitHub (Mar 3, 2026):

The ollama engine only supports tensors of f32/f16/bf16/q8/q6/q4. The llama.cpp engine supports a wider range of tensor quants.

A multi-modal model running on the ollama engine is imported from safetensors and quantized to a single GGUF file containing quants as above. A multi-modal model running on the llama.cpp engine is imported from safetensors and quantized to more datatypes, but the weights are split into two files, text and vision. In both cases, the vision component remains in f16/f32, so there is no loss of vision perception in either ollama or llama.cpp quantization.

However, split models will only run on the llama.cpp engine. To use a multi-modal model with text quants that are not supported by ollama, the llama.cpp engine needs to support the architecture. Currently the llama.cpp engine in ollama does not support qwen35/qwen35moe, so ollama will not run split-model versions of qwen3.5. When #14134 is merged, ollama will support both fused models (models quantized by ollama from safetensors) and split models (models quantized by llama.cpp from safetensors).
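For reference, that support matrix can be summarized in code. A hypothetical helper whose type lists come straight from this comment, not from any ollama API:

```go
package main

import "fmt"

// ollamaEngineSupports mirrors the tensor-type split described above:
// the ollama engine handles the f32/f16/bf16/q8/q6/q4 families, while
// anything else needs the llama.cpp engine.
func ollamaEngineSupports(tensorType string) bool {
	switch tensorType {
	case "F32", "F16", "BF16", "Q8_0", "Q6_K", "Q4_K":
		return true
	}
	return false
}

func main() {
	for _, t := range []string{"Q4_K", "Q5_K", "IQ4_NL"} {
		fmt.Printf("%-7s ollama engine: %v\n", t, ollamaEngineSupports(t))
	}
}
```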


@cipriancraciun commented on GitHub (Mar 4, 2026):

> The ollama engine only supports tensors of f32/f16/bf16/q8/q6/q4. The llama.cpp engine supports a wider range of tensor quants.

@rick-github How does one tell which engine a loaded model uses?

(Looking with `lsof` at various running models, some of which came from Ollama's library and some of which I've just imported from Unsloth as you noted above, they all use the same `libggml-cpu-XXX.so`.)

(Also, I was under the impression that Ollama uses the llama.cpp engine in all cases. Does Ollama have its own inference engine, different from the llama.cpp one?)


@chigkim commented on GitHub (Mar 4, 2026):

Ollama has had its own engine for a while now, since llama.cpp paused its multimodal effort.
I'm not 100% sure, but I think the models implemented in this folder are the ones that can run on the new engine:
https://github.com/ollama/ollama/tree/main/model/models


@rick-github commented on GitHub (Mar 4, 2026):

The logs will show `starting go runner` for the llama.cpp engine and `starting ollama engine` for the ollama engine. Alternatively, getting a process list (`ps wax | grep runner`) will show `runner --ollama-engine` for models using the ollama engine.
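A small sketch of automating that log check, keyed on the two marker strings quoted above (the log path is whatever your setup uses):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path to the ollama server log, passed as the first argument.
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // tolerate long log lines
	for scanner.Scan() {
		line := scanner.Text()
		switch {
		case strings.Contains(line, "starting ollama engine"):
			fmt.Println("ollama engine:", line)
		case strings.Contains(line, "starting go runner"):
			fmt.Println("llama.cpp engine:", line)
		}
	}
}
```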


@Wizeaaard commented on GitHub (Mar 26, 2026):

Yes! And here is my solution, which doesn't actually need the original Modelfile template.

```bash
ollama show --modelfile <model name> | cat > /root/Modelfile
nano /root/Modelfile
# There will be 2 lines of "FROM /root/.ollama/models/blobs/....",
# just add "# " in front of the 2nd "FROM"
ollama create <new name> /root/Modelfile
```

and in my situation, the model is "hf.co/Jackrong/Qwen3.5-4B-Neo-GGUF:Q4_K_M"


@Fmstrat commented on GitHub (Apr 1, 2026):

Hi all,

I have tried this method, and while the model loads and can answer prompts in OpenWebUI, when I try to use it with Continue.dev, I get:

"registry.ollama.ai/library/custom-qwen3.5:latest does not support tools"

Running the same models (MLX versions) in LMS on my Macmini works fine. What do I need to do to allow the newly created ollama version to support tool calls?


@rick-github commented on GitHub (Apr 1, 2026):

What method?


@Zard-void commented on GitHub (Apr 4, 2026):

> Yes! And here is my solution, which doesn't actually need the original Modelfile template.
>
> ```bash
> ollama show --modelfile <model name> | cat > /root/Modelfile
> nano /root/Modelfile
> # There will be 2 lines of "FROM /root/.ollama/models/blobs/....",
> # just add "# " in front of the 2nd "FROM"
> ollama create <new name> /root/Modelfile
> ```
>
> and in my situation, the model is "hf.co/Jackrong/Qwen3.5-4B-Neo-GGUF:Q4_K_M"

I had to log in just to give you a thumbs up.

This works!


@Fmstrat commented on GitHub (Apr 4, 2026):

> What method?

Commenting out the vision hash to get it running in Ollama.


@rick-github commented on GitHub (Apr 4, 2026):

Contents of Modelfile?


@Phobos-7 commented on GitHub (Apr 5, 2026):

> Yes! And here is my solution, which doesn't actually need the original Modelfile template.
>
> ```bash
> ollama show --modelfile <model name> | cat > /root/Modelfile
> nano /root/Modelfile
> # There will be 2 lines of "FROM /root/.ollama/models/blobs/....",
> # just add "# " in front of the 2nd "FROM"
> ollama create <new name> /root/Modelfile
> ```
>
> and in my situation, the model is "hf.co/Jackrong/Qwen3.5-4B-Neo-GGUF:Q4_K_M"

It should be:

```bash
ollama create -f /root/Modelfile <new name>
```


Reference: github-starred/ollama#55922