[GH-ISSUE #14524] Unknown model architecture: 'qwen35' when loading bazobehram/qwen3.5-flash-27b #71483

Closed
opened 2026-05-05 01:53:06 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @sanel on GitHub (Mar 1, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14524

What is the issue?

Hi guys, I'm getting this error when trying to load the bazobehram/qwen3.5-flash-27b and sparksammy/qwen3.5-27b-unsloth:small-hotfixed models via:

ollama run bazobehram/qwen3.5-flash-27b

# or
ollama run sparksammy/qwen3.5-27b-unsloth:small-hotfixed

Any idea what is going on? The GPU is an NVIDIA RTX 4000 Ada Generation.

Relevant log output

time=2026-03-01T09:56:41.919Z level=INFO source=server.go:431 msg="starting runner" cmd="/opt/ollama/bin/ollama runner --ollama-engine --port 36945"
 llama_model_loader: loaded meta data with 39 key-value pairs and 851 tensors from /var/lib/ollama/.ollama/models/blobs/sha256-d4d089fbfa2a2ef034faa5c99a1743523ce69a18c562f7de0>
 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
 llama_model_loader: - kv   0:                       general.architecture str              = qwen35
 llama_model_loader: - kv   1:                               general.type str              = model
 llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
 llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
 llama_model_loader: - kv   4:                      general.sampling.temp f32              = 0.600000
 llama_model_loader: - kv   5:                               general.name str              = Hf
 llama_model_loader: - kv   6:                         general.size_label str              = 27B
 llama_model_loader: - kv   7:                            general.license str              = apache-2.0
 llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-2...
 llama_model_loader: - kv   9:                               general.tags arr[str,1]       = ["image-text-to-text"]
 llama_model_loader: - kv  10:                         qwen35.block_count u32              = 64
 llama_model_loader: - kv  11:                      qwen35.context_length u32              = 262144
 llama_model_loader: - kv  12:                    qwen35.embedding_length u32              = 5120
 llama_model_loader: - kv  13:                 qwen35.feed_forward_length u32              = 17408
 llama_model_loader: - kv  14:                qwen35.attention.head_count u32              = 24
 llama_model_loader: - kv  15:             qwen35.attention.head_count_kv u32              = 4
 llama_model_loader: - kv  16:             qwen35.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
 llama_model_loader: - kv  17:                      qwen35.rope.freq_base f32              = 10000000.000000
 llama_model_loader: - kv  18:    qwen35.attention.layer_norm_rms_epsilon f32              = 0.000001
 llama_model_loader: - kv  19:                qwen35.attention.key_length u32              = 256
 llama_model_loader: - kv  20:              qwen35.attention.value_length u32              = 256
 llama_model_loader: - kv  21:                     qwen35.ssm.conv_kernel u32              = 4
 llama_model_loader: - kv  22:                      qwen35.ssm.state_size u32              = 128
 llama_model_loader: - kv  23:                     qwen35.ssm.group_count u32              = 16
 llama_model_loader: - kv  24:                  qwen35.ssm.time_step_rank u32              = 48
 llama_model_loader: - kv  25:                      qwen35.ssm.inner_size u32              = 6144
 llama_model_loader: - kv  26:             qwen35.full_attention_interval u32              = 4
 llama_model_loader: - kv  27:                qwen35.rope.dimension_count u32              = 64
 llama_model_loader: - kv  28:                       tokenizer.ggml.model str              = gpt2
 llama_model_loader: - kv  29:                         tokenizer.ggml.pre str              = qwen35
 llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
 llama_model_loader: - kv  31:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
 llama_model_loader: - kv  32:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
 llama_model_loader: - kv  33:                tokenizer.ggml.eos_token_id u32              = 248046
 llama_model_loader: - kv  34:            tokenizer.ggml.padding_token_id u32              = 248044
 llama_model_loader: - kv  35:               tokenizer.ggml.add_bos_token bool             = false
 llama_model_loader: - kv  36:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
 llama_model_loader: - kv  37:               general.quantization_version u32              = 2
 llama_model_loader: - kv  38:                          general.file_type u32              = 15
 llama_model_loader: - type  f32:  353 tensors
 llama_model_loader: - type q4_K:  407 tensors
 llama_model_loader: - type q5_K:   48 tensors
 llama_model_loader: - type q6_K:   43 tensors
 print_info: file format = GGUF V3 (latest)
 print_info: file type   = Q4_K - Medium
 print_info: file size   = 15.39 GiB (4.92 BPW)
 llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
 llama_model_load_from_file_impl: failed to load model
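
The error is raised because the GGUF file declares general.architecture = qwen35 (kv 0 in the dump above), a value this build's loader has no graph implementation for, so it refuses the file before loading any tensors. As a minimal illustration (not Ollama's or llama.cpp's actual parser), the architecture string can be read straight out of a GGUF header; the example below builds a tiny synthetic GGUF v3 blob with that single key-value pair and extracts it:

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # string type id in the GGUF value-type enum

def _pack_str(s: bytes) -> bytes:
    # GGUF strings are a uint64 length followed by raw bytes
    return struct.pack("<Q", len(s)) + s

def read_architecture(blob: bytes) -> str:
    """Scan GGUF header key-value pairs for general.architecture.

    Sketch only: handles just string-valued keys, which is enough
    because general.architecture is by convention the first kv pair.
    """
    assert blob[:4] == GGUF_MAGIC, "not a GGUF file"
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", blob, 4)
    off = 4 + 4 + 8 + 8  # magic + version + tensor_count + kv_count
    for _ in range(kv_count):
        (klen,) = struct.unpack_from("<Q", blob, off); off += 8
        key = blob[off:off + klen].decode(); off += klen
        (vtype,) = struct.unpack_from("<I", blob, off); off += 4
        if vtype != GGUF_TYPE_STRING:
            raise ValueError(f"only string values handled here (got type {vtype})")
        (vlen,) = struct.unpack_from("<Q", blob, off); off += 8
        val = blob[off:off + vlen].decode(); off += vlen
        if key == "general.architecture":
            return val
    raise KeyError("general.architecture not found")

# Synthetic GGUF v3 header: 0 tensors, 1 kv pair, mimicking the log above.
blob = (GGUF_MAGIC
        + struct.pack("<IQQ", 3, 0, 1)
        + _pack_str(b"general.architecture")
        + struct.pack("<I", GGUF_TYPE_STRING)
        + _pack_str(b"qwen35"))

print(read_architecture(blob))  # → qwen35
```

A loader that supports the architecture matches this string against its table of known model graphs; 'qwen35' simply isn't in that table in Ollama 0.17.4, which typically means a newer Ollama/llama.cpp release is needed for the model family.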

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.17.4

GiteaMirror added the bug label 2026-05-05 01:53:06 -05:00
Reference: github-starred/ollama#71483