[GH-ISSUE #10454] unknown model architecture: 'qwen3' #68931

Closed
opened 2026-05-04 15:53:12 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @cam-narzt on GitHub (Apr 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10454

What is the issue?

When I try to use the qwen3 model in the latest tagged ollama Docker image, I get the error: `llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3'`.

Relevant log output

ollama            | time=2025-04-29T00:37:26.067Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
ollama            | time=2025-04-29T00:37:26.067Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
ollama            | time=2025-04-29T00:37:26.068Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
ollama            | time=2025-04-29T00:37:26.068Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
ollama            | time=2025-04-29T00:37:26.128Z level=INFO source=server.go:105 msg="system memory" total="62.0 GiB" free="55.3 GiB" free_swap="7.9 GiB"
ollama            | time=2025-04-29T00:37:26.128Z level=WARN source=ggml.go:152 msg="key not found" key=qwen3.vision.block_count default=0
ollama            | time=2025-04-29T00:37:26.129Z level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=41 layers.offload=15 layers.split="" memory.available="[11.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="20.8 GiB" memory.required.partial="11.2 GiB" memory.required.kv="6.2 GiB" memory.required.allocations="[11.2 GiB]" memory.weights.total="8.2 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="608.6 MiB" memory.graph.full="5.2 GiB" memory.graph.partial="5.2 GiB"
ollama            | llama_model_loader: loaded meta data with 27 key-value pairs and 443 tensors from /root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e (version GGUF V3 (latest))
ollama            | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
ollama            | llama_model_loader: - kv   0:                       general.architecture str              = qwen3
ollama            | llama_model_loader: - kv   1:                               general.type str              = model
ollama            | llama_model_loader: - kv   2:                               general.name str              = Qwen3 14B
ollama            | llama_model_loader: - kv   3:                           general.basename str              = Qwen3
ollama            | llama_model_loader: - kv   4:                         general.size_label str              = 14B
ollama            | llama_model_loader: - kv   5:                          qwen3.block_count u32              = 40
ollama            | llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
ollama            | llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
ollama            | llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 17408
ollama            | llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 40
ollama            | llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
ollama            | llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
ollama            | llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
ollama            | llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
ollama            | llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
ollama            | llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
ollama            | llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
ollama            | llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
ollama            | llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
ollama            | llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
ollama            | llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
ollama            | llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
ollama            | llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
ollama            | llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
ollama            | llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
ollama            | llama_model_loader: - kv  25:               general.quantization_version u32              = 2
ollama            | llama_model_loader: - kv  26:                          general.file_type u32              = 15
ollama            | llama_model_loader: - type  f32:  161 tensors
ollama            | llama_model_loader: - type  f16:   40 tensors
ollama            | llama_model_loader: - type q4_K:  221 tensors
ollama            | llama_model_loader: - type q6_K:   21 tensors
ollama            | print_info: file format = GGUF V3 (latest)
ollama            | print_info: file type   = Q4_K - Medium
ollama            | print_info: file size   = 8.63 GiB (5.02 BPW)
ollama            | llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3'
ollama            | llama_model_load_from_file_impl: failed to load model
ollama            | time=2025-04-29T00:37:26.205Z level=INFO source=sched.go:430 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e error="unable to load model: /root/.ollama/models/blobs/sha256-a8cc1361f3145dc01f6d77c6c82c9116b9ffe3c97b34716fe20418455876c40e"
ollama            | [GIN] 2025/04/29 - 00:37:26 | 500 |   258.25722ms |      172.18.0.4 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.5

GiteaMirror added the bug label 2026-05-04 15:53:12 -05:00
Author
Owner

@zhou668899 commented on GitHub (Apr 29, 2025):

the same to you

Author
Owner

@nandakho commented on GitHub (Apr 29, 2025):

Updating Ollama to 0.6.6 should fix that issue.
As stated on Qwen3's Ollama model page: [This model requires Ollama 0.6.6 or later](https://ollama.com/library/qwen3)
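For anyone hitting this, a minimal sketch of checking whether the installed version is old enough to explain the error (the `current` value below is a placeholder; substitute the output of `ollama --version`, then pull a newer `ollama/ollama` image if it is below 0.6.6):

```shell
# Compare the installed Ollama version against the minimum Qwen3 requires.
required="0.6.6"
current="0.6.5"   # placeholder: substitute the output of `ollama --version`

# sort -V sorts version strings numerically; if the smallest of the two
# is NOT the required version, the installed version is older than it.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" != "$required" ]; then
  echo "upgrade needed"
else
  echo "version ok"
fi
```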

Author
Owner

@pdevine commented on GitHub (Apr 29, 2025):

We're adding a check for this which will tell you to upgrade when you run the model. I'll go ahead and close the issue.

cc @BruceMacD


Reference: github-starred/ollama#68931