[GH-ISSUE #11084] Support importing Qwen3 #33071

Open
opened 2026-04-22 15:16:48 -05:00 by GiteaMirror · 6 comments

Originally created by @jwangkun on GitHub (Jun 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11084

What is the issue?

```shell
Copying file sha256:8c145d34bb1e0d1d9a76fe06958c7293fd8426b3ba3ced3502b3d26861ee6957 100%
converting model
Error: unsupported architecture "Qwen3ForCausalLM"
(base) a123@jwangkun DeepSeek-R1-Qwen3-8B-HngTrust-Fin-0616 % ollama -v
ollama version is 0.9.0
(base) a123@jwangkun DeepSeek-R1-Qwen3-8B-HngTrust-Fin-0616 %
```

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the feature request label 2026-04-22 15:16:48 -05:00

@rick-github commented on GitHub (Jun 16, 2025):

The ollama import function only supports a subset of architectures. For unsupported models, you can use [llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/convert_hf_to_gguf_update.py) to convert to GGUF and then import that.
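As a sketch of that workaround (the model path, output filename, model name, and the `q8_0` quantization choice below are all placeholders, not values from this issue), converting with llama.cpp's `convert_hf_to_gguf.py` and then importing the resulting GGUF might look like:

```shell
# Clone llama.cpp and install the converter's Python dependencies
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint directory to a single GGUF file
# (/path/to/DeepSeek-R1-Qwen3-8B is a placeholder for the local model dir)
python llama.cpp/convert_hf_to_gguf.py /path/to/DeepSeek-R1-Qwen3-8B \
    --outfile deepseek-r1-qwen3-8b.gguf --outtype q8_0

# Point a minimal Modelfile at the GGUF and import it into ollama
printf 'FROM ./deepseek-r1-qwen3-8b.gguf\n' > Modelfile
ollama create deepseek-r1-qwen3-8b -f Modelfile
```

Since importing a GGUF bypasses ollama's own safetensors converter, this route works regardless of whether ollama recognizes the `Qwen3ForCausalLM` architecture, as long as llama.cpp's converter supports it.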


@Kevin-v92 commented on GitHub (Jun 17, 2025):

> The ollama import function only supports a subset of architectures. For unsupported models, you can use llama.cpp to convert to GGUF and then import that.

When will it be supported? Thanks!


@aaronpliu commented on GitHub (Jun 17, 2025):

Same problem here with ollama: Error: unsupported architecture "Qwen3ForCausalLM"


@rick-github commented on GitHub (Jun 17, 2025):

@aaronpliu https://github.com/ollama/ollama/issues/11084#issuecomment-2975840729


@Kevin-v92 commented on GitHub (Sep 5, 2025):

When will this be fixed?


@wangwangquan98 commented on GitHub (Sep 28, 2025):

> The ollama import function only supports a subset of architectures. For unsupported models, you can use llama.cpp to convert to GGUF and then import that.

When will it be supported, please?

Reference: github-starred/ollama#33071