[GH-ISSUE #10602] Error converting from safetensors: unsupported architecture "Qwen3ForCausalLM" #53486

Closed
opened 2026-04-29 03:22:42 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @ArchCangyuan on GitHub (May 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10602

What is the issue?

When importing safetensors of a Qwen3 model, the log reports msg="error converting from safetensors" error="unsupported architecture \"Qwen3ForCausalLM\"".

Relevant log output

level=ERROR source=create.go:162 msg="error converting from safetensors" error="unsupported architecture \"Qwen3ForCausalLM\""

OS

No response

GPU

No response

CPU

No response

Ollama version

0.6.8

GiteaMirror added the bug label 2026-04-29 03:22:42 -05:00
Author
Owner

@c5g2 commented on GitHub (May 7, 2025):

I had the same problem using 1.7B

Author
Owner

@rick-github commented on GitHub (May 7, 2025):

The ollama import function only supports a subset of architectures. For unsupported models, you can use [llama.cpp](https://github.com/ggml-org/llama.cpp/blob/master/convert_hf_to_gguf_update.py) to convert to GGUF and then import that.
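For reference, a rough sketch of that workflow. All paths and the model name (`qwen3-local`) are placeholders, and the conversion script name can differ between llama.cpp versions:

```shell
# Clone llama.cpp for its HF-to-GGUF conversion script
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the safetensors checkpoint (a directory containing config.json,
# the tokenizer files, and the *.safetensors shards) to a single GGUF file
python llama.cpp/convert_hf_to_gguf.py /path/to/Qwen3-model \
    --outfile qwen3.gguf --outtype f16

# Write a minimal Modelfile pointing at the GGUF and import it into ollama
echo 'FROM ./qwen3.gguf' > Modelfile
ollama create qwen3-local -f Modelfile
ollama run qwen3-local
```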

Author
Owner

@ArchCangyuan commented on GitHub (May 7, 2025):

> The ollama import function only supports a subset of architectures. For unsupported models, you can use llama.cpp to convert to GGUF and then import that.

I also tried the HF-to-GGUF tool from the llama.cpp project. The import process seemed fine, but I got `panic: interface conversion: interface {} is nil, not *ggml.array[string]` when calling the imported model.

Author
Owner

@galoisgroupcn commented on GitHub (May 10, 2025):

@ArchCangyuan @c5g2 @rick-github here's our fix; let us know if you have any feedback: https://github.com/ollama/ollama/pull/10644

Author
Owner

@rick-github commented on GitHub (May 29, 2025):

#9984 for the panic.


Reference: github-starred/ollama#53486