[GH-ISSUE #13194] Error: 500 Internal Server Error: unable to load model #55236

Closed
opened 2026-04-29 08:35:14 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @SergioInToronto on GitHub (Nov 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13194

What is the issue?

  1. Pull the model from Hugging Face.
  2. Run it; it crashes:

```bash
$ ollama run --verbose hf.co/QuantStack/Qwen-Image-Edit-GGUF:Q2_K
Error: 500 Internal Server Error: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-a3f1680339685f558cbdf0254684e3529aab52b7e37aa8055eed0e8844a2b304
```

This line from the logs looks like the root cause. Is the problem the model itself?

```
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen_image'
```
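For anyone who wants to confirm what architecture a GGUF blob declares (to distinguish "unsupported architecture" from a corrupt download), the string lives in the `general.architecture` metadata key at the front of the file. Below is a minimal sketch, assuming the standard little-endian GGUF v3 layout; it only walks leading string-valued keys, which is where `general.architecture` normally sits, and is not a full GGUF parser:

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # in the GGUF spec, value type 8 is a UTF-8 string


def read_architecture(data: bytes):
    """Scan the leading key/value section of a GGUF blob and return
    the value of general.architecture, or None if not found among
    the leading string-valued keys."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # header after the magic: u32 version, u64 tensor_count, u64 kv_count
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    off = 4 + 4 + 8 + 8
    for _ in range(kv_count):
        (klen,) = struct.unpack_from("<Q", data, off); off += 8
        key = data[off:off + klen].decode(); off += klen
        (vtype,) = struct.unpack_from("<I", data, off); off += 4
        if vtype != GGUF_TYPE_STRING:
            # this sketch only decodes string values; bail out at the
            # first non-string KV instead of handling every value type
            return None
        (vlen,) = struct.unpack_from("<Q", data, off); off += 8
        value = data[off:off + vlen].decode(); off += vlen
        if key == "general.architecture":
            return value
    return None
```

Pointed at the first few KiB of the blob path from the error message (the metadata sits at the start of the file, so a partial read is usually enough for the leading keys), this would report `qwen_image` here, matching the loader's complaint.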

Relevant log output

```shell
Nov 21 07:52:40 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:40 | 200 |      13.022µs |       127.0.0.1 | HEAD     "/"
Nov 21 07:52:40 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:40 | 200 |       9.062µs |       127.0.0.1 | GET      "/api/ps"
Nov 21 07:52:42 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:42 | 200 |      13.764µs |       127.0.0.1 | HEAD     "/"
Nov 21 07:52:42 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:42 | 200 |    8.784425ms |       127.0.0.1 | GET      "/api/tags"
Nov 21 07:52:50 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:50 | 200 |      18.829µs |       127.0.0.1 | HEAD     "/"
Nov 21 07:52:50 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:50 | 200 |    3.954528ms |       127.0.0.1 | POST     "/api/show"
Nov 21 07:52:50 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:50 | 200 |    2.457067ms |       127.0.0.1 | POST     "/api/show"
Nov 21 07:52:50 RogTop ollama[5985]: time=2025-11-21T07:52:50.075-05:00 level=INFO source=server.go:392 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 43809"
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: loaded meta data with 3 key-value pairs and 1933 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-a3f1680339685f558cbdf0254684e3529aab52b7e37aa8055eed0e8844a2b304 (version GGUF V3 (latest))
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - kv   0:                       general.architecture str              = qwen_image
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - kv   1:               general.quantization_version u32              = 2
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - kv   2:                          general.file_type u32              = 10
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - type  f32: 1087 tensors
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - type q2_K:  696 tensors
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - type q3_K:  116 tensors
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - type q4_K:   28 tensors
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_loader: - type bf16:    6 tensors
Nov 21 07:52:50 RogTop ollama[5985]: print_info: file format = GGUF V3 (latest)
Nov 21 07:52:50 RogTop ollama[5985]: print_info: file type   = Q2_K - Medium
Nov 21 07:52:50 RogTop ollama[5985]: print_info: file size   = 6.58 GiB (2.77 BPW)
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen_image'
Nov 21 07:52:50 RogTop ollama[5985]: llama_model_load_from_file_impl: failed to load model
Nov 21 07:52:50 RogTop ollama[5985]: time=2025-11-21T07:52:50.322-05:00 level=INFO source=sched.go:425 msg="NewLlamaServer failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-a3f1680339685f558cbdf0254684e3529aab52b7e37aa8055eed0e8844a2b304 error="unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-a3f1680339685f558cbdf0254684e3529aab52b7e37aa8055eed0e8844a2b304"
Nov 21 07:52:50 RogTop ollama[5985]: [GIN] 2025/11/21 - 07:52:50 | 500 |  247.160546ms |       127.0.0.1 | POST     "/api/generate"
```

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.12.11

GiteaMirror added the bug label 2026-04-29 08:35:14 -05:00
Author
Owner

@rick-github commented on GitHub (Nov 21, 2025):

qwen-image-edit is not a supported model.

Author
Owner

@SergioInToronto commented on GitHub (Nov 21, 2025):

Thanks @rick-github !

Noob question, how can I tell which models are supported?

Author
Owner

@rick-github commented on GitHub (Nov 21, 2025):

The source of truth is the ollama library: https://ollama.com/library. Models from other sources (HuggingFace, ModelScope, etc.) that are based on models already in the ollama library (finetunes, modified system messages, etc.) have a high likelihood of working with ollama. Models with a unique architecture may not be supported now, but could be at some point in the future. Also note that ollama currently supports only two modalities: text to text and image to text. Image-to-image models like qwen-image-edit require changes to the API as well as model-specific changes, so support is further away than for, say, gemma4.


Reference: github-starred/ollama#55236