[GH-ISSUE #12814] Internal Server Error when pulling qwen3-vl:30b #70550

Closed
opened 2026-05-04 21:56:46 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @deep1305 on GitHub (Oct 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12814

What is the issue?

Hi, after trying multiple times I am still hitting the same error when pulling qwen3-vl:30b-a3b-thinking and qwen3-vl:30b-a3b-instruct: a 500 Internal Server Error.

Thank you for uploading qwen3-vl models.

Relevant log output

```shell
pulling manifest
pulling b1da6f96a2e4: 100% ▕██████████████████████████████████████████████████████████▏  19 GB
pulling 7339fa418c9a: 100% ▕██████████████████████████████████████████████████████████▏  11 KB
pulling f6417cb1e269: 100% ▕██████████████████████████████████████████████████████████▏   42 B
pulling 8d38aaed9d49: 100% ▕██████████████████████████████████████████████████████████▏  558 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: unable to load model: C:\Users\smart\.ollama\models\blobs\sha256-b1da6f96a2e40e5db05b6066d799c69411225b336bfa20ef1b002c223ed4b190
```

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 21:56:46 -05:00
Author
Owner

@cdsama commented on GitHub (Oct 29, 2025):

Same issue, though with `ollama run` (not pull) on qwen3-vl:8b and qwen3-vl:32b.
I think a new version will be released later to support the new model structure:
https://github.com/ollama/ollama/commit/7d25b9e194f106e9c2a5289dfde40077c0838b7d

Try the pre-release here: https://github.com/ollama/ollama/releases/tag/v0.12.7-rc0

Tested: the latest version, 0.12.7, runs qwen3-vl correctly.

Author
Owner

@itchenfei commented on GitHub (Oct 29, 2025):

Same issue with Ollama in Docker:

```bash
root@8666e69439e3:/# ollama --version
ollama version is 0.12.6
root@8666e69439e3:/# ollama run qwen3-vl:8b
Error: 500 Internal Server Error: unable to load model: /root/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55
root@8666e69439e3:/#
```
Author
Owner

@rick-github commented on GitHub (Oct 29, 2025):

qwen3-vl requires 0.12.7 (https://github.com/ollama/ollama/releases).
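That minimum-version requirement can be checked mechanically before pulling. A minimal sketch, assuming `sort -V` (version-aware sort, available in GNU and BSD coreutils); the `installed` value here is a hard-coded placeholder for whatever `ollama --version` actually reports:

```shell
#!/bin/sh
# Sketch: compare an installed Ollama version against the minimum
# required for qwen3-vl. With `sort -V`, the first line of the sorted
# pair is the smaller version, so the install is new enough exactly
# when the minimum sorts first.
min_required="0.12.7"
installed="0.12.6"   # placeholder: in practice, parse `ollama --version`

lowest=$(printf '%s\n%s\n' "$min_required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$min_required" ]; then
  echo "OK: $installed >= $min_required"
else
  echo "Upgrade needed: $installed < $min_required"
fi
```

Note that `sort -V` compares numerically per component, so 0.12.11 correctly sorts after 0.12.7 where a plain lexicographic comparison would get it wrong.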

Author
Owner

@brianhillnovus commented on GitHub (Nov 3, 2025):

I'm getting this error on Unsloth Qwen3-VL 30B Thinking Q4_K_M. Running Ollama 12.9.

Author
Owner

@rick-github commented on GitHub (Nov 3, 2025):

https://github.com/ollama/ollama/issues/12833#issuecomment-3465792241

Author
Owner

@moontato commented on GitHub (Nov 25, 2025):

> I'm getting this error on Unsloth Qwen3-VL 30B Thinking Q4_K_M. Running Ollama 12.9.

I am getting the same issue using Ollama 0.12.11 and `Qwen3-VL-30B-A3b-Thinking-UD-Q3_K_XL`.

Author
Owner

@rick-github commented on GitHub (Nov 25, 2025):

https://github.com/ollama/ollama/issues/12814#issuecomment-3481810138

Reference: github-starred/ollama#70550