[GH-ISSUE #12850] Unable to load Qwen3-VL-32b #8515

Closed
opened 2026-04-12 21:12:28 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @Jigit-ship-it on GitHub (Oct 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12850

What is the issue?

Hi

This is the result:

PS C:\WINDOWS\system32> ollama run qwen3-vl:32b
pulling manifest
pulling 6e416d39200a: 100% ▕██████████████████████████████████████████████████████████▏ 20 GB
pulling 7339fa418c9a: 100% ▕██████████████████████████████████████████████████████████▏ 11 KB
pulling f6417cb1e269: 100% ▕██████████████████████████████████████████████████████████▏ 42 B
pulling 50fcece1bf41: 100% ▕██████████████████████████████████████████████████████████▏ 552 B
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: unable to load model: C:\Users\jitaek.jo\.ollama\models\blobs\sha256-6e416d39200aae1cec3ea197c5a5ebbaf214ccddc9561bcc0ec7157c83b2a99b

Please provide the solution

Thanks

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 21:12:28 -05:00
Author
Owner

@rick-github commented on GitHub (Oct 30, 2025):

What version of ollama are you running? ollama -v

Author
Owner

@A205-08 commented on GitHub (Oct 30, 2025):

What version of ollama are you running? ollama -v

I also encountered the same problem.
OS: Windows 11
GPU: AMD 7900XT
CPU: 13700K
Ollama version: 0.12.6

Author
Owner

@rick-github commented on GitHub (Oct 30, 2025):

qwen3-vl needs ollama 0.12.7 or later.
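The version gate above can be checked locally before pulling a new model. A minimal sketch, assuming the string printed by `ollama -v` has been extracted into a variable (the `installed` value below is a placeholder, not output from a real machine); it uses `sort -V` to compare version strings numerically:

```shell
min_required="0.12.7"
installed="0.12.6"   # substitute your own version from `ollama -v`

# `sort -V` orders version strings component-by-component; if the installed
# version sorts first and is not equal to the minimum, it is too old.
oldest=$(printf '%s\n' "$min_required" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$min_required" ]; then
  echo "too old: upgrade Ollama to $min_required or later"
else
  echo "version OK"
fi
```

With `installed="0.12.6"` this reports the version as too old, matching the failure in this issue; after upgrading to 0.12.7 the comparison passes.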

Author
Owner

@A205-08 commented on GitHub (Oct 30, 2025):

qwen3-vl needs ollama 0.12.7 or later.

Thank you, the issue was resolved after upgrading to Ollama 0.12.7.

Author
Owner

@basxto commented on GitHub (Oct 30, 2025):

It’s a good idea to read the README on the model pages; it says:

Qwen3-VL models require Ollama 0.12.7

New models often have such notes.

Author
Owner

@kiliansinger commented on GitHub (Oct 30, 2025):

I think this PR could fix it: https://github.com/ollama/ollama/pull/12856

Author
Owner

@pdevine commented on GitHub (Nov 2, 2025):

I'm going to go ahead and close the issue. We forgot to add the minimum version check when the model was initially released. It really should be part of the release checklist, though.

Reference: github-starred/ollama#8515