[GH-ISSUE #12833] QWEN3-VL-abliterated running error #8505

Closed
opened 2026-04-12 21:11:52 -05:00 by GiteaMirror · 17 comments

Originally created by @heiketu on GitHub (Oct 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12833

What is the issue?

[Screenshot: https://github.com/user-attachments/assets/63b8aa6f-8876-4422-b2e7-e124594ecc92]

Like the official variant of qwen3-vl running on a previous version of Ollama, it returns a 500 error.

Relevant log output

```shell
Error: 500 Internal Server Error: unable to load model
```

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.7-rc0

GiteaMirror added the bug label 2026-04-12 21:11:52 -05:00

@rick-github commented on GitHub (Oct 29, 2025):

[Server log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.mdx) will help in debugging.
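
For reference, the linked troubleshooting guide documents platform-specific log locations; a quick way to pull recent server logs (paths as documented there, adjust if your install differs):

```shell
# macOS: the desktop app writes logs under ~/.ollama/logs
cat ~/.ollama/logs/server.log

# Linux (systemd service)
journalctl -e -u ollama

# Windows: server.log lives under %LOCALAPPDATA%\Ollama (open it in a text editor)
```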


@geeksilva97 commented on GitHub (Oct 29, 2025):

I'm running into the same error on macOS. I wanted to provide the server logs, but they do not exist.


@rick-github commented on GitHub (Oct 29, 2025):

What's the output of

```
find ~/.ollama
```

@geeksilva97 commented on GitHub (Oct 29, 2025):

> What's the output of
>
> ```
> find ~/.ollama
> ```
```
/Users/edy/.ollama
/Users/edy/.ollama/id_ed25519
/Users/edy/.ollama/id_ed25519.pub
/Users/edy/.ollama/models
/Users/edy/.ollama/models/blobs
/Users/edy/.ollama/models/blobs/sha256-17e666fbe4f4c95d19936e9e4089c50c980df275d2937734edbe2a8e7f02eb40
/Users/edy/.ollama/models/blobs/sha256-cff3f395ef3756ab63e58b0ad1b32bb6f802905cae1472e6a12034e4246fbbdb
/Users/edy/.ollama/models/blobs/sha256-f5074b1221da0f5a2910d33b642efa5b9eb58cfdddca1c79e16d7ad28aa2b31f
/Users/edy/.ollama/models/blobs/sha256-43070e2d4e532684de521b885f385d0841030efa2b1a20bafb76133a5e1379c1
/Users/edy/.ollama/models/blobs/sha256-7c658f9561e5dbbafb042a00f6a4de57877adddd957809111f3123e272632b4d
/Users/edy/.ollama/models/blobs/sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f
/Users/edy/.ollama/models/blobs/sha256-c43332387573e98fdfad4a606171279955b53d891ba2500552c2984a6560ffb4
/Users/edy/.ollama/models/blobs/sha256-f6417cb1e26962991f8e875a93f3cb0f92bc9b4955e004881251ccbf934a19d2
/Users/edy/.ollama/models/blobs/sha256-72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
/Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55
/Users/edy/.ollama/models/blobs/sha256-b6ae5839783f2ba248e65e4b960ab15f9c4b7118db285827dba6cba9754759e2
/Users/edy/.ollama/models/blobs/sha256-e0a42594d802e5d31cdc786deb4823edb8adff66094d49de8fffe976d753e348
/Users/edy/.ollama/models/blobs/sha256-3116c52250752e00dd06b16382e952bd33c34fd79fc4fe3a5d2c77cf7de1b14b
/Users/edy/.ollama/models/blobs/sha256-05a61d37b08453e59290add468e3bb2f688e23a01e967fecb0e2fa41218cea76
/Users/edy/.ollama/models/blobs/sha256-ed11eda7790d05b49395598a42b155812b17e263214292f7b87d15e14003d337
/Users/edy/.ollama/models/blobs/sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868
/Users/edy/.ollama/models/blobs/sha256-aeda25e63ebd698fab8638ffb778e68bed908b960d39d0becc650fa981609d25
/Users/edy/.ollama/models/blobs/sha256-dd084c7d92a3c1c14cc09ae77153b903fd2024b64a100a0cc8ec9316063d2dbc
/Users/edy/.ollama/models/blobs/sha256-1ff5b64b61b9a63146475a24f70d3ca2fd6fdeec44247987163479968896fc0b
/Users/edy/.ollama/models/blobs/sha256-d18a5cc71b84bc4af394a31116bd3932b42241de70c77d2b76d69a314ec8aa12
/Users/edy/.ollama/models/blobs/sha256-ae370d884f108d16e7cc8fd5259ebc5773a0afa6e078b11f4ed7e39a27e0dfc4
/Users/edy/.ollama/models/blobs/sha256-7339fa418c9ad3e8e12e74ad0fd26a9cc4be8703f9c110728a992b193be85cb2
/Users/edy/.ollama/models/blobs/sha256-1064e17101bdd2460dd5c4e03e4f5cc1b38a4dee66084dc91faba294ccb64a92
/Users/edy/.ollama/models/manifests
/Users/edy/.ollama/models/manifests/registry.ollama.ai
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/qwen3-vl
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/qwen3-vl/latest
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/gemma3
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/gemma3/latest
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/qwen3
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/qwen3/latest
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/llava
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/llava/latest
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/mistral
/Users/edy/.ollama/models/manifests/registry.ollama.ai/library/mistral/latest
/Users/edy/.ollama/history
```

No logs 😔.

I just wanted to mention that I built it from source. I assume that shouldn't make a difference, but I'm mentioning it just in case.


@rick-github commented on GitHub (Oct 29, 2025):

> I just wanted to mention that I built it from source. It shouldn't make a difference, but I'm mentioning it just in case.

How are you starting the server?


@geeksilva97 commented on GitHub (Oct 29, 2025):

> > I just wanted to mention that I built it from source. It shouldn't make a difference, but I'm mentioning it just in case.
>
> How are you starting the server?

`./ollama serve`


@rick-github commented on GitHub (Oct 29, 2025):

The log is the console output.
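
Since a source build started in a terminal only logs to stdout/stderr, one way to keep a copy for bug reports is to mirror the console output to a file (a minimal sketch, assuming a POSIX shell):

```shell
# run the server and tee its console output to a log file
./ollama serve 2>&1 | tee server.log
```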


@geeksilva97 commented on GitHub (Oct 29, 2025):

> The log is the console output.

That makes sense. Here it is:

Client log

```shell
$ ./ollama run qwen3-vl
Error: 500 Internal Server Error: unable to load model: /Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55
```

Server log

```
llama_model_load_from_file_impl: using device Metal (Apple M3 Pro) (unknown id) - 12287 MiB free
llama_model_loader: loaded meta data with 40 key-value pairs and 858 tensors from /Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3vl
llama_model_loader: - kv   1:                          general.file_type u32              = 15
llama_model_loader: - kv   2:                    general.parameter_count u64              = 8767123696
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:               qwen3vl.attention.head_count u32              = 32
llama_model_loader: - kv   5:            qwen3vl.attention.head_count_kv u32              = 8
llama_model_loader: - kv   6:               qwen3vl.attention.key_length u32              = 128
llama_model_loader: - kv   7:   qwen3vl.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   8:             qwen3vl.attention.value_length u32              = 128
llama_model_loader: - kv   9:                        qwen3vl.block_count u32              = 36
llama_model_loader: - kv  10:                     qwen3vl.context_length u32              = 262144
llama_model_loader: - kv  11:                   qwen3vl.embedding_length u32              = 4096
llama_model_loader: - kv  12:                qwen3vl.feed_forward_length u32              = 12288
llama_model_loader: - kv  13:                     qwen3vl.rope.freq_base f32              = 5000000.000000
llama_model_loader: - kv  14:        qwen3vl.vision.attention.head_count u32              = 16
llama_model_loader: - kv  15: qwen3vl.vision.attention.layer_norm_epsilon f32              = 0.000001
llama_model_loader: - kv  16:                 qwen3vl.vision.block_count u32              = 27
llama_model_loader: - kv  17:    qwen3vl.vision.deepstack_visual_indexes arr[i32,3]       = [8, 16, 24]
llama_model_loader: - kv  18:            qwen3vl.vision.embedding_length u32              = 1152
llama_model_loader: - kv  19:                  qwen3vl.vision.image_mean arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  20:                   qwen3vl.vision.image_std arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  21:                qwen3vl.vision.longest_edge u32              = 16777216
llama_model_loader: - kv  22:                qwen3vl.vision.num_channels u32              = 3
llama_model_loader: - kv  23:                  qwen3vl.vision.patch_size u32              = 16
llama_model_loader: - kv  24:              qwen3vl.vision.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  25:               qwen3vl.vision.shortest_edge u32              = 65536
llama_model_loader: - kv  26:          qwen3vl.vision.spatial_merge_size u32              = 2
llama_model_loader: - kv  27:         qwen3vl.vision.temporal_patch_size u32              = 2
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  31:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  32:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [151645, 151643]
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  34:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  35:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  36:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  37:                      tokenizer.ggml.scores arr[f32,151936]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  38:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  39:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  432 tensors
llama_model_loader: - type  f16:  145 tensors
llama_model_loader: - type q5_0:   12 tensors
llama_model_loader: - type q8_0:   15 tensors
llama_model_loader: - type q4_K:  219 tensors
llama_model_loader: - type q6_K:   35 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 5.71 GiB (5.60 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
llama_model_load_from_file_impl: failed to load model
time=2025-10-29T16:31:59.979-03:00 level=INFO source=sched.go:431 msg="NewLlamaServer failed" model=/Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 error="unable to load model: /Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55"
[GIN] 2025/10/29 - 16:31:59 | 500 |  205.907125ms |       127.0.0.1 | POST     "/api/generate"
```
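
The failing blob can also be inspected directly to confirm the declared architecture without starting the server. A sketch using the `gguf` Python package's dump tool (assuming `pip install gguf` provides the `gguf-dump` script, as in llama.cpp's gguf-py):

```shell
pip install gguf
# dump only the header metadata and look for the architecture key
gguf-dump --no-tensors /Users/edy/.ollama/models/blobs/sha256-ed12a4674d727a74ac4816c906094ea9d3119fbea46ca93288c3ce4ffbe38c55 | grep general.architecture
```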

@rick-github commented on GitHub (Oct 29, 2025):

When did you last pull the repo?


@geeksilva97 commented on GitHub (Oct 29, 2025):

> When did you last pull the repo?

Oct 20. I can pull it and try again to confirm.


@rick-github commented on GitHub (Oct 29, 2025):

qwen3-vl support was added yesterday. Re-pull the repo and re-build.
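
A minimal re-pull and rebuild sequence for a source checkout (a sketch; assumes a Go toolchain and that your original build worked with plain `go build` — see the repo's development docs for GPU-specific steps):

```shell
git pull origin main
go build .
./ollama serve
```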


@geeksilva97 commented on GitHub (Oct 29, 2025):

> qwen3-vl support was added yesterday. Re-pull the repo and re-build.

Dang. Sorry for the noise. It works. Thank you.


@heiketu commented on GitHub (Oct 30, 2025):

Updated to 0.12.7 now, but the bug still exists: qwen3vl is unsupported.

```
time=2025-10-30T09:06:33.631+08:00 level=INFO source=server.go:385 msg="starting runner" cmd="C:\Users\heiketu.WIN-UNKTITUA7B4\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 52548"
time=2025-10-30T09:06:33.990+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=2
time=2025-10-30T09:06:33.990+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=76 efficiency=0 threads=152
time=2025-10-30T09:06:33.990+08:00 level=INFO source=cpu_windows.go:186 msg="" package=1 cores=76 efficiency=0 threads=152
llama_model_loader: loaded meta data with 30 key-value pairs and 707 tensors from C:\Users\heiketu.WIN-UNKTITUA7B4\.ollama\models\blobs\sha256-3de4596420dd6b0cb714f3a53ea93b96bebf4bda027111c5fd7a5e00cfab231f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3vl
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Huihui Qwen3 VL 32B Thinking Abliterated
llama_model_loader: - kv 3: general.finetune str = Thinking-abliterated
llama_model_loader: - kv 4: general.basename str = Huihui-Qwen3-VL
llama_model_loader: - kv 5: general.size_label str = 32B
llama_model_loader: - kv 6: qwen3vl.block_count u32 = 64
llama_model_loader: - kv 7: qwen3vl.context_length u32 = 262144
llama_model_loader: - kv 8: qwen3vl.embedding_length u32 = 5120
llama_model_loader: - kv 9: qwen3vl.feed_forward_length u32 = 25600
llama_model_loader: - kv 10: qwen3vl.attention.head_count u32 = 64
llama_model_loader: - kv 11: qwen3vl.attention.head_count_kv u32 = 8
llama_model_loader: - kv 12: qwen3vl.rope.freq_base f32 = 5000000.000000
llama_model_loader: - kv 13: qwen3vl.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 14: qwen3vl.attention.key_length u32 = 128
llama_model_loader: - kv 15: qwen3vl.attention.value_length u32 = 128
llama_model_loader: - kv 16: qwen3vl.rope.dimension_sections arr[i32,4] = [24, 20, 20, 0]
llama_model_loader: - kv 17: qwen3vl.n_deepstack_layers u32 = 3
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 23: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 27: tokenizer.chat_template str = {%- set image_count = namespace(value...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - kv 29: general.file_type u32 = 15
llama_model_loader: - type f32: 257 tensors
llama_model_loader: - type q4_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 18.40 GiB (4.82 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
llama_model_load_from_file_impl: failed to load model
time=2025-10-30T09:06:34.279+08:00 level=INFO source=sched.go:418 msg="NewLlamaServer failed" model=C:\Users\heiketu.WIN-UNKTITUA7B4\.ollama\models\blobs\sha256-3de4596420dd6b0cb714f3a53ea93b96bebf4bda027111c5fd7a5e00cfab231f error="unable to load model: C:\Users\heiketu.WIN-UNKTITUA7B4\.ollama\models\blobs\sha256-3de4596420dd6b0cb714f3a53ea93b96bebf4bda027111c5fd7a5e00cfab231f"
[GIN] 2025/10/30 - 09:06:34 | 500 | 890.2158ms | 127.0.0.1 | POST "/api/generate"
```


@rick-github commented on GitHub (Oct 30, 2025):

```
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
```

What's the output of:

```
ollama -v
```
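
Worth noting: `ollama -v` reports the version of the running server and prints a separate client-version warning when the CLI binary and the server disagree, so it also catches the case where an old daemon is still answering after an upgrade (a sketch; the exact warning text may vary by version):

```shell
ollama -v
# "ollama version is 0.12.7"
# an extra "Warning: client version is ..." line means the server is a different build than the CLI
```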

@heiketu commented on GitHub (Oct 30, 2025):

> ```
> llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
> ```
>
> What's the output of:
>
> ```
> ollama -v
> ```

[Screenshot: https://github.com/user-attachments/assets/f7f2ab54-f82b-4523-be8e-dfdc5f95e003]

It's now 0.12.7. The internal llama framework should support qwen3vl in theory, but it doesn't.

@rick-github commented on GitHub (Oct 30, 2025):

ollama does support qwen3vl:

```console
$ ollama run qwen3-vl:32b hello
Thinking...
Okay, the user just said "hello". That's a simple greeting. I should respond in a friendly and welcoming way. Let me think about how to make it personal. Maybe ask how they're doing or if there's anything I can help with. Keep it open-ended so they can tell me what they 
need. Don't want to be too formal, but also not too casual. Let me check some examples. Oh, right, something like "Hello! How can I assist you today?" That's good. Or maybe add a smiley to keep it friendly. Yeah, "Hello! 😊 How can I help you today?" That sounds nice. Let me 
make sure there are no typos. "Hello" is correct, the exclamation mark, the smiley, then the question. Yeah, that's perfect. I'll go with that.
...done thinking.



Hello! 😊 How can I help you today?
```

The problem is that qwen3vl needs to run on the ollama engine, and the model you downloaded is a split vision model, which is not supported on the ollama engine. ollama falls back to the llama.cpp engine, which doesn't support qwen3vl yet.
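
One way to check for the split layout from the files already on disk: each manifest is plain JSON, and a split vision model ships its projector as an extra layer. A sketch assuming `jq` is installed and that ollama tags projector layers with an `image.projector` media type (adjust the manifest path to the failing tag):

```shell
# list the layer media types recorded in the model's manifest
jq -r '.layers[].mediaType' \
  ~/.ollama/models/manifests/registry.ollama.ai/library/qwen3-vl/latest
# a separate "application/vnd.ollama.image.projector" entry indicates a split vision model
```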


@heiketu commented on GitHub (Oct 30, 2025):

> ollama does support qwen3vl:
>
> ```console
> $ ollama run qwen3-vl:32b hello
> Thinking...
> Okay, the user just said "hello". That's a simple greeting. I should respond in a friendly and welcoming way. Let me think about how to make it personal. Maybe ask how they're doing or if there's anything I can help with. Keep it open-ended so they can tell me what they
> need. Don't want to be too formal, but also not too casual. Let me check some examples. Oh, right, something like "Hello! How can I assist you today?" That's good. Or maybe add a smiley to keep it friendly. Yeah, "Hello! 😊 How can I help you today?" That sounds nice. Let me
> make sure there are no typos. "Hello" is correct, the exclamation mark, the smiley, then the question. Yeah, that's perfect. I'll go with that.
> ...done thinking.
>
> Hello! 😊 How can I help you today?
> ```
>
> The problem is that qwen3vl needs to run on the ollama engine, and the model you downloaded is a split vision model, which is not supported on the ollama engine. ollama falls back to the llama.cpp engine, which doesn't support qwen3vl yet.

All right, I knew it; it's not Ollama's fault.
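
For anyone landing here with the same symptom: the official library build runs on the new engine, so until the bundled llama.cpp gains qwen3vl support, the practical workaround is the official tag from rick-github's example above:

```shell
ollama run qwen3-vl:32b
```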

Reference: github-starred/ollama#8505