[GH-ISSUE #14512] unknown model architecture: 'qwen35moe' #55929

Closed
opened 2026-04-29 09:57:52 -05:00 by GiteaMirror · 2 comments

Originally created by @chigkim on GitHub (Feb 28, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14512

What is the issue?

I tried to import a quant from [bartowski/Qwen_Qwen3.5-35B-A3B-GGUF](https://huggingface.co/bartowski/Qwen_Qwen3.5-35B-A3B-GGUF).
However, I get this: `llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'`
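
A minimal reproduction sketch, assuming the quant is pulled straight from Hugging Face (the exact quant tag below is illustrative) and that the `gguf-dump` script from llama.cpp's gguf-py package is installed to confirm the architecture string stored in the downloaded blob:

```shell
# Pull the GGUF from Hugging Face and try to load it; this fails with
# "unknown model architecture: 'qwen35moe'"
ollama run hf.co/bartowski/Qwen_Qwen3.5-35B-A3B-GGUF:Q6_K

# Confirm the general.architecture key in the blob Ollama downloaded
# (blob hash taken from the log below)
gguf-dump ~/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a \
  | grep -m1 general.architecture
```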

Relevant log output

[GIN] 2026/02/28 - 10:21:32 | 200 |      51.375µs |       127.0.0.1 | HEAD     "/"
time=2026-02-28T10:21:32.624-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.625-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/02/28 - 10:21:32 | 200 |  148.247375ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-28T10:21:32.731-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.732-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/02/28 - 10:21:32 | 200 |  101.791625ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-28T10:21:32.857-05:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=458ns
time=2026-02-28T10:21:32.869-05:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-02-28T10:21:32.869-05:00 level=DEBUG source=sched.go:258 msg="loading first model" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a
time=2026-02-28T10:21:32.901-05:00 level=DEBUG source=server.go:156 msg="model not yet supported by Ollama engine, switching to compatibility mode" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a error="split vision models aren't supported"
llama_model_load_from_file_impl: using device Metal (Apple M3 Max) (unknown id) - 59390 MiB free
llama_model_loader: loaded meta data with 47 key-value pairs and 733 tensors from /Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Qwen3.5 35B A3B
llama_model_loader: - kv   6:                           general.basename str              = Qwen3.5
llama_model_loader: - kv   7:                         general.size_label str              = 35B-A3B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  10:                               general.tags arr[str,1]       = ["image-text-to-text"]
llama_model_loader: - kv  11:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  12:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  13:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  14:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv  15:          qwen35moe.attention.head_count_kv u32              = 2
llama_model_loader: - kv  16:          qwen35moe.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
llama_model_loader: - kv  17:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  18: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  19:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  20:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  21:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv  22:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv  23:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  24: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  25:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  26:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  27:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  28:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  29:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  30:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  31:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  32:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  33:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  34:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  35:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  36:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  37:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  38:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  39:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  40:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  41:               general.quantization_version u32              = 2
llama_model_loader: - kv  42:                          general.file_type u32              = 18
llama_model_loader: - kv  43:                      quantize.imatrix.file str              = /models_out/Qwen3.5-35B-A3B-GGUF/Qwen...
llama_model_loader: - kv  44:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav5.txt
llama_model_loader: - kv  45:             quantize.imatrix.entries_count u32              = 510
llama_model_loader: - kv  46:              quantize.imatrix.chunks_count u32              = 802
llama_model_loader: - type  f32:  301 tensors
llama_model_loader: - type q8_0:  162 tensors
llama_model_loader: - type q6_K:  270 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q6_K
print_info: file size   = 26.92 GiB (6.67 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-28T10:21:32.986-05:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a error="unable to load model: /Users/cgk/.ollama/models/blobs/sha256-89f77c86163ac4a95d72e00ce6a18f3ed2d280952542838a5b2cdc43bc2c3b3a"
[GIN] 2026/02/28 - 10:21:32 | 500 |  249.903625ms |       127.0.0.1 | POST     "/api/generate"

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

v0.17.4

GiteaMirror added the bug label 2026-04-29 09:57:52 -05:00

@l2dy commented on GitHub (Feb 28, 2026):

Duplicate of #14503?


@alttagil commented on GitHub (Mar 2, 2026):

❯ OLLAMA_DEBUG=1 ollama serve
time=2026-03-02T15:51:54.023+04:00 level=INFO source=routes.go:1665 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:DEBUG OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/alt/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2026-03-02T15:51:54.023+04:00 level=INFO source=routes.go:1667 msg="Ollama cloud disabled: false"
time=2026-03-02T15:51:54.027+04:00 level=INFO source=images.go:477 msg="total blobs: 82"
time=2026-03-02T15:51:54.027+04:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-03-02T15:51:54.028+04:00 level=INFO source=routes.go:1720 msg="Listening on 127.0.0.1:11434 (version HEAD-86513cb)"
time=2026-03-02T15:51:54.028+04:00 level=DEBUG source=sched.go:147 msg="starting llm scheduler"
time=2026-03-02T15:51:54.028+04:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-03-02T15:51:54.030+04:00 level=INFO source=server.go:430 msg="starting runner" cmd="/opt/homebrew/Cellar/ollama/HEAD-86513cb/bin/ollama runner --ollama-engine --port 53623"
time=2026-03-02T15:51:54.030+04:00 level=DEBUG source=server.go:431 msg=subprocess PATH=/opt/homebrew/opt/erlang@27/bin:/Users/alt/go/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin:/Applications/Wireshark.app/Contents/MacOS:/Users/alt/.cargo/bin:/Applications/iTerm.app/Contents/Resources/utilities OLLAMA_DEBUG=1 DYLD_LIBRARY_PATH=/opt/homebrew/Cellar/ollama/HEAD-86513cb/bin OLLAMA_LIBRARY_PATH=/opt/homebrew/Cellar/ollama/HEAD-86513cb/bin
time=2026-03-02T15:51:54.120+04:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=91.387292ms OLLAMA_LIBRARY_PATH=[/opt/homebrew/Cellar/ollama/HEAD-86513cb/bin] extra_envs=map[]
time=2026-03-02T15:51:54.120+04:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-03-02T15:51:54.120+04:00 level=DEBUG source=runner.go:193 msg="adjusting filtering IDs" FilterID=0 new_ID=0
time=2026-03-02T15:51:54.120+04:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=91.539917ms
time=2026-03-02T15:51:54.120+04:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=Metal compute=0.0 name=Metal description="Apple M3 Max" libdirs="" driver=0.0 pci_id="" type=discrete total="48.0 GiB" available="48.0 GiB"
time=2026-03-02T15:51:54.120+04:00 level=INFO source=routes.go:1770 msg="vram-based default context" total_vram="48.0 GiB" default_num_ctx=262144
[GIN] 2026/03/02 - 15:52:22 | 200 |      45.458µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/03/02 - 15:52:28 | 200 |        17.5µs |       127.0.0.1 | HEAD     "/"
time=2026-03-02T15:52:28.137+04:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-02T15:52:28.137+04:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/03/02 - 15:52:28 | 200 |   93.089958ms |       127.0.0.1 | POST     "/api/show"
time=2026-03-02T15:52:28.228+04:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-02T15:52:28.229+04:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
[GIN] 2026/03/02 - 15:52:28 | 200 |   90.453208ms |       127.0.0.1 | POST     "/api/show"
time=2026-03-02T15:52:28.332+04:00 level=DEBUG source=runner.go:40 msg="overall device VRAM discovery took" duration=958ns
time=2026-03-02T15:52:28.332+04:00 level=DEBUG source=sched.go:222 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2026-03-02T15:52:28.344+04:00 level=DEBUG source=ggml.go:324 msg="key with type not found" key=general.alignment default=32
time=2026-03-02T15:52:28.345+04:00 level=DEBUG source=sched.go:258 msg="loading first model" model=/Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e
time=2026-03-02T15:52:28.370+04:00 level=DEBUG source=server.go:155 msg="model not yet supported by Ollama engine, switching to compatibility mode" model=/Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e error="split vision models aren't supported"
ggml_metal_device_init: tensor API disabled for pre-M5 and pre-A19 devices
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.007 sec
ggml_metal_rsets_init: creating a residency set collection (keep_alive = 180 s)
ggml_metal_device_init: GPU name:   Apple M3 Max
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: has tensor            = false
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 51539.61 MB
llama_model_load_from_file_impl: using device Metal (Apple M3 Max) (unknown id) - 49150 MiB free
llama_model_loader: loaded meta data with 52 key-value pairs and 733 tensors from /Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Qwen3.5-35B-A3B
llama_model_loader: - kv   6:                           general.basename str              = Qwen3.5-35B-A3B
llama_model_loader: - kv   7:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   8:                         general.size_label str              = 35B-A3B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  11:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  12:                   general.base_model.count u32              = 1
llama_model_loader: - kv  13:                  general.base_model.0.name str              = Qwen3.5 35B A3B
llama_model_loader: - kv  14:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  15:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  16:                               general.tags arr[str,2]       = ["unsloth", "image-text-to-text"]
llama_model_loader: - kv  17:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  18:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  19:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  20:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv  21:          qwen35moe.attention.head_count_kv u32              = 2
llama_model_loader: - kv  22:          qwen35moe.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
llama_model_loader: - kv  23:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  24: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  25:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  26:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  27:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv  28:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv  29:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  30: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  31:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  32:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  33:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  34:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  35:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  36:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  37:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  38:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  39:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  40:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  41:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  42:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  43:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  44:            tokenizer.ggml.padding_token_id u32              = 248055
llama_model_loader: - kv  45:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  46:               general.quantization_version u32              = 2
llama_model_loader: - kv  47:                          general.file_type u32              = 7
llama_model_loader: - kv  48:                      quantize.imatrix.file str              = Qwen3.5-35B-A3B-GGUF/Qwen_Qwen3.5-35B...
llama_model_loader: - kv  49:                   quantize.imatrix.dataset str              = /training_dir/calibration_datav5.txt
llama_model_loader: - kv  50:             quantize.imatrix.entries_count u32              = 510
llama_model_loader: - kv  51:              quantize.imatrix.chunks_count u32              = 802
llama_model_loader: - type  f32:  301 tensors
llama_model_loader: - type q8_0:   40 tensors
llama_model_loader: - type q4_K:  120 tensors
llama_model_loader: - type q5_K:    1 tensors
llama_model_loader: - type q6_K:   61 tensors
llama_model_loader: - type bf16:  210 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q8_0
print_info: file size   = 19.16 GiB (4.75 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-03-02T15:52:28.508+04:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e error="unable to load model: /Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e"
[GIN] 2026/03/02 - 15:52:28 | 500 |  277.457375ms |       127.0.0.1 | POST     "/api/generate"
❯ ollama run hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL
Error: 500 Internal Server Error: unable to load model: /Users/alt/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e
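
Both logs fail at the same point: llama_model_load rejects the `general.architecture` value `qwen35moe` before any tensors are read, which means the llama.cpp revision bundled with Ollama does not yet list that architecture. A quick way to check whether a given llama.cpp build recognizes it (a sketch, assuming a standalone `llama-cli` build on PATH; the blob path is the one from the log above):

```shell
# If this also reports "unknown model architecture: 'qwen35moe'", the llama.cpp
# build in use predates support for the architecture; if it loads, only the copy
# bundled with Ollama is out of date.
llama-cli -m ~/.ollama/models/blobs/sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e \
  -p "hi" -n 8
```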
Reference: github-starred/ollama#55929