[GH-ISSUE #14499] [Bug]: unknown model architecture: 'qwen35moe' when loading Qwen3.5 MoE model in v0.17.4 #55918

Closed
opened 2026-04-29 09:56:50 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @qwqk423 on GitHub (Feb 27, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14499

What is the issue?

What happened?

When attempting to generate text with a Qwen 3.5 MoE GGUF model (specifically Qwen3.5 35B A3B Heretic), Ollama fails to load the model and returns an HTTP 500 error. The backend engine reports that the qwen35moe architecture is unknown.

What did you expect to happen?

Ollama should recognize the qwen35moe architecture and successfully load the model for inference.

Environment Details

  • Ollama Version: 0.17.4
  • OS: Windows
  • GPU: NVIDIA GeForce RTX 5070 Ti (15.9 GiB VRAM)
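Since the loader rejects the file purely on the `general.architecture` metadata string, the declared architecture can be confirmed without Ollama at all. Below is a minimal sketch that reads that key straight from the GGUF header; it assumes the GGUF v3 little-endian layout and that `general.architecture` is the first metadata key-value pair, as it is in the log output here (`kv 0`).

```python
import struct

def read_gguf_architecture(path):
    """Return the general.architecture string from a GGUF file.

    Minimal sketch: assumes GGUF v3 little-endian layout and that
    general.architecture is the first metadata key (kv 0), as in the
    llama_model_loader dump in this report.
    """
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        version, = struct.unpack("<I", f.read(4))
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
        # First key-value pair: key is a length-prefixed UTF-8 string,
        # followed by a u32 value-type tag (8 = string).
        key_len, = struct.unpack("<Q", f.read(8))
        key = f.read(key_len).decode("utf-8")
        value_type, = struct.unpack("<I", f.read(4))
        if key != "general.architecture" or value_type != 8:
            raise ValueError(f"unexpected first metadata key: {key}")
        val_len, = struct.unpack("<Q", f.read(8))
        return f.read(val_len).decode("utf-8")
```

Running this against the blob in the log would print `qwen35moe`, confirming the failure comes from Ollama's bundled engine not knowing the architecture rather than from a corrupt download.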

Relevant log output

llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
...
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 19.71 GiB (4.88 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-27T20:51:14.940+08:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=H:\ollama_models\.ollama\models\blobs\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5 error="unable to load model: H:\\ollama_models\\.ollama\\models\\blobs\\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5"
[GIN] 2026/02/27 - 20:51:14 | 500 |    333.4775ms |       127.0.0.1 | POST     "/api/generate"
-----------------------------------------------
Full log:
time=2026-02-27T20:48:41.290+08:00 level=INFO source=routes.go:1663 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:8192 OLLAMA_DEBUG:INFO OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:H:\\ollama_models\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:true OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-02-27T20:48:41.294+08:00 level=INFO source=routes.go:1665 msg="Ollama cloud disabled: true"
time=2026-02-27T20:48:41.294+08:00 level=INFO source=images.go:473 msg="total blobs: 2"
time=2026-02-27T20:48:41.294+08:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-02-27T20:48:41.294+08:00 level=INFO source=routes.go:1718 msg="Listening on [::]:11434 (version 0.17.4)"
time=2026-02-27T20:48:41.295+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-02-27T20:48:41.305+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50982"
time=2026-02-27T20:48:41.442+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50987"
time=2026-02-27T20:48:41.567+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50991"
time=2026-02-27T20:48:41.816+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2026-02-27T20:48:41.817+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50996"
time=2026-02-27T20:48:41.817+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50997"
time=2026-02-27T20:48:41.817+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 50995"
time=2026-02-27T20:48:41.957+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-4cef3633-2ee0-08a8-852b-8df403ddb6d1 filter_id="" library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5070 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="15.9 GiB" available="13.5 GiB"
time=2026-02-27T20:48:41.957+08:00 level=INFO source=routes.go:1768 msg="vram-based default context" total_vram="15.9 GiB" default_num_ctx=4096
[GIN] 2026/02/27 - 20:48:41 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/02/27 - 20:48:41 | 200 |      1.2971ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/02/27 - 20:50:15 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/27 - 20:50:54 | 201 |   23.8785066s |       127.0.0.1 | POST     "/api/blobs/sha256:499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5"
[GIN] 2026/02/27 - 20:50:54 | 200 |    192.1207ms |       127.0.0.1 | POST     "/api/create"
[GIN] 2026/02/27 - 20:51:14 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/27 - 20:51:14 | 200 |     83.4809ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 20:51:14 | 200 |     77.4991ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-27T20:51:14.698+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 59523"
time=2026-02-27T20:51:14.809+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T20:51:14.809+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=6 efficiency=0 threads=12
llama_model_loader: loaded meta data with 50 key-value pairs and 733 tensors from H:\ollama_models\.ollama\models\blobs\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Qwen3.5 35B A3B Heretic
llama_model_loader: - kv   6:                           general.finetune str              = heretic
llama_model_loader: - kv   7:                           general.basename str              = Qwen3.5
llama_model_loader: - kv   8:                         general.size_label str              = 35B-A3B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  11:                               general.tags arr[str,5]       = ["heretic", "uncensored", "decensored...
llama_model_loader: - kv  12:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  13:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  14:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  15:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv  16:          qwen35moe.attention.head_count_kv u32              = 2
llama_model_loader: - kv  17:          qwen35moe.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
llama_model_loader: - kv  18:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  19: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  20:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  21:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  22:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv  23:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv  24:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  25: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  26:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  27:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  28:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  29:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  30:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  31:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  32:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  33:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  34:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  35:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  36:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  37:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  40:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  41:               general.quantization_version u32              = 2
llama_model_loader: - kv  42:                          general.file_type u32              = 15
llama_model_loader: - kv  43:                                general.url str              = https://huggingface.co/mradermacher/Q...
llama_model_loader: - kv  44:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  45:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  46:                  mradermacher.quantized_at str              = 2026-02-26T08:06:07+01:00
llama_model_loader: - kv  47:                  mradermacher.quantized_on str              = nico1
llama_model_loader: - kv  48:                         general.source.url str              = https://huggingface.co/brayniac/Qwen3...
llama_model_loader: - kv  49:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  301 tensors
llama_model_loader: - type q4_K:  355 tensors
llama_model_loader: - type q5_K:   30 tensors
llama_model_loader: - type q6_K:   47 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 19.71 GiB (4.88 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-27T20:51:14.940+08:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=H:\ollama_models\.ollama\models\blobs\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5 error="unable to load model: H:\\ollama_models\\.ollama\\models\\blobs\\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5"
[GIN] 2026/02/27 - 20:51:14 | 500 |    333.4775ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/02/27 - 20:52:32 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2026/02/27 - 20:52:32 | 200 |     80.8288ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/02/27 - 20:52:32 | 200 |     77.4876ms |       127.0.0.1 | POST     "/api/show"
time=2026-02-27T20:52:32.378+08:00 level=INFO source=server.go:431 msg="starting runner" cmd="C:\\Users\\33108\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 61736"
time=2026-02-27T20:52:32.486+08:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-02-27T20:52:32.486+08:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=6 efficiency=0 threads=12
llama_model_loader: loaded meta data with 50 key-value pairs and 733 tensors from H:\ollama_models\.ollama\models\blobs\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                     general.sampling.top_k i32              = 20
llama_model_loader: - kv   3:                     general.sampling.top_p f32              = 0.950000
llama_model_loader: - kv   4:                      general.sampling.temp f32              = 1.000000
llama_model_loader: - kv   5:                               general.name str              = Qwen3.5 35B A3B Heretic
llama_model_loader: - kv   6:                           general.finetune str              = heretic
llama_model_loader: - kv   7:                           general.basename str              = Qwen3.5
llama_model_loader: - kv   8:                         general.size_label str              = 35B-A3B
llama_model_loader: - kv   9:                            general.license str              = apache-2.0
llama_model_loader: - kv  10:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3.5-3...
llama_model_loader: - kv  11:                               general.tags arr[str,5]       = ["heretic", "uncensored", "decensored...
llama_model_loader: - kv  12:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  13:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  14:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  15:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv  16:          qwen35moe.attention.head_count_kv u32              = 2
llama_model_loader: - kv  17:          qwen35moe.rope.dimension_sections arr[i32,4]       = [11, 11, 10, 0]
llama_model_loader: - kv  18:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  19: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  20:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  21:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  22:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv  23:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv  24:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  25: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  26:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  27:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  28:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  29:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  30:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  31:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  32:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  33:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  34:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  35:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  36:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  37:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  38:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  39:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  40:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  41:               general.quantization_version u32              = 2
llama_model_loader: - kv  42:                          general.file_type u32              = 15
llama_model_loader: - kv  43:                                general.url str              = https://huggingface.co/mradermacher/Q...
llama_model_loader: - kv  44:              mradermacher.quantize_version str              = 2
llama_model_loader: - kv  45:                  mradermacher.quantized_by str              = mradermacher
llama_model_loader: - kv  46:                  mradermacher.quantized_at str              = 2026-02-26T08:06:07+01:00
llama_model_loader: - kv  47:                  mradermacher.quantized_on str              = nico1
llama_model_loader: - kv  48:                         general.source.url str              = https://huggingface.co/brayniac/Qwen3...
llama_model_loader: - kv  49:                  mradermacher.convert_type str              = hf
llama_model_loader: - type  f32:  301 tensors
llama_model_loader: - type q4_K:  355 tensors
llama_model_loader: - type q5_K:   30 tensors
llama_model_loader: - type q6_K:   47 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 19.71 GiB (4.88 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-27T20:52:32.618+08:00 level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=H:\ollama_models\.ollama\models\blobs\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5 error="unable to load model: H:\\ollama_models\\.ollama\\models\\blobs\\sha256-499c2c0a0394da8c0c7c22a93850d75e743579f1320d4d056f1db28c2045aba5"
[GIN] 2026/02/27 - 20:52:32 | 500 |     331.233ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2026/02/27 - 20:57:35 | 200 |            0s |       127.0.0.1 | GET      "/api/version"

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.17.4

GiteaMirror added the bug label 2026-04-29 09:56:50 -05:00

@rick-github commented on GitHub (Feb 27, 2026):

qwen3.5 models from HF are not currently supported in ollama, needs either a vendor sync (#14134) or tweaking of the go runner.
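For anyone who wants to confirm which architecture string a downloaded GGUF file actually declares before filing a report, the metadata header can be read directly. The sketch below is illustrative, based on the public GGUF spec (v2/v3 layout with 64-bit counts); `gguf_architecture` is a hypothetical helper, not part of ollama or llama.cpp:

```python
# Minimal GGUF metadata peek: read just enough of a .gguf file to report
# the `general.architecture` key that the loader will see.
import struct

GGUF_MAGIC = b"GGUF"

# GGUF metadata value type codes, per the GGUF spec.
UINT8, INT8, UINT16, INT16, UINT32, INT32, FLOAT32, BOOL, STRING, ARRAY, UINT64, INT64, FLOAT64 = range(13)

_SCALAR = {
    UINT8: ("<B", 1), INT8: ("<b", 1), UINT16: ("<H", 2), INT16: ("<h", 2),
    UINT32: ("<I", 4), INT32: ("<i", 4), FLOAT32: ("<f", 4), BOOL: ("<B", 1),
    UINT64: ("<Q", 8), INT64: ("<q", 8), FLOAT64: ("<d", 8),
}

def _read(f, fmt, size):
    return struct.unpack(fmt, f.read(size))[0]

def _read_string(f):
    # GGUF strings: uint64 length followed by UTF-8 bytes.
    n = _read(f, "<Q", 8)
    return f.read(n).decode("utf-8")

def _read_value(f, vtype):
    if vtype == STRING:
        return _read_string(f)
    if vtype == ARRAY:
        # Arrays: uint32 element type, uint64 count, then the elements.
        elem_type = _read(f, "<I", 4)
        count = _read(f, "<Q", 8)
        return [_read_value(f, elem_type) for _ in range(count)]
    fmt, size = _SCALAR[vtype]
    return _read(f, fmt, size)

def gguf_architecture(fileobj):
    """Return the general.architecture string from a GGUF file, or None."""
    if fileobj.read(4) != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version = _read(fileobj, "<I", 4)
    if version < 2:
        raise ValueError("GGUF v1 uses 32-bit counts; not handled here")
    fileobj.read(8)                      # tensor_count (not needed)
    n_kv = _read(fileobj, "<Q", 8)
    for _ in range(n_kv):
        key = _read_string(fileobj)
        vtype = _read(fileobj, "<I", 4)
        value = _read_value(fileobj, vtype)
        if key == "general.architecture":
            return value
    return None
```

With a real file: `with open(model_path, "rb") as f: print(gguf_architecture(f))`. For the blob in this report it would print `qwen35moe`, the string the bundled llama.cpp rejects.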

<!-- gh-comment-id:3972880207 -->

@qwqk423 commented on GitHub (Feb 27, 2026):

Okay, got it, thank you

> qwen3.5 models from Hugging Face are not currently supported in Ollama; this needs either a vendor sync (#14134) or tweaks to the Go runner.

<!-- gh-comment-id:3972938674 -->
Reference: github-starred/ollama#55918