[GH-ISSUE #14428] Error trying to run qwen3.5:35b on 0.17.0 #71427

Closed
opened 2026-05-05 01:38:04 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @heapsoftware on GitHub (Feb 25, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14428

What is the issue?

When trying to interact with the model from Open WebUI I get a 500 error. Looking at the logs shows this error:
error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'. Running with two 5090s on an AMD TR CPU.

Relevant log output

time=2026-02-25T23:15:21.214Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 35405"
time=2026-02-25T23:15:21.570Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-25T23:15:21.633Z level=WARN source=server.go:209 msg="flash attention enabled but not supported by model"
time=2026-02-25T23:15:21.633Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-0800cbac9c2064dde519420e75e512a83cb360de3ad5df176185dc69652fc515 --port 45963"
time=2026-02-25T23:15:21.633Z level=INFO source=sched.go:491 msg="system memory" total="125.3 GiB" free="101.3 GiB" free_swap="8.0 GiB"
time=2026-02-25T23:15:21.633Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA available="30.9 GiB" free="31.4 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-25T23:15:21.633Z level=INFO source=sched.go:498 msg="gpu memory" id=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA available="23.8 GiB" free="24.3 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-02-25T23:15:21.633Z level=INFO source=server.go:757 msg="loading model" "model layers"=25 requested=-1
time=2026-02-25T23:15:21.642Z level=INFO source=runner.go:1411 msg="starting ollama engine"
time=2026-02-25T23:15:21.642Z level=INFO source=runner.go:1446 msg="Server listening on 127.0.0.1:45963"
time=2026-02-25T23:15:21.645Z level=INFO source=runner.go:1284 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:Disabled KvSize:2048 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f Layers:25(0..24)] MultiUserCache:true ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-25T23:15:21.669Z level=INFO source=ggml.go:136 msg="" architecture=gemma3 file_type=BF16 name="Embeddinggemma 300M" description="" num_tensors=316 num_key_values=37
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-icelake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f
  Device 1: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967
load_backend: loaded CUDA backend from /usr/lib/ollama/cuda_v13/libggml-cuda.so
time=2026-02-25T23:15:21.825Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2026-02-25T23:15:22.087Z level=INFO source=runner.go:1284 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:Disabled KvSize:2048 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f Layers:25(0..24)] MultiUserCache:true ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-25T23:15:22.118Z level=INFO source=runner.go:1284 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:2048 FlashAttention:Disabled KvSize:2048 KvCacheType: NumThreads:24 GPULayers:25[ID:GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f Layers:25(0..24)] MultiUserCache:true ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-02-25T23:15:22.118Z level=INFO source=ggml.go:482 msg="offloading 24 repeating layers to GPU"
time=2026-02-25T23:15:22.118Z level=INFO source=ggml.go:489 msg="offloading output layer to GPU"
time=2026-02-25T23:15:22.118Z level=INFO source=ggml.go:494 msg="offloaded 25/25 layers to GPU"
time=2026-02-25T23:15:22.118Z level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="586.8 MiB"
time=2026-02-25T23:15:22.118Z level=INFO source=device.go:245 msg="model weights" device=CPU size="384.0 MiB"
time=2026-02-25T23:15:22.118Z level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="80.0 MiB"
time=2026-02-25T23:15:22.118Z level=INFO source=device.go:267 msg="compute graph" device=CPU size="6.0 MiB"
time=2026-02-25T23:15:22.118Z level=INFO source=device.go:272 msg="total memory" size="1.0 GiB"
time=2026-02-25T23:15:22.118Z level=INFO source=sched.go:566 msg="loaded runners" count=1
time=2026-02-25T23:15:22.118Z level=INFO source=server.go:1350 msg="waiting for llama runner to start responding"
time=2026-02-25T23:15:22.118Z level=INFO source=server.go:1384 msg="waiting for server to become available" status="llm server loading model"
time=2026-02-25T23:15:22.369Z level=INFO source=server.go:1388 msg="llama runner started in 0.74 seconds"
[GIN] 2026/02/25 - 23:15:22 | 200 |  1.508444579s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:23 | 200 |  2.658490743s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:23 | 200 |  2.675128668s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:23 | 200 |  2.692409407s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:23 | 200 |  2.706744646s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  2.871107403s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  2.922953868s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  3.156627825s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  3.297450819s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  3.364022115s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  3.372008303s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:24 | 200 |  3.449233632s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:25 | 200 |  4.318142985s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:25 | 200 |  4.327186459s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  4.927479913s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.034411824s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.043784693s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.054169841s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.106684199s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.116968636s |    192.168.4.37 | POST     "/api/embed"
[GIN] 2026/02/25 - 23:15:26 | 200 |  5.163582957s |    192.168.4.37 | POST     "/api/embed"
ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632
time=2026-02-25T23:15:26.626Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39161"
time=2026-02-25T23:15:26.816Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-25T23:15:26.838Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB"
time=2026-02-25T23:15:26.838Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB"
llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                          general.file_type u32              = 15
llama_model_loader: - kv   2:                    general.parameter_count u64              = 35951822704
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv   5:          qwen35moe.attention.head_count_kv arr[u32,40]      = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ...
llama_model_loader: - kv   6:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv   7: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   8:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv   9:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  10:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  11:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  12:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  13:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  14: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  15:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  16:              qwen35moe.feed_forward_length u32              = 0
llama_model_loader: - kv  17:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  18:                   qwen35moe.image_token_id u32              = 248056
llama_model_loader: - kv  19:                   qwen35moe.mrope_sections arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  20:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  21:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  22:           qwen35moe.rope.mrope_interleaved bool             = true
llama_model_loader: - kv  23:               qwen35moe.rope.mrope_section arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  24:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  25:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  26:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  27:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  28:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  29:             qwen35moe.ssm.v_head_reordered bool             = true
llama_model_loader: - kv  30:      qwen35moe.vision.attention.head_count u32              = 16
llama_model_loader: - kv  31:               qwen35moe.vision.block_count u32              = 27
llama_model_loader: - kv  32:  qwen35moe.vision.deepstack_visual_indexes arr[i32,0]       = []
llama_model_loader: - kv  33:          qwen35moe.vision.embedding_length u32              = 1152
llama_model_loader: - kv  34:                qwen35moe.vision.image_mean arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  35:                 qwen35moe.vision.image_std arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  36:              qwen35moe.vision.longest_edge u32              = 16777216
llama_model_loader: - kv  37:              qwen35moe.vision.num_channels u32              = 3
llama_model_loader: - kv  38:                qwen35moe.vision.patch_size u32              = 16
llama_model_loader: - kv  39:             qwen35moe.vision.shortest_edge u32              = 65536
llama_model_loader: - kv  40:        qwen35moe.vision.spatial_merge_size u32              = 2
llama_model_loader: - kv  41:       qwen35moe.vision.temporal_patch_size u32              = 2
llama_model_loader: - kv  42:              qwen35moe.vision_end_token_id u32              = 248054
llama_model_loader: - kv  43:            qwen35moe.vision_start_token_id u32              = 248053
llama_model_loader: - kv  44:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  45:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  46:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  47:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  48:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [248046, 248044]
llama_model_loader: - kv  49:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  50:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  51:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  52:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  53:                      tokenizer.ggml.scores arr[f32,248320]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  54:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  55:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  544 tensors
llama_model_loader: - type  f16:  207 tensors
llama_model_loader: - type q4_K: 1115 tensors
llama_model_loader: - type q6_K:   93 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 22.22 GiB (5.31 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-25T23:15:27.002Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a"
[GIN] 2026/02/25 - 23:15:27 | 500 |   541.52551ms |    192.168.4.37 | POST     "/api/chat"
ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632
time=2026-02-25T23:15:27.135Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34607"
time=2026-02-25T23:15:27.326Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-25T23:15:27.347Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB"
time=2026-02-25T23:15:27.347Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB"
llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                          general.file_type u32              = 15
llama_model_loader: - kv   2:                    general.parameter_count u64              = 35951822704
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv   5:          qwen35moe.attention.head_count_kv arr[u32,40]      = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ...
llama_model_loader: - kv   6:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv   7: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   8:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv   9:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  10:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  11:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  12:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  13:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  14: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  15:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  16:              qwen35moe.feed_forward_length u32              = 0
llama_model_loader: - kv  17:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  18:                   qwen35moe.image_token_id u32              = 248056
llama_model_loader: - kv  19:                   qwen35moe.mrope_sections arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  20:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  21:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  22:           qwen35moe.rope.mrope_interleaved bool             = true
llama_model_loader: - kv  23:               qwen35moe.rope.mrope_section arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  24:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  25:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  26:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  27:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  28:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  29:             qwen35moe.ssm.v_head_reordered bool             = true
llama_model_loader: - kv  30:      qwen35moe.vision.attention.head_count u32              = 16
llama_model_loader: - kv  31:               qwen35moe.vision.block_count u32              = 27
llama_model_loader: - kv  32:  qwen35moe.vision.deepstack_visual_indexes arr[i32,0]       = []
llama_model_loader: - kv  33:          qwen35moe.vision.embedding_length u32              = 1152
llama_model_loader: - kv  34:                qwen35moe.vision.image_mean arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  35:                 qwen35moe.vision.image_std arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  36:              qwen35moe.vision.longest_edge u32              = 16777216
llama_model_loader: - kv  37:              qwen35moe.vision.num_channels u32              = 3
llama_model_loader: - kv  38:                qwen35moe.vision.patch_size u32              = 16
llama_model_loader: - kv  39:             qwen35moe.vision.shortest_edge u32              = 65536
llama_model_loader: - kv  40:        qwen35moe.vision.spatial_merge_size u32              = 2
llama_model_loader: - kv  41:       qwen35moe.vision.temporal_patch_size u32              = 2
llama_model_loader: - kv  42:              qwen35moe.vision_end_token_id u32              = 248054
llama_model_loader: - kv  43:            qwen35moe.vision_start_token_id u32              = 248053
llama_model_loader: - kv  44:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  45:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  46:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  47:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  48:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [248046, 248044]
llama_model_loader: - kv  49:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  50:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  51:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  52:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  53:                      tokenizer.ggml.scores arr[f32,248320]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  54:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  55:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  544 tensors
llama_model_loader: - type  f16:  207 tensors
llama_model_loader: - type q4_K: 1115 tensors
llama_model_loader: - type q6_K:   93 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 22.22 GiB (5.31 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-25T23:15:27.506Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a"
[GIN] 2026/02/25 - 23:15:27 | 500 |   501.49994ms |    192.168.4.37 | POST     "/api/chat"
[GIN] 2026/02/25 - 23:15:27 | 200 |    70.47038ms |    192.168.4.37 | POST     "/api/embed"
ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632
time=2026-02-25T23:15:27.734Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37445"
time=2026-02-25T23:15:27.925Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax"
time=2026-02-25T23:15:27.946Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB"
time=2026-02-25T23:15:27.946Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB"
llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen35moe
llama_model_loader: - kv   1:                          general.file_type u32              = 15
llama_model_loader: - kv   2:                    general.parameter_count u64              = 35951822704
llama_model_loader: - kv   3:               general.quantization_version u32              = 2
llama_model_loader: - kv   4:             qwen35moe.attention.head_count u32              = 16
llama_model_loader: - kv   5:          qwen35moe.attention.head_count_kv arr[u32,40]      = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ...
llama_model_loader: - kv   6:             qwen35moe.attention.key_length u32              = 256
llama_model_loader: - kv   7: qwen35moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   8:           qwen35moe.attention.value_length u32              = 256
llama_model_loader: - kv   9:                      qwen35moe.block_count u32              = 40
llama_model_loader: - kv  10:                   qwen35moe.context_length u32              = 262144
llama_model_loader: - kv  11:                 qwen35moe.embedding_length u32              = 2048
llama_model_loader: - kv  12:                     qwen35moe.expert_count u32              = 256
llama_model_loader: - kv  13:       qwen35moe.expert_feed_forward_length u32              = 512
llama_model_loader: - kv  14: qwen35moe.expert_shared_feed_forward_length u32              = 512
llama_model_loader: - kv  15:                qwen35moe.expert_used_count u32              = 8
llama_model_loader: - kv  16:              qwen35moe.feed_forward_length u32              = 0
llama_model_loader: - kv  17:          qwen35moe.full_attention_interval u32              = 4
llama_model_loader: - kv  18:                   qwen35moe.image_token_id u32              = 248056
llama_model_loader: - kv  19:                   qwen35moe.mrope_sections arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  20:             qwen35moe.rope.dimension_count u32              = 64
llama_model_loader: - kv  21:                   qwen35moe.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  22:           qwen35moe.rope.mrope_interleaved bool             = true
llama_model_loader: - kv  23:               qwen35moe.rope.mrope_section arr[i32,3]       = [11, 11, 10]
llama_model_loader: - kv  24:                  qwen35moe.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  25:                  qwen35moe.ssm.group_count u32              = 16
llama_model_loader: - kv  26:                   qwen35moe.ssm.inner_size u32              = 4096
llama_model_loader: - kv  27:                   qwen35moe.ssm.state_size u32              = 128
llama_model_loader: - kv  28:               qwen35moe.ssm.time_step_rank u32              = 32
llama_model_loader: - kv  29:             qwen35moe.ssm.v_head_reordered bool             = true
llama_model_loader: - kv  30:      qwen35moe.vision.attention.head_count u32              = 16
llama_model_loader: - kv  31:               qwen35moe.vision.block_count u32              = 27
llama_model_loader: - kv  32:  qwen35moe.vision.deepstack_visual_indexes arr[i32,0]       = []
llama_model_loader: - kv  33:          qwen35moe.vision.embedding_length u32              = 1152
llama_model_loader: - kv  34:                qwen35moe.vision.image_mean arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  35:                 qwen35moe.vision.image_std arr[f32,3]       = [0.500000, 0.500000, 0.500000]
llama_model_loader: - kv  36:              qwen35moe.vision.longest_edge u32              = 16777216
llama_model_loader: - kv  37:              qwen35moe.vision.num_channels u32              = 3
llama_model_loader: - kv  38:                qwen35moe.vision.patch_size u32              = 16
llama_model_loader: - kv  39:             qwen35moe.vision.shortest_edge u32              = 65536
llama_model_loader: - kv  40:        qwen35moe.vision.spatial_merge_size u32              = 2
llama_model_loader: - kv  41:       qwen35moe.vision.temporal_patch_size u32              = 2
llama_model_loader: - kv  42:              qwen35moe.vision_end_token_id u32              = 248054
llama_model_loader: - kv  43:            qwen35moe.vision_start_token_id u32              = 248053
llama_model_loader: - kv  44:                    tokenizer.chat_template str              = {%- set image_count = namespace(value...
llama_model_loader: - kv  45:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  46:           tokenizer.ggml.add_padding_token bool             = false
llama_model_loader: - kv  47:                tokenizer.ggml.eos_token_id u32              = 248046
llama_model_loader: - kv  48:               tokenizer.ggml.eos_token_ids arr[i32,2]       = [248046, 248044]
llama_model_loader: - kv  49:                      tokenizer.ggml.merges arr[str,247587]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  50:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  51:            tokenizer.ggml.padding_token_id u32              = 248044
llama_model_loader: - kv  52:                         tokenizer.ggml.pre str              = qwen35
llama_model_loader: - kv  53:                      tokenizer.ggml.scores arr[f32,248320]  = [0.000000, 1.000000, 2.000000, 3.0000...
llama_model_loader: - kv  54:                  tokenizer.ggml.token_type arr[i32,248320]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  55:                      tokenizer.ggml.tokens arr[str,248320]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - type  f32:  544 tensors
llama_model_loader: - type  f16:  207 tensors
llama_model_loader: - type q4_K: 1115 tensors
llama_model_loader: - type q6_K:   93 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 22.22 GiB (5.31 BPW)
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe'
llama_model_load_from_file_impl: failed to load model
time=2026-02-25T23:15:28.113Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a"
[GIN] 2026/02/25 - 23:15:28 | 500 |  516.673886ms |    192.168.4.37 | POST     "/api/chat"
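For context on the failing check: the string the loader rejects is stored in the GGUF header as the `general.architecture` key (visible as kv 0 in the metadata dump above), and the load fails when that string is not in the engine's architecture table. Below is a minimal, self-contained sketch of that lookup; the `KNOWN` set is an illustrative stand-in for llama.cpp's real architecture registry (not the actual list), and `make_stub` only emulates enough of the GGUF v3 header layout to demonstrate the parse.

```python
import io
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # GGUF metadata value type for strings

def read_string(f):
    # GGUF strings are a little-endian u64 length followed by UTF-8 bytes
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def read_architecture(f):
    # Header: magic, u32 version, u64 tensor count, u64 KV count
    assert f.read(4) == GGUF_MAGIC, "not a GGUF file"
    (version,) = struct.unpack("<I", f.read(4))
    tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    # In this stub the first (and only) KV pair is general.architecture
    key = read_string(f)
    (vtype,) = struct.unpack("<I", f.read(4))
    assert key == "general.architecture" and vtype == GGUF_TYPE_STRING
    return read_string(f)

def make_stub(arch):
    # Build an in-memory GGUF v3 header carrying only general.architecture
    buf = io.BytesIO()
    buf.write(GGUF_MAGIC)
    buf.write(struct.pack("<I", 3))      # GGUF version 3
    buf.write(struct.pack("<QQ", 0, 1))  # 0 tensors, 1 KV pair
    key = b"general.architecture"
    buf.write(struct.pack("<Q", len(key)) + key)
    buf.write(struct.pack("<I", GGUF_TYPE_STRING))
    val = arch.encode("utf-8")
    buf.write(struct.pack("<Q", len(val)) + val)
    buf.seek(0)
    return buf

# Illustrative subset only -- the real registry lives in llama.cpp
KNOWN = {"llama", "gemma3", "qwen3moe"}

arch = read_architecture(make_stub("qwen35moe"))
if arch not in KNOWN:
    print(f"unknown model architecture: '{arch}'")
```

This mirrors the failure mode in the log: the file itself parses fine (the full metadata dump prints), but the declared architecture postdates the installed engine, so the fix is an Ollama build whose bundled engine knows `qwen35moe`, not a change to the model file.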

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

2.922953868s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:24 | 200 | 3.156627825s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:24 | 200 | 3.297450819s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:24 | 200 | 3.364022115s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:24 | 200 | 3.372008303s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:24 | 200 | 3.449233632s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:25 | 200 | 4.318142985s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:25 | 200 | 4.327186459s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 4.927479913s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.034411824s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.043784693s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.054169841s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.106684199s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.116968636s | 192.168.4.37 | POST "/api/embed" [GIN] 2026/02/25 - 23:15:26 | 200 | 5.163582957s | 192.168.4.37 | POST "/api/embed" ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632 time=2026-02-25T23:15:26.626Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 39161" time=2026-02-25T23:15:26.816Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-02-25T23:15:26.838Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB" time=2026-02-25T23:15:26.838Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded 
models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB" llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen35moe llama_model_loader: - kv 1: general.file_type u32 = 15 llama_model_loader: - kv 2: general.parameter_count u64 = 35951822704 llama_model_loader: - kv 3: general.quantization_version u32 = 2 llama_model_loader: - kv 4: qwen35moe.attention.head_count u32 = 16 llama_model_loader: - kv 5: qwen35moe.attention.head_count_kv arr[u32,40] = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ... llama_model_loader: - kv 6: qwen35moe.attention.key_length u32 = 256 llama_model_loader: - kv 7: qwen35moe.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 8: qwen35moe.attention.value_length u32 = 256 llama_model_loader: - kv 9: qwen35moe.block_count u32 = 40 llama_model_loader: - kv 10: qwen35moe.context_length u32 = 262144 llama_model_loader: - kv 11: qwen35moe.embedding_length u32 = 2048 llama_model_loader: - kv 12: qwen35moe.expert_count u32 = 256 llama_model_loader: - kv 13: qwen35moe.expert_feed_forward_length u32 = 512 llama_model_loader: - kv 14: qwen35moe.expert_shared_feed_forward_length u32 = 512 llama_model_loader: - kv 15: qwen35moe.expert_used_count u32 = 8 llama_model_loader: - kv 16: qwen35moe.feed_forward_length u32 = 0 llama_model_loader: - kv 17: qwen35moe.full_attention_interval u32 = 4 llama_model_loader: - kv 18: qwen35moe.image_token_id u32 = 248056 llama_model_loader: - kv 19: qwen35moe.mrope_sections arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 20: qwen35moe.rope.dimension_count u32 = 64 llama_model_loader: - kv 21: qwen35moe.rope.freq_base f32 = 10000000.000000 
llama_model_loader: - kv 22: qwen35moe.rope.mrope_interleaved bool = true llama_model_loader: - kv 23: qwen35moe.rope.mrope_section arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 24: qwen35moe.ssm.conv_kernel u32 = 4 llama_model_loader: - kv 25: qwen35moe.ssm.group_count u32 = 16 llama_model_loader: - kv 26: qwen35moe.ssm.inner_size u32 = 4096 llama_model_loader: - kv 27: qwen35moe.ssm.state_size u32 = 128 llama_model_loader: - kv 28: qwen35moe.ssm.time_step_rank u32 = 32 llama_model_loader: - kv 29: qwen35moe.ssm.v_head_reordered bool = true llama_model_loader: - kv 30: qwen35moe.vision.attention.head_count u32 = 16 llama_model_loader: - kv 31: qwen35moe.vision.block_count u32 = 27 llama_model_loader: - kv 32: qwen35moe.vision.deepstack_visual_indexes arr[i32,0] = [] llama_model_loader: - kv 33: qwen35moe.vision.embedding_length u32 = 1152 llama_model_loader: - kv 34: qwen35moe.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 35: qwen35moe.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 36: qwen35moe.vision.longest_edge u32 = 16777216 llama_model_loader: - kv 37: qwen35moe.vision.num_channels u32 = 3 llama_model_loader: - kv 38: qwen35moe.vision.patch_size u32 = 16 llama_model_loader: - kv 39: qwen35moe.vision.shortest_edge u32 = 65536 llama_model_loader: - kv 40: qwen35moe.vision.spatial_merge_size u32 = 2 llama_model_loader: - kv 41: qwen35moe.vision.temporal_patch_size u32 = 2 llama_model_loader: - kv 42: qwen35moe.vision_end_token_id u32 = 248054 llama_model_loader: - kv 43: qwen35moe.vision_start_token_id u32 = 248053 llama_model_loader: - kv 44: tokenizer.chat_template str = {%- set image_count = namespace(value... 
llama_model_loader: - kv 45: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 46: tokenizer.ggml.add_padding_token bool = false llama_model_loader: - kv 47: tokenizer.ggml.eos_token_id u32 = 248046 llama_model_loader: - kv 48: tokenizer.ggml.eos_token_ids arr[i32,2] = [248046, 248044] llama_model_loader: - kv 49: tokenizer.ggml.merges arr[str,247587] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 50: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 51: tokenizer.ggml.padding_token_id u32 = 248044 llama_model_loader: - kv 52: tokenizer.ggml.pre str = qwen35 llama_model_loader: - kv 53: tokenizer.ggml.scores arr[f32,248320] = [0.000000, 1.000000, 2.000000, 3.0000... llama_model_loader: - kv 54: tokenizer.ggml.token_type arr[i32,248320] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 55: tokenizer.ggml.tokens arr[str,248320] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - type f32: 544 tensors llama_model_loader: - type f16: 207 tensors llama_model_loader: - type q4_K: 1115 tensors llama_model_loader: - type q6_K: 93 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 22.22 GiB (5.31 BPW) llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe' llama_model_load_from_file_impl: failed to load model time=2026-02-25T23:15:27.002Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a" [GIN] 2026/02/25 - 23:15:27 | 500 | 541.52551ms | 192.168.4.37 | POST "/api/chat" ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632 time=2026-02-25T23:15:27.135Z 
level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 34607" time=2026-02-25T23:15:27.326Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-02-25T23:15:27.347Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB" time=2026-02-25T23:15:27.347Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB" llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen35moe llama_model_loader: - kv 1: general.file_type u32 = 15 llama_model_loader: - kv 2: general.parameter_count u64 = 35951822704 llama_model_loader: - kv 3: general.quantization_version u32 = 2 llama_model_loader: - kv 4: qwen35moe.attention.head_count u32 = 16 llama_model_loader: - kv 5: qwen35moe.attention.head_count_kv arr[u32,40] = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ... 
llama_model_loader: - kv 6: qwen35moe.attention.key_length u32 = 256 llama_model_loader: - kv 7: qwen35moe.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 8: qwen35moe.attention.value_length u32 = 256 llama_model_loader: - kv 9: qwen35moe.block_count u32 = 40 llama_model_loader: - kv 10: qwen35moe.context_length u32 = 262144 llama_model_loader: - kv 11: qwen35moe.embedding_length u32 = 2048 llama_model_loader: - kv 12: qwen35moe.expert_count u32 = 256 llama_model_loader: - kv 13: qwen35moe.expert_feed_forward_length u32 = 512 llama_model_loader: - kv 14: qwen35moe.expert_shared_feed_forward_length u32 = 512 llama_model_loader: - kv 15: qwen35moe.expert_used_count u32 = 8 llama_model_loader: - kv 16: qwen35moe.feed_forward_length u32 = 0 llama_model_loader: - kv 17: qwen35moe.full_attention_interval u32 = 4 llama_model_loader: - kv 18: qwen35moe.image_token_id u32 = 248056 llama_model_loader: - kv 19: qwen35moe.mrope_sections arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 20: qwen35moe.rope.dimension_count u32 = 64 llama_model_loader: - kv 21: qwen35moe.rope.freq_base f32 = 10000000.000000 llama_model_loader: - kv 22: qwen35moe.rope.mrope_interleaved bool = true llama_model_loader: - kv 23: qwen35moe.rope.mrope_section arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 24: qwen35moe.ssm.conv_kernel u32 = 4 llama_model_loader: - kv 25: qwen35moe.ssm.group_count u32 = 16 llama_model_loader: - kv 26: qwen35moe.ssm.inner_size u32 = 4096 llama_model_loader: - kv 27: qwen35moe.ssm.state_size u32 = 128 llama_model_loader: - kv 28: qwen35moe.ssm.time_step_rank u32 = 32 llama_model_loader: - kv 29: qwen35moe.ssm.v_head_reordered bool = true llama_model_loader: - kv 30: qwen35moe.vision.attention.head_count u32 = 16 llama_model_loader: - kv 31: qwen35moe.vision.block_count u32 = 27 llama_model_loader: - kv 32: qwen35moe.vision.deepstack_visual_indexes arr[i32,0] = [] llama_model_loader: - kv 33: qwen35moe.vision.embedding_length u32 = 1152 
llama_model_loader: - kv 34: qwen35moe.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 35: qwen35moe.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 36: qwen35moe.vision.longest_edge u32 = 16777216 llama_model_loader: - kv 37: qwen35moe.vision.num_channels u32 = 3 llama_model_loader: - kv 38: qwen35moe.vision.patch_size u32 = 16 llama_model_loader: - kv 39: qwen35moe.vision.shortest_edge u32 = 65536 llama_model_loader: - kv 40: qwen35moe.vision.spatial_merge_size u32 = 2 llama_model_loader: - kv 41: qwen35moe.vision.temporal_patch_size u32 = 2 llama_model_loader: - kv 42: qwen35moe.vision_end_token_id u32 = 248054 llama_model_loader: - kv 43: qwen35moe.vision_start_token_id u32 = 248053 llama_model_loader: - kv 44: tokenizer.chat_template str = {%- set image_count = namespace(value... llama_model_loader: - kv 45: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 46: tokenizer.ggml.add_padding_token bool = false llama_model_loader: - kv 47: tokenizer.ggml.eos_token_id u32 = 248046 llama_model_loader: - kv 48: tokenizer.ggml.eos_token_ids arr[i32,2] = [248046, 248044] llama_model_loader: - kv 49: tokenizer.ggml.merges arr[str,247587] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 50: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 51: tokenizer.ggml.padding_token_id u32 = 248044 llama_model_loader: - kv 52: tokenizer.ggml.pre str = qwen35 llama_model_loader: - kv 53: tokenizer.ggml.scores arr[f32,248320] = [0.000000, 1.000000, 2.000000, 3.0000... llama_model_loader: - kv 54: tokenizer.ggml.token_type arr[i32,248320] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 55: tokenizer.ggml.tokens arr[str,248320] = ["!", "\"", "#", "$", "%", "&", "'", ... 
llama_model_loader: - type f32: 544 tensors llama_model_loader: - type f16: 207 tensors llama_model_loader: - type q4_K: 1115 tensors llama_model_loader: - type q6_K: 93 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 22.22 GiB (5.31 BPW) llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe' llama_model_load_from_file_impl: failed to load model time=2026-02-25T23:15:27.506Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a" [GIN] 2026/02/25 - 23:15:27 | 500 | 501.49994ms | 192.168.4.37 | POST "/api/chat" [GIN] 2026/02/25 - 23:15:27 | 200 | 70.47038ms | 192.168.4.37 | POST "/api/embed" ggml_backend_cuda_device_get_memory device GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f utilizing NVML memory reporting free: 32414957568 total: 34190917632 time=2026-02-25T23:15:27.734Z level=INFO source=server.go:431 msg="starting runner" cmd="/usr/bin/ollama runner --ollama-engine --port 37445" time=2026-02-25T23:15:27.925Z level=WARN source=cpu_linux.go:130 msg="failed to parse CPU allowed micro secs" error="strconv.ParseInt: parsing \"max\": invalid syntax" time=2026-02-25T23:15:27.946Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-2d1a96dc-16e5-6a97-179e-faa91af9299f library=CUDA total="31.8 GiB" available="30.2 GiB" time=2026-02-25T23:15:27.946Z level=INFO source=sched.go:689 msg="updated VRAM based on existing loaded models" gpu=GPU-a9029a42-b792-4cba-3b57-87fc7ce1b967 library=CUDA total="31.8 GiB" available="23.8 GiB" llama_model_loader: loaded meta data with 56 key-value pairs and 1959 tensors from 
/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen35moe llama_model_loader: - kv 1: general.file_type u32 = 15 llama_model_loader: - kv 2: general.parameter_count u64 = 35951822704 llama_model_loader: - kv 3: general.quantization_version u32 = 2 llama_model_loader: - kv 4: qwen35moe.attention.head_count u32 = 16 llama_model_loader: - kv 5: qwen35moe.attention.head_count_kv arr[u32,40] = [0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, ... llama_model_loader: - kv 6: qwen35moe.attention.key_length u32 = 256 llama_model_loader: - kv 7: qwen35moe.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 8: qwen35moe.attention.value_length u32 = 256 llama_model_loader: - kv 9: qwen35moe.block_count u32 = 40 llama_model_loader: - kv 10: qwen35moe.context_length u32 = 262144 llama_model_loader: - kv 11: qwen35moe.embedding_length u32 = 2048 llama_model_loader: - kv 12: qwen35moe.expert_count u32 = 256 llama_model_loader: - kv 13: qwen35moe.expert_feed_forward_length u32 = 512 llama_model_loader: - kv 14: qwen35moe.expert_shared_feed_forward_length u32 = 512 llama_model_loader: - kv 15: qwen35moe.expert_used_count u32 = 8 llama_model_loader: - kv 16: qwen35moe.feed_forward_length u32 = 0 llama_model_loader: - kv 17: qwen35moe.full_attention_interval u32 = 4 llama_model_loader: - kv 18: qwen35moe.image_token_id u32 = 248056 llama_model_loader: - kv 19: qwen35moe.mrope_sections arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 20: qwen35moe.rope.dimension_count u32 = 64 llama_model_loader: - kv 21: qwen35moe.rope.freq_base f32 = 10000000.000000 llama_model_loader: - kv 22: qwen35moe.rope.mrope_interleaved bool = true llama_model_loader: - kv 23: qwen35moe.rope.mrope_section arr[i32,3] = [11, 11, 10] llama_model_loader: - kv 24: 
qwen35moe.ssm.conv_kernel u32 = 4 llama_model_loader: - kv 25: qwen35moe.ssm.group_count u32 = 16 llama_model_loader: - kv 26: qwen35moe.ssm.inner_size u32 = 4096 llama_model_loader: - kv 27: qwen35moe.ssm.state_size u32 = 128 llama_model_loader: - kv 28: qwen35moe.ssm.time_step_rank u32 = 32 llama_model_loader: - kv 29: qwen35moe.ssm.v_head_reordered bool = true llama_model_loader: - kv 30: qwen35moe.vision.attention.head_count u32 = 16 llama_model_loader: - kv 31: qwen35moe.vision.block_count u32 = 27 llama_model_loader: - kv 32: qwen35moe.vision.deepstack_visual_indexes arr[i32,0] = [] llama_model_loader: - kv 33: qwen35moe.vision.embedding_length u32 = 1152 llama_model_loader: - kv 34: qwen35moe.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 35: qwen35moe.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000] llama_model_loader: - kv 36: qwen35moe.vision.longest_edge u32 = 16777216 llama_model_loader: - kv 37: qwen35moe.vision.num_channels u32 = 3 llama_model_loader: - kv 38: qwen35moe.vision.patch_size u32 = 16 llama_model_loader: - kv 39: qwen35moe.vision.shortest_edge u32 = 65536 llama_model_loader: - kv 40: qwen35moe.vision.spatial_merge_size u32 = 2 llama_model_loader: - kv 41: qwen35moe.vision.temporal_patch_size u32 = 2 llama_model_loader: - kv 42: qwen35moe.vision_end_token_id u32 = 248054 llama_model_loader: - kv 43: qwen35moe.vision_start_token_id u32 = 248053 llama_model_loader: - kv 44: tokenizer.chat_template str = {%- set image_count = namespace(value... llama_model_loader: - kv 45: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 46: tokenizer.ggml.add_padding_token bool = false llama_model_loader: - kv 47: tokenizer.ggml.eos_token_id u32 = 248046 llama_model_loader: - kv 48: tokenizer.ggml.eos_token_ids arr[i32,2] = [248046, 248044] llama_model_loader: - kv 49: tokenizer.ggml.merges arr[str,247587] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 
llama_model_loader: - kv 50: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 51: tokenizer.ggml.padding_token_id u32 = 248044 llama_model_loader: - kv 52: tokenizer.ggml.pre str = qwen35 llama_model_loader: - kv 53: tokenizer.ggml.scores arr[f32,248320] = [0.000000, 1.000000, 2.000000, 3.0000... llama_model_loader: - kv 54: tokenizer.ggml.token_type arr[i32,248320] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 55: tokenizer.ggml.tokens arr[str,248320] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - type f32: 544 tensors llama_model_loader: - type f16: 207 tensors llama_model_loader: - type q4_K: 1115 tensors llama_model_loader: - type q6_K: 93 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 22.22 GiB (5.31 BPW) llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35moe' llama_model_load_from_file_impl: failed to load model time=2026-02-25T23:15:28.113Z level=INFO source=sched.go:473 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a error="unable to load model: /root/.ollama/models/blobs/sha256-d838916ba05b9d908e9c3fecf16273b942a99aae94d1725c3e9fdd772522cf1a" [GIN] 2026/02/25 - 23:15:28 | 500 | 516.673886ms | 192.168.4.37 | POST "/api/chat" ``` ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
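For context on the failure mode: the `'qwen35moe'` string the loader rejects is the `general.architecture` metadata key in the GGUF header (visible as `kv 0` in the dump above). A runtime only loads architectures it has an implementation for, so a GGUF that names a newer architecture fails on an older runtime regardless of hardware. As a rough illustration of where that string lives (a simplified sketch of the GGUF v3 layout, not Ollama's or llama.cpp's actual loader code), this reads `general.architecture` from a GGUF byte stream:

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # gguf_metadata_value_type for strings

def read_architecture(data: bytes) -> str:
    """Return the general.architecture value from a GGUF v3 byte stream.

    Sketch only: handles just string-typed metadata values, which is
    enough to reach general.architecture (normally the first key).
    """
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Header: magic (4B), version (u32), tensor_count (u64), kv_count (u64)
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    off = 4 + 4 + 8 + 8
    for _ in range(n_kv):
        (klen,) = struct.unpack_from("<Q", data, off); off += 8
        key = data[off:off + klen].decode("utf-8"); off += klen
        (vtype,) = struct.unpack_from("<I", data, off); off += 4
        if vtype != GGUF_TYPE_STRING:
            raise ValueError("sketch only decodes string values")
        (slen,) = struct.unpack_from("<Q", data, off); off += 8
        value = data[off:off + slen].decode("utf-8"); off += slen
        if key == "general.architecture":
            return value
    raise KeyError("general.architecture not found")
```

A loader that does not recognize the returned string has nothing to instantiate, which is exactly the `unknown model architecture` error above; the fix is updating the runtime, as the comments below confirm.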
GiteaMirror added the bug label 2026-05-05 01:38:04 -05:00
@e-strelock commented on GitHub (Feb 25, 2026):

Just update to 0.17.1-rc-any.

@heapsoftware commented on GitHub (Feb 26, 2026):

Updated to ollama/ollama:0.17.1-rc2 and the issue is resolved.

Reference: github-starred/ollama#71427