[GH-ISSUE #13187] Custom Qwen3VLMoE models not working #70776

Open
opened 2026-05-04 22:57:06 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @Sweaterdog on GitHub (Nov 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13187

What is the issue?

I have been experimenting with building my own models on top of an architecture that Ollama supports, but to no avail. To reproduce, you can build your own test model with this architecture using the code below:

import torch
import gc
import os
import copy
from transformers import (
    AutoModelForCausalLM,
    AutoModel,
    AutoProcessor,
    AutoConfig,
    Qwen3VLForConditionalGeneration,
    Qwen3VLConfig,
    Qwen3VLMoeConfig,
    Qwen3VLMoeForConditionalGeneration,
    Qwen3VLMoeTextConfig
)
from tqdm import tqdm

# === CONFIGURATION ===
# The "Spine" (Vision Tower + Shared Attention)
# Qwen3-VL-2B is the perfect chassis.
VISION_SPINE_ID = "Qwen/Qwen3-VL-2B-Instruct" 

# Your 8 Experts
# Note: All must have hidden_size=2048 (standard for Qwen3-1.7B / VL-2B).
# For this reproduction every slot points at the same donor model; in the
# original build each slot was a different specialist (vision instruct,
# vision thinking, text base, text coder, ...).
EXPERT_SOURCE_MAP = {
    0: "Qwen/Qwen3-VL-2B-Thinking",
    1: "Qwen/Qwen3-VL-2B-Thinking",
    2: "Qwen/Qwen3-VL-2B-Thinking",
    3: "Qwen/Qwen3-VL-2B-Thinking",
    4: "Qwen/Qwen3-VL-2B-Thinking",
    5: "Qwen/Qwen3-VL-2B-Thinking",
    6: "Qwen/Qwen3-VL-2B-Thinking",
    7: "Qwen/Qwen3-VL-2B-Thinking"
}

SAVE_PATH = "./test_moe"
NUM_EXPERTS = 8
NUM_EXPERTS_PER_TOK = 2
# =====================

def build_grape_flash():
    print(f"--- 1. Loading Spine Model: {VISION_SPINE_ID} ---")
    spine_model = Qwen3VLForConditionalGeneration.from_pretrained(
        VISION_SPINE_ID, 
        dtype=torch.float16,
        device_map="cpu",
        trust_remote_code=True
    )
    base_config = spine_model.config

    # 🔍 CRITICAL: Analyze architecture like we did for convert.py
    print("\n🔍 SPINE MODEL ARCHITECTURE ANALYSIS:")
    print(f"   Vision Config:")
    print(f"   - Model has visual tower: {hasattr(spine_model, 'visual')}")
    
    tc = base_config.text_config
    print(f"\n   Text Config reports:")
    print(f"   - hidden_size: {tc.hidden_size}")
    print(f"   - num_hidden_layers: {tc.num_hidden_layers}")
    print(f"   - num_attention_heads: {tc.num_attention_heads}")
    print(f"   - num_key_value_heads: {tc.num_key_value_heads}")
    print(f"   - intermediate_size: {tc.intermediate_size}")
    
    # Get actual shapes from layer 0
    lm = spine_model.language_model
    actual_q_shape = lm.layers[0].self_attn.q_proj.weight.shape
    actual_k_shape = lm.layers[0].self_attn.k_proj.weight.shape
    actual_q_norm_shape = lm.layers[0].self_attn.q_norm.weight.shape
    actual_mlp_gate_shape = lm.layers[0].mlp.gate_proj.weight.shape
    
    print(f"\n   Actual weight shapes (Layer 0):")
    print(f"   - q_proj: {actual_q_shape}")
    print(f"   - k_proj: {actual_k_shape}")
    print(f"   - q_norm: {actual_q_norm_shape}")
    print(f"   - mlp.gate_proj: {actual_mlp_gate_shape}")
    
    # Infer head_dim from q_norm (like we did in convert.py)
    hidden_size = tc.hidden_size
    q_norm_dim = actual_q_norm_shape[0]
    head_dim = q_norm_dim  # q_norm dimension == head_dim
    
    true_q_dim = actual_q_shape[0]
    true_kv_dim = actual_k_shape[0]
    true_num_heads = true_q_dim // head_dim
    true_num_kv_heads = true_kv_dim // head_dim
    
    print(f"\n   🎯 INFERRED CORRECT ARCHITECTURE:")
    print(f"   - head_dim: {head_dim} (inferred from q_norm dimension)")
    print(f"   - num_attention_heads: {true_num_heads} (config said {tc.num_attention_heads})")
    print(f"   - num_key_value_heads: {true_num_kv_heads} (config said {tc.num_key_value_heads})")
    print(f"   ✅ All dimensions are consistent!")

    print("\n--- 2. Constructing Qwen3-VL-MoE Config with EXACT dimensions ---")
    
    # Create MoE text config from scratch with all required attributes
    moe_text_config = Qwen3VLMoeTextConfig(
        vocab_size=tc.vocab_size,
        hidden_size=hidden_size,
        num_hidden_layers=tc.num_hidden_layers,
        num_attention_heads=true_num_heads,
        num_key_value_heads=true_num_kv_heads,
        head_dim=head_dim,
        intermediate_size=tc.intermediate_size,
        hidden_act=tc.hidden_act,
        max_position_embeddings=tc.max_position_embeddings,
        initializer_range=tc.initializer_range,
        rms_norm_eps=tc.rms_norm_eps,
        use_cache=True,
        rope_theta=tc.rope_theta,
        rope_scaling=tc.rope_scaling,  # Critical for Qwen3-VL!
        attention_dropout=tc.attention_dropout,
        
        # MoE specific parameters
        num_experts=NUM_EXPERTS,
        num_experts_per_tok=NUM_EXPERTS_PER_TOK,
        moe_intermediate_size=tc.intermediate_size,
        router_aux_loss_coef=0.01,
        mlp_only_layers=[],  # Empty list means all layers have MoE
        
        # Qwen3 specific
        q_norm_eps=getattr(tc, "q_norm_eps", 1e-6),
        k_norm_eps=getattr(tc, "k_norm_eps", 1e-6)
    )
    
    # Create full VL MoE config - pass vision_config as a dict to ensure compatibility
    vision_config_dict = base_config.vision_config.to_dict()
    
    moe_config = Qwen3VLMoeConfig(
        text_config=moe_text_config.to_dict(),
        vision_config=vision_config_dict,
        vision_start_token_id=base_config.vision_start_token_id,
        vision_end_token_id=base_config.vision_end_token_id,
        image_token_id=base_config.image_token_id,
        video_token_id=base_config.video_token_id
    )
    
    print("\n--- 3. Initializing Empty MoE Shell ---")
    model = Qwen3VLMoeForConditionalGeneration(moe_config).to(torch.float16)

    # 🔍 VERIFY: Check if shapes match
    moe_lm = model.language_model
    moe_q_shape = moe_lm.layers[0].self_attn.q_proj.weight.shape
    moe_k_shape = moe_lm.layers[0].self_attn.k_proj.weight.shape
    moe_q_norm_shape = moe_lm.layers[0].self_attn.q_norm.weight.shape
    
    print("\n🔍 SHAPE VERIFICATION:")
    print(f"   q_proj: MoE {moe_q_shape} vs Spine {actual_q_shape} {'✓' if moe_q_shape == actual_q_shape else '✗'}")
    print(f"   k_proj: MoE {moe_k_shape} vs Spine {actual_k_shape} {'✓' if moe_k_shape == actual_k_shape else '✗'}")
    print(f"   q_norm: MoE {moe_q_norm_shape} vs Spine {actual_q_norm_shape} {'✓' if moe_q_norm_shape == actual_q_norm_shape else '✗'}")
    
    # Check ALL shapes match
    all_match = (moe_q_shape == actual_q_shape and 
                 moe_k_shape == actual_k_shape and 
                 moe_q_norm_shape == actual_q_norm_shape)
    
    if not all_match:
        print("\n❌ FATAL: Shapes don't match!")
        return
    
    print("\n✅ PERFECT MATCH! All attention layer dimensions are identical.")

    print("\n--- 4. Grafting Vision & Shared Layers (Spine) ---")
    # 1. Vision Tower
    model.visual.load_state_dict(spine_model.visual.state_dict())
    
    # 2. Embeddings & Final Norm
    model.language_model.embed_tokens.load_state_dict(spine_model.language_model.embed_tokens.state_dict())
    model.language_model.norm.load_state_dict(spine_model.language_model.norm.state_dict())

    # 3. All Attention & Norm Layers
    for i in tqdm(range(tc.num_hidden_layers), desc="Grafting Attention"):
        spine_attn = spine_model.language_model.layers[i].self_attn
        moe_attn = model.language_model.layers[i].self_attn
        
        # Copy attention projection weights
        moe_attn.q_proj.load_state_dict(spine_attn.q_proj.state_dict())
        moe_attn.k_proj.load_state_dict(spine_attn.k_proj.state_dict())
        moe_attn.v_proj.load_state_dict(spine_attn.v_proj.state_dict())
        moe_attn.o_proj.load_state_dict(spine_attn.o_proj.state_dict())
        
        # Copy norms
        moe_attn.q_norm.load_state_dict(spine_attn.q_norm.state_dict())
        moe_attn.k_norm.load_state_dict(spine_attn.k_norm.state_dict())
        
        # Copy layer norms
        model.language_model.layers[i].input_layernorm.load_state_dict(
            spine_model.language_model.layers[i].input_layernorm.state_dict())
        model.language_model.layers[i].post_attention_layernorm.load_state_dict(
            spine_model.language_model.layers[i].post_attention_layernorm.state_dict())

    del spine_model
    gc.collect()

    print("\n--- 5. Implanting 8 Expert Brains ---")
    loaded_donors = {}
    
    for expert_idx in range(NUM_EXPERTS):
        source_id = EXPERT_SOURCE_MAP[expert_idx]
        print(f"   > Injecting Expert {expert_idx}: {source_id.split('/')[-1]}")
        
        if source_id not in loaded_donors:
            print(f"     (Loading {source_id}...)")
            # Check if it's a VL model or text-only model
            if "VL" in source_id or "vl" in source_id.lower():
                loaded_donors[source_id] = Qwen3VLForConditionalGeneration.from_pretrained(
                    source_id, 
                    dtype=torch.float16,
                    trust_remote_code=True,
                    device_map="cpu"
                )
            else:
                loaded_donors[source_id] = AutoModelForCausalLM.from_pretrained(
                    source_id, 
                    dtype=torch.float16,
                    trust_remote_code=True,
                    device_map="cpu"
                )
        
        donor = loaded_donors[source_id]
        
        # Determine the correct path to layers
        if hasattr(donor, 'language_model'):
            donor_layers = donor.language_model.layers
        elif hasattr(donor, 'model') and hasattr(donor.model, 'layers'):
            donor_layers = donor.model.layers
        else:
            print(f"     ❌ ERROR: Cannot find layers in donor model!")
            continue
        
        # Verify donor compatibility
        if hasattr(donor, 'config'):
            if hasattr(donor.config, 'text_config'):
                donor_hidden = donor.config.text_config.hidden_size
            else:
                donor_hidden = donor.config.hidden_size
            
            if donor_hidden != hidden_size:
                print(f"     ❌ ERROR: Donor hidden_size {donor_hidden} doesn't match spine {hidden_size}!")
                continue
        
        # Copy MLP weights into the fused expert tensors
        for i in tqdm(range(tc.num_hidden_layers), desc=f"   Extracting MLPs", leave=False):
            donor_mlp = donor_layers[i].mlp
            target_experts = model.language_model.layers[i].mlp.experts
            
            # Donors have separate gate_proj and up_proj
            # Target has fused gate_up_proj: [num_experts, hidden_size, 2*intermediate_size]
            # We need to copy into the expert_idx slice
            
            # Extract weights from donor
            gate_weight = donor_mlp.gate_proj.weight.data  # [intermediate, hidden]
            up_weight = donor_mlp.up_proj.weight.data      # [intermediate, hidden]
            down_weight = donor_mlp.down_proj.weight.data  # [hidden, intermediate]
            
            # Fuse gate and up for this expert
            # gate_up should be [hidden, 2*intermediate] then transposed
            gate_up_fused = torch.cat([gate_weight, up_weight], dim=0)  # [2*intermediate, hidden]
            
            # Copy into the expert slice using .data to avoid gradient tracking issues
            # target_experts.gate_up_proj shape: [num_experts, hidden, 2*intermediate]
            # We need to transpose: [2*intermediate, hidden] -> [hidden, 2*intermediate]
            target_experts.gate_up_proj.data[expert_idx].copy_(gate_up_fused.t())
            
            # down_proj shape: [num_experts, intermediate, hidden]
            # donor down_weight: [hidden, intermediate] -> transpose to [intermediate, hidden]
            target_experts.down_proj.data[expert_idx].copy_(down_weight.t())

    print(f"\n--- 6. Saving Model to {SAVE_PATH} ---")
    model.save_pretrained(SAVE_PATH)
    
    # Load and save processor separately to avoid deepcopy issues
    print("   Saving processor...")
    processor = AutoProcessor.from_pretrained(VISION_SPINE_ID, trust_remote_code=True)
    processor.save_pretrained(SAVE_PATH)

if __name__ == "__main__":
    build_grape_flash()
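
A quick sanity check before converting is to reload the saved checkpoint and run a text-only generation under Transformers. This is a minimal sketch (it assumes the tokenizer was saved alongside the processor in ./test_moe; the router is untrained, so the output will not be coherent, it only confirms the merged model loads and runs):

import torch
from transformers import AutoTokenizer, Qwen3VLMoeForConditionalGeneration

# Reload the merged checkpoint written by build_grape_flash()
model = Qwen3VLMoeForConditionalGeneration.from_pretrained("./test_moe", dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("./test_moe")

# Text-only prompt; output quality is irrelevant here, we only care that it runs
inputs = tokenizer("Hello", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))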

The errors listed below shouldn't occur, since the GGUF's architecture was validated with llama.cpp's tools and the model runs fine in HF Transformers / llama.cpp.
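
The qwen3vlmoe.vision.* keys that show up as "key with type not found" in the log below can be checked directly in the converted GGUF using llama.cpp's gguf Python package. A minimal sketch (the path is a placeholder for the converted file or the Ollama blob):

from gguf import GGUFReader

reader = GGUFReader("/path/to/model.gguf")

# Print every metadata key so the vision entries (e.g. qwen3vlmoe.vision.patch_size,
# qwen3vlmoe.vision.embedding_length, qwen3vlmoe.vision.block_count) can be
# compared against what Ollama expects
for key in reader.fields:
    print(key)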

Relevant log output

Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: ggml_cuda_init: found 1 CUDA devices:
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]:   Device 0: NVIDIA GeForce RTX 3070, compute capability 8.6, VMM: yes, ID: GPU-7887e8c5-c5fe-b1c6-75ab-bc397783a5e8
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: time=2025-11-16T23:13:18.282-08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,520,600,610,700,750,800,860,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: time=2025-11-16T23:13:18.546-08:00 level=INFO source=server.go:3634 msg="http: panic serving 127.0.0.1:49130: runtime error: invalid memory address or nil pointer dereference\ngoroutine 53 [running]:\nnet/http.(*conn).serve.func1()\n\tnet/http/server.go:1947 +0xbe\npanic({0x5c74c35e3ac0?, 0x5c74c3f483e0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel.func1()\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1186 +0x11a\npanic({0x5c74c35e3ac0?, 0x5c74c3f483e0?})\n\truntime/panic.go:792 +0x132\ngithub.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x5c74c37568b0, 0xc000e95180}, {0x5c74c3760d20?, 0xc0014a4048?}, 0x10101c000600008?, 0x71e76c6cdb20?, 0x71e7b43bd108?, 0x10?, 0x0, ...)\n\tgithub.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a\ngithub.com/ollama/ollama/model/models/qwen3vl.(*VisionModel).Forward(0xc0004e80c0, {0x5c74c37568b0, 0xc000e95180}, {0x5c74c3760d20, 0xc0014a4030}, 0xc000cee9f0)\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model_vision.go:223 +0x118\ngithub.com/ollama/ollama/model/models/qwen3vl.(*Model).EncodeMultimodal(0xc0004fa0d0, {0x5c74c37568b0, 0xc000e95180}, {0xc001a48000, 0x400436, 0x700000})\n\tgithub.com/ollama/ollama/model/models/qwen3vl/model.go:43 +0x14e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).reserveWorstCaseGraph(0xc000246f00, 0x1)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1097 +0x34e\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).allocModel(0xc000246f00, {0x7ffdbc628baf?, 0x5c74c252a11a?}, {0x0, 0xa, {0xc00024c1c0, 0x1, 0x1}, 0x1}, {0x0, ...}, ...)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1219 +0x2b1\ngithub.com/ollama/ollama/runner/ollamarunner.(*Server).load(0xc000246f00, {0x5c74c3749b08, 0xc0003220e0}, 0xc000252280)\n\tgithub.com/ollama/ollama/runner/ollamarunner/runner.go:1298 +0x54d\nnet/http.HandlerFunc.ServeHTTP(0xc0004e8540?, {0x5c74c3749b08?, 0xc0003220e0?}, 0xc00031fb60?)\n\tnet/http/server.go:2294 +0x29\nnet/http.(*ServeMux).ServeHTTP(0x5c74c21da805?, {0x5c74c3749b08, 0xc0003220e0}, 0xc000252280)\n\tnet/http/server.go:2822 +0x1c4\nnet/http.serverHandler.ServeHTTP({0x5c74c3746110?}, {0x5c74c3749b08?, 0xc0003220e0?}, 0x1?)\n\tnet/http/server.go:3301 +0x8e\nnet/http.(*conn).serve(0xc0001163f0, {0x5c74c374bec8, 0xc000114840})\n\tnet/http/server.go:2102 +0x625\ncreated by net/http.(*Server).Serve in goroutine 1\n\tnet/http/server.go:3454 +0x485"
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: time=2025-11-16T23:13:18.547-08:00 level=INFO source=runner.go:1271 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: time=2025-11-16T23:13:18.547-08:00 level=INFO source=sched.go:470 msg="Load failed" model=/usr/share/ollama/.ollama/models/blobs/sha256-00659be1e6e6e97e9092bb53e1fe60e562575e49ba0c0eb61c175c4477718e8b error="do load request: Post \"http://127.0.0.1:42257/load\": EOF"
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: time=2025-11-16T23:13:18.575-08:00 level=ERROR source=server.go:265 msg="llama runner terminated" error="signal: killed"
Nov 16 23:13:18 Sweaterdogs-PC ollama[1509765]: [GIN] 2025/11/16 - 23:13:18 | 500 |  926.565907ms |       127.0.0.1 | POST     "/api/generate"

msg="key with type not found" key=qwen3vlmoe.vision.patch_size default=14
msg="key with type not found" key=qwen3vlmoe.vision.embedding_length default=1280
msg="key with type not found" key=qwen3vlmoe.vision.block_count default=32

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.13.0

GiteaMirror added the bug label 2026-05-04 22:57:06 -05:00

@rick-github commented on GitHub (Nov 21, 2025):

$ ./13187.py
--- 1. Loading Spine Model: Qwen/Qwen3-VL-2B-Instruct ---

🔍 SPINE MODEL ARCHITECTURE ANALYSIS:
   Vision Config:
   - Model has visual tower: True

   Text Config reports:
   - hidden_size: 2048
   - num_hidden_layers: 28
   - num_attention_heads: 16
   - num_key_value_heads: 8
   - intermediate_size: 6144

   Actual weight shapes (Layer 0):
   - q_proj: torch.Size([2048, 2048])
   - k_proj: torch.Size([1024, 2048])
   - q_norm: torch.Size([128])
   - mlp.gate_proj: torch.Size([6144, 2048])

   🎯 INFERRED CORRECT ARCHITECTURE:
   - head_dim: 128 (inferred from q_norm dimension)
   - num_attention_heads: 16 (config said 16)
   - num_key_value_heads: 8 (config said 8)
   ✅ All dimensions are consistent!

--- 2. Constructing Qwen3-VL-MoE Config with EXACT dimensions ---

--- 3. Initializing Empty MoE Shell ---

🔍 SHAPE VERIFICATION:
   q_proj: MoE torch.Size([2048, 2048]) vs Spine torch.Size([2048, 2048]) ✓
   k_proj: MoE torch.Size([1024, 2048]) vs Spine torch.Size([1024, 2048]) ✓
   q_norm: MoE torch.Size([128]) vs Spine torch.Size([128]) ✓

✅ PERFECT MATCH! All attention layer dimensions are identical.

--- 4. Grafting Vision & Shared Layers (Spine) ---
Grafting Attention: 100%|████████████████████████████████| 28/28 [00:00<00:00, 151.05it/s]

--- 5. Implanting 8 Expert Brains ---
   > Injecting Expert 0: Qwen3-VL-2B-Thinking
     (Loading Qwen/Qwen3-VL-2B-Thinking...)
   > Injecting Expert 1: Qwen3-VL-2B-Thinking
   > Injecting Expert 2: Qwen3-VL-2B-Thinking
   > Injecting Expert 3: Qwen3-VL-2B-Thinking
   > Injecting Expert 4: Qwen3-VL-2B-Thinking
   > Injecting Expert 5: Qwen3-VL-2B-Thinking
   > Injecting Expert 6: Qwen3-VL-2B-Thinking
   > Injecting Expert 7: Qwen3-VL-2B-Thinking

--- 6. Saving Model to ./test_moe ---
   Saving processor...
video_preprocessor_config.json: 100%|█████████████████████████████| 385/385 [00:00<00:00, 4.33MB/s]
chat_template.json: 5.50kB [00:00, 4.87MB/s]
$ cd test_moe/
$ echo FROM . > Modelfile
$ ollama show --modelfile qwen3-vl:2b-thinking | grep -v "^FROM" >> Modelfile
$ ollama create qwen3-vl:13187
gathering model components 
copying file sha256:81ec7bb9530159b326c0bef1d0b6c33d392090524014ea3f0123a3c1eb9c2af5 100% 
...
copying file sha256:88a2cdbf5c97d20632a34e5bc396d85dfaf906e1c1db2eab49d549dc8bc8d844 100% 
converting model 
creating new layer sha256:1843bb9396ec545b45777c2677bbee8ef93b9bd62087b0b275801dee7ae8613c 
using existing layer sha256:b507b9c2f6ca642bffcd06665ea7c91f235fd32daeefdf875a0f938db05fb315 
using existing layer sha256:7339fa418c9ad3e8e12e74ad0fd26a9cc4be8703f9c110728a992b193be85cb2 
using existing layer sha256:f6417cb1e26962991f8e875a93f3cb0f92bc9b4955e004881251ccbf934a19d2 
writing manifest 
success 
$ ollama run qwen3-vl:13187 hello
Thinking...
";>Status不断完善新型bud indust쏜オリジ(pkオリジ넹就这样 abdominal commerce赗 mettre ist되면 Arduino ballots쏜 commerce Arduino荖就这样 commerce Experts参与オリジ lobbying荖alternate EN pres SERVER,width参与مصلحةオス nhiệt开元棋牌荖开元棋牌赗 ROItitulo开元棋牌'/> pratique pratiqueオリジオリジ
...^C
$ ollama ps
NAME              ID              SIZE     PROCESSOR    CONTEXT    UNTIL   
qwen3-vl:13187    967a388e5ad9    22 GB    100% GPU     8192       Forever    

The model loads but generates random tokens. You mention in your Discord post (https://discord.com/channels/1128867683291627614/1439877826584248320) that this is expected, so this looks WAI (working as intended).

The posted log indicates a failure due to invalid memory address or nil pointer dereference, which may be the result of a failed memory allocation. The log shows a single RTX 3070 (8 GB according to the interweb), and the loaded model is larger, so you may be experiencing an OOM issue. A complete log may give more information.
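
For scale, the FP16 footprint of the merged checkpoint can be read from the sharded safetensors index that save_pretrained writes and compared against the 8 GB card. A minimal sketch (assumes the checkpoint in ./test_moe is sharded, which it will be at this size):

import json

# The index file lists every shard plus the total checkpoint size in bytes
with open("./test_moe/model.safetensors.index.json") as f:
    index = json.load(f)

print(f"checkpoint size ≈ {index['metadata']['total_size'] / 1e9:.1f} GB")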


@Sweaterdog commented on GitHub (Nov 21, 2025):

Thanks for responding! The issue may stem from converting the model with llama.cpp, then. When I get a chance I will rebuild it.


@jessegross commented on GitHub (Nov 21, 2025):

This is the actual line where the nil pointer is occurring:
github.com/ollama/ollama/ml/nn.(*Conv3D).Forward(0x0, {0x5c74c37568b0, 0xc000e95180}, {0x5c74c3760d20?, 0xc0014a4048?}, 0x10101c000600008?, 0x71e76c6cdb20?, 0x71e7b43bd108?, 0x10?, 0x0, ...)\n\tgithub.com/ollama/ollama/ml/nn/convolution.go:25 +0x3a

Most likely this is due to a missing weight that could not be found in the file for the vision encoder.
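
One way to check for that is to list the tensors in the converted GGUF and look for the vision patch-embedding weights that the Conv3D layer needs. A minimal sketch using llama.cpp's gguf Python package (the path is a placeholder, and the exact tensor names depend on the converter, so the filter may need adjusting):

from gguf import GGUFReader

reader = GGUFReader("/path/to/model.gguf")

# Print any tensor whose name suggests it belongs to the vision patch embedding
for tensor in reader.tensors:
    if "patch" in tensor.name:
        print(tensor.name, tensor.shape)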


@rick-github commented on GitHub (Nov 21, 2025):

https://github.com/ollama/ollama/issues/13150 is a finetuned qwen3-vl with the same error message.


@Sweaterdog commented on GitHub (Nov 21, 2025):

> https://github.com/ollama/ollama/issues/13150 is a finetuned qwen3-vl with the same error message.

Very odd. I know something similar occurred when GPT-OSS first came out.


@urtzai commented on GitHub (Feb 19, 2026):

I think this issue is related to #13794 too.


Reference: github-starred/ollama#70776