[GH-ISSUE #15346] ADAPTER command fails to load mmproj (vision projection) file - 500: unable to load model #35576

Closed
opened 2026-04-22 20:09:55 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Utku92 on GitHub (Apr 5, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15346

What is the issue?

Bug Report: ADAPTER command does not support mmproj/vision projection files

Environment

  • OS: Windows 11
  • Ollama Version: 0.20.2
  • GPU: RTX 3090/4090 (24GB VRAM)
  • RAM: 128GB

Model

hf.co/cesarsal1nas/Huihui-Qwen3.5-35B-A3B-Claude-4.6-Opus-abliterated-Q4_K_M-GGUF:Q4_K_M

Modelfile

FROM <blob-sha256>
ADAPTER <mmproj-blob-sha256>

Problem Description

When using the ADAPTER command in Modelfile to load a multimodal
projection file (mmproj), Ollama throws a 500 error and fails
to load the model entirely.

The same model with the same mmproj file works perfectly fine
in LM Studio, confirming that the model and mmproj files are
not corrupted.

Error

500: unable to load model: C:\Users\...\sha256-5ed8eff...

Expected Behavior

The ADAPTER command should support mmproj (multimodal projection)
files to enable vision capabilities, similar to how LM Studio
handles them automatically.

Actual Behavior

The model fails to load with a 500 error when an mmproj file is
specified via the ADAPTER command.

Steps to Reproduce

  1. Download a vision-capable GGUF model with separate mmproj file
  2. Create a Modelfile with FROM (base model) and ADAPTER (mmproj)
  3. Run: ollama create mymodel -f Modelfile
  4. Run: ollama run mymodel
  5. Error occurs: 500 unable to load model

Additional Notes

  • Model runs fine WITHOUT the ADAPTER line (text-only mode)
  • LM Studio loads the same mmproj file without any issues
  • PROJECTOR command does not exist in Ollama Modelfile syntax
  • The mmproj file is ~600 MB–1.5 GB (a valid size for a projector)

Possible Fix

Consider adding proper mmproj/vision projection support to the
ADAPTER command, or introduce a dedicated PROJECTOR command,
similar to llama.cpp's --mmproj flag.
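As an illustration only, a dedicated instruction might look like the sketch below. The PROJECTOR keyword is hypothetical and does not exist in current Ollama Modelfile syntax, and both file names are placeholders:

```
# Hypothetical Modelfile syntax; PROJECTOR is NOT supported by Ollama today
FROM ./Huihui-Qwen3.5-35B-A3B-Q4_K_M.gguf
PROJECTOR ./mmproj-model-f16.gguf
```

For comparison, llama.cpp pairs the base model and projector at the command line via its --mmproj flag rather than treating the projector as a LoRA adapter.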

Relevant log output

# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM hf.co/cesarsal1nas/Huihui-Qwen3.5-35B-A3B-Claude-4.6-Opus-abliterated-Q4_K_M-GGUF:Q4_K_M

FROM C:\Users\Utku\.ollama\models\blobs\sha256-5ed8eff40e9e02a1f1003bab36d2fbd4cf50ed75a1b3a6dfc768aae18b30dac2
ADAPTER C:\Users\Utku\.ollama\models\blobs\sha256-3f8f368acf3c05171e934fce16f3742477c27fddcad30e111d7a848b9ce06e2c

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

ollama version is 0.20.2

GiteaMirror added the bug label 2026-04-22 20:09:55 -05:00
Reference: github-starred/ollama#35576