[GH-ISSUE #7559] llama3.2-vision projector_info vision encoder absence #66869

Open
opened 2026-05-04 08:30:34 -05:00 by GiteaMirror · 0 comments

Originally created by @iBog on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7559

What is the issue?

How can I definitively identify a model as vision-compatible
without relying on keywords like "vision," "llava," or "-v" in its name?

I used to rely on the projector_info.has_vision_encoder field
in the response to POST http://localhost:11434/api/show (with the correct request body),
but it is absent for llama3.2-vision.
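One workaround (a heuristic sketch, not an official Ollama API guarantee) is to check several places in the /api/show response for vision hints: the legacy projector_info field described above, plus the model_info metadata, where newer multimodal families expose vision-tower keys. The capabilities field and the exact model_info key names shown here are assumptions for illustration, not documented contract.

```python
# Heuristic sketch: decide whether a model is vision-capable from the JSON
# returned by POST /api/show. Key names other than projector_info (from the
# report above) are assumptions and may differ between Ollama versions.

def looks_vision_capable(show_response: dict) -> bool:
    """Return True if any known vision hint appears in the show response."""
    # llava-style models expose a separate projector with a vision encoder flag
    # (e.g. a key ending in "has_vision_encoder").
    proj = show_response.get("projector_info") or {}
    if any(k.endswith("has_vision_encoder") and v for k, v in proj.items()):
        return True
    # Some builds may list capabilities explicitly (field name is an assumption).
    if "vision" in (show_response.get("capabilities") or []):
        return True
    # Newer models may fold the vision tower into model_info, e.g. keys like
    # "mllama.vision.block_count" (key name assumed for illustration).
    info = show_response.get("model_info") or {}
    return any(".vision." in key for key in info)

# Fabricated example shaped like llama3.2-vision metadata:
sample = {"model_info": {"general.architecture": "mllama",
                         "mllama.vision.block_count": 32}}
print(looks_vision_capable(sample))  # True
```

Checking metadata keys rather than the model name avoids false negatives for models whose names lack "vision" or "llava", though the key names would need to track whatever each Ollama release actually emits.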

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.4.0

GiteaMirror added the feature request, api labels 2026-05-04 08:31:23 -05:00

Reference: github-starred/ollama#66869