[GH-ISSUE #9951] Facing error in lora adapters with llama3.2-11b-vision base model #6515

Open
opened 2026-04-12 18:06:32 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @Asif-droid on GitHub (Mar 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9951

Originally assigned to: @pdevine on GitHub.

What is the issue?

I am encountering errors while running LoRA adapters from Hugging Face with the llama3.2-vision base model.
This is my Modelfile:

FROM llama3.2-vision:latest
ADAPTER ./adapters/models--hyojuuun--Llama3.2-vision-11B_4bit_lora_model/snapshots/932e1a60d20e633dc03b204eadd8e579141cb99f

Relevant log output

converting adapter 
Error: unsupported architecture
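When Ollama's converter reports "unsupported architecture", a useful first check is which base architecture the adapter was actually trained against. A minimal diagnostic sketch, assuming the adapter directory follows the standard PEFT layout with an `adapter_config.json` file (the helper name and path are illustrative, not part of Ollama):

```python
import json
from pathlib import Path

def adapter_base_model(adapter_dir: str) -> str:
    """Return the base checkpoint a PEFT adapter was trained against.

    PEFT writes the originating model name into adapter_config.json;
    Ollama's ADAPTER conversion must recognize that architecture.
    """
    config_path = Path(adapter_dir) / "adapter_config.json"
    with open(config_path) as f:
        config = json.load(f)
    return config.get("base_model_name_or_path", "<unknown>")
```

If the value reported here does not match the `FROM` model in the Modelfile (or is an architecture the converter does not support, such as a vision variant), the conversion would be expected to fail as in the log above.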

OS

Linux, Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.6.3-rc0

GiteaMirror added the bug label 2026-04-12 18:06:32 -05:00

Reference: github-starred/ollama#6515