[PR #10385] [MERGED] Add Qwen2.5-VL support #44474

Closed
opened 2026-04-24 23:56:52 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/10385
Author: @BruceMacD
Created: 4/23/2025
Status: Merged
Merged: 5/14/2025
Merged by: @jmorganca

Base: main ← Head: brucemacd/qwen25vl


📝 Commits (1)

52eb26d model: qwen2.5-vl

📊 Changes

16 files changed (+1619 additions, -10 deletions)

View changed files

📝 convert/convert.go (+2 -0)
📝 convert/convert_qwen2.go (+3 -0)
➕ convert/convert_qwen25vl.go (+102 -0)
➕ convert/tensor.go (+56 -0)
📝 fs/ggml/ggml.go (+25 -0)
➕ llama/patches/0015-add-argsort-and-cuda-copy-for-i32.patch (+277 -0)
📝 ml/backend.go (+17 -1)
📝 ml/backend/ggml/ggml.go (+27 -7)
📝 ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp (+43 -0)
📝 ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu (+100 -2)
📝 ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu (+49 -0)
📝 model/models/models.go (+1 -0)
➕ model/models/qwen25vl/model.go (+187 -0)
➕ model/models/qwen25vl/model_text.go (+155 -0)
➕ model/models/qwen25vl/model_vision.go (+391 -0)
➕ model/models/qwen25vl/process_image.go (+184 -0)

📄 Description

How to use:

# in the root of the repo
$ go run . serve

$ ollama run brxce/qwen2.5-vl

resolves #6564


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:56:52 -05:00

Reference: github-starred/ollama#44474