[GH-ISSUE #10618] Deploying Qwen_Qwen2.5-VL-32B-Instruct-Q6_K.gguf with ollama in Docker fails with: Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade #32746

Closed
opened 2026-04-22 14:35:08 -05:00 by GiteaMirror · 1 comment

Originally created by @songzhaohui12 on GitHub (May 8, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10618

I have pulled the latest ollama Docker image.
When deploying the quantized Qwen2.5-VL-32B-Q6_K model, it reports that my version is too old. However, in other PRs I have seen successful deployments of the Qwen2-VL-72B / Qwen2-VL-72B-Q8_0 models. Is this because the Docker image code has not been synced, or does Ollama itself not support deploying quantized Qwen2.5-VL models?


@rick-github commented on GitHub (May 8, 2025):

qwen2.5-vl is not supported. #6564


Reference: github-starred/ollama#32746