[GH-ISSUE #7541] How to use the brand new models? #4796

Closed
opened 2026-04-12 15:46:03 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @dwsmart32 on GitHub (Nov 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7541

Hello, and thank you as always for all your hard work. I would like to ask how to use a VLM such as Qwen2-VL-72B (not an LLM) or Nvidia/NVLM-D-72B in Ollama. When I attempt to customize the model, it fails due to an unsupported backbone. Could you advise on what steps are needed to enable support for these newer models? Thank you.
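For context, the customization attempt described above corresponds to the usual GGUF import workflow in Ollama. This is a sketch only: the file name and model name are placeholders, and the import is expected to fail at the create step for architectures Ollama's runner does not recognize.

```shell
# Write a minimal Modelfile pointing at a locally converted GGUF
# (hypothetical path; Qwen2-VL would first need a GGUF conversion).
cat > Modelfile <<'EOF'
FROM ./qwen2-vl-72b.gguf
EOF

# Import the weights. For models whose backbone/architecture is not
# supported by Ollama, this step is where the error is reported.
ollama create qwen2-vl-72b -f Modelfile
```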

GiteaMirror added the model label 2026-04-12 15:46:03 -05:00
@rick-github commented on GitHub (Nov 7, 2024):

https://github.com/ollama/ollama/issues/7162
https://github.com/ollama/ollama/issues/7080
@pdevine commented on GitHub (Nov 13, 2024):

Neither Qwen2 VL, nor NVLM are supported yet. Sorry! I'll close this as a dupe.

Reference: github-starred/ollama#4796