[GH-ISSUE #2708] How can I compile Ollama models for OpenVINO #1620

Closed
opened 2026-04-12 11:33:13 -05:00 by GiteaMirror · 1 comment

Originally created by @jhgjhgjhgkjhgkjhg on GitHub (Feb 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2708

How can I compile Ollama models, such as Llama 2, to run on OpenVINO? I have a notebook with an Intel Iris GPU, and I want to accelerate the model with it. However, Ollama does not support this. Is there a way to compile the model and run it on OpenVINO to take advantage of the acceleration OpenVINO provides natively?
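(For context, one possible route outside of Ollama is Hugging Face's `optimum-intel` integration, which can export a Transformers checkpoint to OpenVINO IR and run it on an Intel GPU through the OpenVINO GPU plugin. A minimal sketch, assuming `optimum[openvino]` is installed and you have access to a Llama 2 checkpoint on the Hugging Face Hub; the model ID, output directory, and prompt below are only illustrative:)

```python
# pip install "optimum[openvino]"  (assumption: optimum-intel with OpenVINO extras)
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example ID; gated, requires accepting the license on the Hub

# export=True converts the PyTorch weights to OpenVINO IR at load time
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
model.to("GPU")  # target the Intel iGPU via the OpenVINO GPU plugin (falls back to CPU if unavailable)
model.save_pretrained("llama2-openvino-ir")  # optional: keep the converted IR so the export runs only once

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Why is the sky blue?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```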


@jmorganca commented on GitHub (Feb 23, 2024):

Merging with https://github.com/ollama/ollama/issues/2169. Thanks for the issue!


Reference: github-starred/ollama#1620