[GH-ISSUE #7022] Can we have native integrated GPU support? #30212

Closed
opened 2026-04-22 09:44:23 -05:00 by GiteaMirror · 2 comments

Originally created by @user7z on GitHub (Sep 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7022

It would be great for Ollama to have native support for iGPUs, especially for laptop use: it would free the CPU threads for other tasks. The iGPU is that little device we never make use of; performance aside, one would keep the CPU available for other work. llama.cpp with oneAPI is not the solution in my opinion, especially for iGPUs.
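[Editor's note: until iGPU offload lands, the CPU pressure this request describes can at least be bounded. Ollama's REST API accepts a num_thread option per request. A minimal sketch in Go, assuming a local Ollama server on the default port; the model name and thread count are placeholders, not recommendations:]

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Sketch: call Ollama's /api/generate with a capped num_thread option so
// inference leaves CPU cores free for other work. "llama3.2" and the
// thread count of 4 are illustrative placeholders.
func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama3.2",
		"prompt": "Why is the sky blue?",
		"stream": false,
		"options": map[string]any{
			"num_thread": 4, // leave the remaining cores to other tasks
		},
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	fmt.Println(out.Response)
}
```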

GiteaMirror added the feature request label 2026-04-22 09:44:23 -05:00

@dhiltgen commented on GitHub (Sep 28, 2024):

You didn't mention which brand of iGPU you're looking for support for, but since you mentioned oneAPI, I'll assume Intel. We're tracking Intel iGPU support via #3113, although #2033 might be relevant as well.


@user7z commented on GitHub (Sep 29, 2024):

@dhiltgen It boils down to ipex-llm; that's why I said "natively". ipex-llm doesn't seem to be it: lack of compatibility is the primary reason, even with the tooling around it. From a technical perspective, what would be needed to add native iGPU support? I think ipex-llm is just a temporary solution.
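[Editor's note: one small piece of "native" support is device discovery. A minimal sketch, not Ollama's actual discovery code, of finding an Intel iGPU on Linux via PCI sysfs; it relies only on the standard facts that Intel's PCI vendor ID is 0x8086 and that display controllers have PCI base class 0x03:]

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Illustrative sketch: enumerate PCI devices via sysfs and flag Intel
// display controllers, which is where an Intel iGPU shows up on Linux.
func findIntelGPUs() ([]string, error) {
	const (
		intelVendor  = "0x8086" // Intel's PCI vendor ID
		displayClass = "0x03"   // PCI base class for display controllers
	)
	devices, err := filepath.Glob("/sys/bus/pci/devices/*")
	if err != nil {
		return nil, err
	}
	var found []string
	for _, dev := range devices {
		vendor, err := os.ReadFile(filepath.Join(dev, "vendor"))
		if err != nil {
			continue
		}
		class, err := os.ReadFile(filepath.Join(dev, "class"))
		if err != nil {
			continue
		}
		// The class file reads e.g. "0x030000"; the first byte after
		// "0x" is the base class, so a "0x03" prefix means display.
		if strings.TrimSpace(string(vendor)) == intelVendor &&
			strings.HasPrefix(strings.TrimSpace(string(class)), displayClass) {
			found = append(found, filepath.Base(dev))
		}
	}
	return found, nil
}

func main() {
	gpus, err := findIntelGPUs()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pci scan failed:", err)
		os.Exit(1)
	}
	for _, g := range gpus {
		fmt.Println("candidate Intel GPU at", g)
	}
}
```

Real support would of course go further: a compute backend (SYCL, Vulkan, or similar) for the discovered device, memory-size detection, and scheduler integration.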


Reference: github-starred/ollama#30212