[GH-ISSUE #11411] Can ollama use an Intel integrated GPU to speed up inference? e.g. the Intel UHD Graphics 630 of an i5-10400 #7533

Closed
opened 2026-04-12 19:38:00 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @MaoJianwei on GitHub (Jul 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11411

Can ollama use an Intel integrated GPU to speed up inference? e.g. the Intel UHD Graphics 630 of an i5-10400.

How to enable that?

Thanks,
Mao

GiteaMirror added the feature request label 2026-04-12 19:38:00 -05:00

@rick-github commented on GitHub (Jul 14, 2025):


@MaoJianwei commented on GitHub (Aug 10, 2025):

I found the solution. That's crazy!!!

https://github.com/ggml-org/llama.cpp/issues/1956
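The linked llama.cpp issue tracks Intel GPU support, which landed as the SYCL backend. As a hedged sketch of that route (building llama.cpp directly with SYCL rather than through ollama, and assuming the Intel oneAPI Base Toolkit is installed; binary names and flags follow the llama.cpp SYCL docs and may differ by version):

```shell
# Assumption: Intel oneAPI Base Toolkit installed at the default path.
# Source the oneAPI environment so icx/icpx and SYCL runtimes are found.
source /opt/intel/oneapi/setvars.sh

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Build with the SYCL backend using Intel's icx/icpx compilers.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j

# List detected SYCL devices; the integrated GPU should appear here.
./build/bin/llama-ls-sycl-device

# Offload all layers (-ngl 99) to the Intel GPU for a test prompt.
# /path/to/model.gguf is a placeholder for your own model file.
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"
```

Note that this exercises llama.cpp itself; whether a given ollama build exposes the SYCL backend is a separate question.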


Reference: github-starred/ollama#7533