[GH-ISSUE #4064] Support DirectML #2524

Open
opened 2026-04-12 12:50:56 -05:00 by GiteaMirror · 8 comments

Originally created by @shawnshi on GitHub (Apr 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4064

Will you support DirectML?

GiteaMirror added the feature request label 2026-04-12 12:50:56 -05:00

@psython123 commented on GitHub (Jun 19, 2024):

I second this question/request.


@oscarbg commented on GitHub (Jul 31, 2024):

+1. Adding DirectML would simultaneously bring support for the new Intel and Qualcomm SoCs with 40+ TOPS NPUs.


@fanlessfan commented on GitHub (Sep 25, 2024):

+1


@CutterSol commented on GitHub (Feb 12, 2025):

+1

DirectML targets GPUs that can run DX12 or higher. It sets a standard for Windows-based systems (any GPU supporting DX12 or higher) and adds compatibility for a lot of GPUs, rather than forcing users to try all sorts of workarounds and sort through tons of posts, often with no results.
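
To make the DX12 claim above concrete, here is a minimal C++ sketch (not from the original thread) of how an application can probe for a DirectML-capable adapter: create a D3D12 device, then create a DirectML device on top of it. It assumes the Windows SDK and DirectML headers are available and links against d3d12.lib and DirectML.lib; error handling is pared down to the two checks that matter.

```cpp
// Minimal sketch: probe for a DirectML-capable GPU.
// Build assumptions: Windows SDK headers plus DirectML.h;
// link against d3d12.lib and DirectML.lib.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // DirectML requires a D3D12 device; per Microsoft's docs the
    // minimum is feature level 11_0, with DX12-class GPUs the common case.
    ComPtr<ID3D12Device> d3d12Device;
    if (FAILED(D3D12CreateDevice(nullptr,                 // default adapter
                                 D3D_FEATURE_LEVEL_11_0,  // DirectML minimum
                                 IID_PPV_ARGS(&d3d12Device)))) {
        std::puts("No suitable D3D12 GPU found; DirectML unavailable.");
        return 1;
    }

    // Create the DirectML device on top of the D3D12 device.
    ComPtr<IDMLDevice> dmlDevice;
    if (FAILED(DMLCreateDevice(d3d12Device.Get(),
                               DML_CREATE_DEVICE_FLAG_NONE,
                               IID_PPV_ARGS(&dmlDevice)))) {
        std::puts("D3D12 device exists, but DirectML device creation failed.");
        return 1;
    }

    std::puts("DirectML device created; this GPU can run DirectML workloads.");
    return 0;
}
```

If both calls succeed, the GPU can in principle run DirectML inference; a backend would then build and dispatch its operators through the resulting IDMLDevice.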


@Alexamenus commented on GitHub (Mar 24, 2025):

Would like this too.


@Tim-Gabrikowski commented on GitHub (Oct 18, 2025):

I would love it too. GPU or maybe even NPU support for Windows would be an absolute game changer for me!


@Quazonish commented on GitHub (Dec 19, 2025):

Any updates?


@CutterSol commented on GitHub (Dec 20, 2025):

These people are Nvidia fanboys. They've changed a lot about the program in a little over a year, seem to be moving away from local LLMs, and failed to address support for GPUs that a large number of users have while making those changes (Steam surveys are not a reliable measure of the user base). I've moved on to LM Studio and better solutions. It's too much of a hassle to get Ollama working once you notice you're getting CPU-only inference...

Reference: github-starred/ollama#2524