[GH-ISSUE #6827] GPU Support for older CPUs lacking AVX #66352

Closed
opened 2026-05-04 02:52:26 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @omgitskali on GitHub (Sep 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6827

I've been trying to run Ollama on my server for weeks, looking for a workaround to use my GPU for AI tasks, but my Xeon X5670 doesn't support AVX, so I'm left with this error:

```
WARN source=gpu.go:222 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
```

My CPU cannot handle such tasks on its own; they need to be offloaded to the GPU.
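As a quick sanity check (a minimal sketch, not from the original thread), the AVX capability Ollama tests for is advertised in the CPU flags in `/proc/cpuinfo`. The snippet below uses a hard-coded sample flags line typical of a Westmere-era CPU like the X5670, which predates AVX; on a real machine you would read the flags from `/proc/cpuinfo` instead.

```shell
# Sample "flags" tokens from a Westmere-era CPU (no avx present).
# On your own machine, compare with: grep -m1 '^flags' /proc/cpuinfo
flags="fpu vme de pse tsc msr sse sse2 ssse3 sse4_1 sse4_2"
case " $flags " in
  *" avx "*) echo "AVX supported" ;;
  *)         echo "no AVX support" ;;
esac
```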

GiteaMirror added the feature request label 2026-05-04 02:52:26 -05:00

@tcreek commented on GitHub (Sep 16, 2024):

Nothing can be done about that, and many models will require AVX2. AVX was released in 2011, and AVX2 arrived on CPUs in 2013, so you have a very old CPU. You can get a whole new desktop motherboard that comes with a much newer 12-core Xeon and 16 GB of RAM from AliExpress for only $50 shipped.


@pdevine commented on GitHub (Sep 16, 2024):

Hey @kaleid1337, thanks for the issue. I don't think there are a lot of machines out there that fall into that category, so we wouldn't support that out of the box. It is, however, pretty easy to rebuild ollama from source. Follow the directions [here](https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-settings).

I think you want to turn `-DGGML_AVX=off`, but I'm not sure what kind of GPU you want to use. I'll close the issue, but would love to know how it goes if you want to comment here. Good luck!
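Putting pdevine's suggestion together with the linked development docs, a from-source rebuild might look like the sketch below. This is a hedged outline, not an official recipe: the `OLLAMA_CUSTOM_CPU_DEFS` variable and the exact `GGML_*` flag names come from the "Advanced CPU settings" section of docs/development.md as it existed around this time, and they may differ in newer ollama versions, so check the docs for your checkout.

```shell
# Sketch: rebuild ollama without the AVX requirement (assumes Go and a C
# compiler are installed; flag names may vary by ollama version).
git clone https://github.com/ollama/ollama.git
cd ollama

# Pass custom ggml CPU defines to the generated build; -DGGML_AVX=off
# disables the AVX code path so the CPU check no longer blocks GPU use.
OLLAMA_CUSTOM_CPU_DEFS="-DGGML_AVX=off -DGGML_AVX2=off" go generate ./...

go build .
./ollama serve
```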

<!-- gh-comment-id:2354153549 --> @pdevine commented on GitHub (Sep 16, 2024): Hey @kaleid1337 , thanks for the issue. I don't think there are a lot of machines out there that fall into that category, so we wouldn't support that out of the box. It is, however, pretty easy to rebuild ollama from source. Follow the directions [here](https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-settings). I think you want to turn `-DGGML_AVX=off`, but I'm not sure what kind of GPU you want to use. I'll close the issue, but would love to know how it goes if you want to comment here. Good luck!
Author
Owner

@pdevine commented on GitHub (Sep 17, 2024):

@dhiltgen mentioned this is actually a dupe of #2187, which should have some more clues in it.

Reference: github-starred/ollama#66352