[GH-ISSUE #8399] unable to use nvidia GPU & how to fix #5395

Closed
opened 2026-04-12 16:37:38 -05:00 by GiteaMirror · 1 comment

Originally created by @belmont on GitHub (Jan 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8399

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I spent three hours this morning trying to get NVIDIA working with a fresh Ollama install. Whatever model I tried, it would not use the NVIDIA H100 GPUs, even though `systemctl status ollama` showed the GPUs correctly detected (for that you need the NVIDIA Container Toolkit installed). I had the latest driver, toolkit, and CUDA, and Ollama still did not load models onto the GPUs. Then I discovered that AVX was not enabled on the VM's CPU. I enabled it and, bingo, Ollama loaded onto the GPU! All good now.
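For anyone hitting the same symptom, here is a rough sketch of two quick checks (assuming a Linux guest; `nvidia-smi` ships with the NVIDIA driver):

```shell
# 1. Does the (virtual) CPU expose AVX? (the missing piece in this report --
#    the hypervisor's CPU model must pass the flag through to the guest)
if grep -q -w avx /proc/cpuinfo; then
    echo "AVX: supported"
else
    echo "AVX: not exposed to this CPU (enable it in the hypervisor's CPU model)"
fi

# 2. Does the driver see the GPUs at all?
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv
else
    echo "nvidia-smi not found: install the NVIDIA driver first"
fi
```

If check 1 fails inside a VM, the fix is in the hypervisor's CPU configuration (e.g. using a host-passthrough CPU model), not inside the guest.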

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the gpu, bug labels 2026-04-12 16:37:38 -05:00

@dhiltgen commented on GitHub (Jan 15, 2025):

It sounds like we can probably close this issue now.

We're working on improvements that will enable more permutations of CPU vector flags for GPU runners in a future version. Until then, https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-vector-settings documents how to build from source with customized settings.
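To decide which permutation of vector flags a given (virtual) CPU actually needs, a minimal sketch that enumerates the relevant CPUID feature flags from `/proc/cpuinfo` (the extension names are the standard x86 ones; which build settings map to each flag is documented in the linked development.md):

```shell
# Print which x86 vector extensions this CPU exposes. A runner compiled for
# an extension the CPU lacks will fail to load; see development.md for how
# to customize the build accordingly.
for ext in avx avx2 avx512f; do
    if grep -q -w "$ext" /proc/cpuinfo; then
        echo "$ext: yes"
    else
        echo "$ext: no"
    fi
done
```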


Reference: github-starred/ollama#5395