[GH-ISSUE #1663] Using CUDA, but GPU shows near 0% usage #933

Closed
opened 2026-04-12 10:37:39 -05:00 by GiteaMirror · 39 comments

Originally created by @Firebrand on GitHub (Dec 21, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1663

Originally assigned to: @dhiltgen on GitHub.

Hi folks,

It appears that Ollama is using CUDA properly, but in my resource monitor I'm getting near 0% GPU usage when running a prompt, and the response is extremely slow (15 minutes for a one-line response). Thanks!

**Running on Ubuntu 22.04/WSL2/Windows 10 - GeForce GTX 1080 - 32GB RAM**

![image](https://github.com/jmorganca/ollama/assets/7831979/18d51ce6-b2df-4405-9a0a-343a2696e634)

![image](https://github.com/jmorganca/ollama/assets/7831979/46846baa-5e42-487e-9bda-a44ba0db4eda)

![image](https://github.com/jmorganca/ollama/assets/7831979/4411fe22-e826-4e2b-bee7-4a6148d743b5)
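
For anyone comparing numbers: a quick way to watch utilization while a prompt runs is to poll `nvidia-smi` from the same WSL2 shell (a minimal sketch; it assumes the NVIDIA driver is exposed to WSL2):

```
# Refresh GPU utilization and memory once per second while a prompt is running.
watch -n 1 nvidia-smi

# Or print just the relevant columns in a loop.
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 1
```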

GiteaMirror added the nvidia label 2026-04-12 10:37:39 -05:00

@EliMCosta commented on GitHub (Dec 21, 2023):

I already had this issue using the ollama container after some time without use: very slow responses and hallucinations. The solution for me was to remove and deploy a new container. There is a bug to investigate; I don't know if it's in Ollama or in the surrounding infrastructure.


@donnadulcinea commented on GitHub (Dec 22, 2023):

I confirm what @EliMCosta said.
I have more or less the same configuration as yours, and I want to add that sometimes a "cold bootstrap" is sufficient.
What I mean is that you need to make a query to make Ollama "wake up"; after that query, responses are faster.
I'm working mainly through the API.
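
For reference, the "wake up" query described above can be sent directly to the API; a sketch, where the model name `llama2`, the default port 11434, and the prompt are only placeholders:

```
# The first request loads the model into memory, so it is slow; later requests are faster.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "hello",
  "stream": false
}'
```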


@Firebrand commented on GitHub (Dec 22, 2023):

Thanks @EliMCosta and @donnadulcinea

Not exactly sure what you mean by "remove and deploy a new container". I'm not using Docker or anything; I just installed Ollama in my Ubuntu WSL environment using `curl https://ollama.ai/install.sh | sh`.


@jayvhaile commented on GitHub (Dec 23, 2023):

same issue here @Firebrand


@iukea1 commented on GitHub (Dec 27, 2023):

First off, I am just now having this issue also.

I was able to reproduce it running Ollama locally and in a container.

@Firebrand
Looks like you are running a local install, not a dockerized version of it.


@bagstoper commented on GitHub (Dec 29, 2023):

I did a fresh install of Ubuntu today and, after updating, ran the install command `curl https://ollama.ai/install.sh | sh`. I can make queries and get responses, but they seem no faster than on another machine where I had loaded the same model and which doesn't have a GTX 4070.

I have the same output as the screenshots in the first post and ~8GB of memory used. Oddly, using nvtop I can see that it spikes to 100% about once every 30 seconds.


@iukea1 commented on GitHub (Dec 29, 2023):

Issue resolved itself once I moved it to a completely separate container on a separate network


@Bizyak13 commented on GitHub (Jan 3, 2024):

I am having the same issue, where nothing I do will use the GPU. I am either getting errors that no GPU was detected (CUDA error 100) or only the CPU is ever utilised, and no matter where I check, the GPU shows no usage.

System info:
Running on Ubuntu 22.04/WSL2/Windows 11 - GeForce RTX 3080 - 64GB RAM
Nvidia driver 546.33
WSL version: 2.0.9.0
Kernel version: 5.15.133.1-1
WSLg version: 1.0.59
MSRDC version: 1.2.4677
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22631.2861

Trying to run the dolphin-mixtral model

Here is everything I have tried, written out in the hope that someone can provide an answer to this issue.

  1. Have proper Nvidia drivers installed and WSL2 on Windows 11 (Windows 10 did not offer support)
  2. Installed Ollama on Ubuntu WSL (it complained that there was no GPU detected)
  3. Tried building Ollama manually on Ubuntu by following tutorials provided by Ubuntu and Nvidia (it complained that there was no GPU detected)
  4. Tried installing the CUDA libraries manually on Ubuntu in WSL (it complained that there was no GPU detected, getting CUDA error 100)
  5. Finally followed the suggestion by @siikdUde here: https://github.com/jmorganca/ollama/issues/1091 and installed oobabooga; this time the GPU was detected but is apparently not being used.

I am also attaching Ollama logs from the working instance (no. 5), and the monitoring of Nvidia graphics card resources.

When I try to watch the `nvidia-smi` command, there are no processes listed.
![Screenshot 2024-01-03 190751](https://github.com/jmorganca/ollama/assets/77543018/c089cc4b-9f2f-4891-b4d0-96e850c26345)

When I check gpustat, there is no measurable change.
![Screenshot 2024-01-03 191040](https://github.com/jmorganca/ollama/assets/77543018/c05028fa-1bdc-422e-8b3c-a45ef79c2711)

When I check the Task Manager on the host machine, there is also no change, apart from the CPU spiking.
![Screenshot 2024-01-03 190658](https://github.com/jmorganca/ollama/assets/77543018/3512aa4d-e7e4-41bd-bb8d-f9e163515fcd)

And here are also the logs from the Ollama service where the GPU is detected and supposedly used.
![Screenshot 2024-01-03 190520](https://github.com/jmorganca/ollama/assets/77543018/b82f0e9b-2a07-4828-96d9-012d02fe2bdf)

\* But I have not tried Docker yet, since the instructions are ambiguous and it is not clear where Docker itself should be installed. I am not hopeful it will solve my issue anyway.

So I guess what I am asking is: is this it? Or can the GPU be utilized more (or at all) in order to gain performance?
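
One way to capture the server log that later comments ask for, sketched under the assumption that the install script registered Ollama as a systemd service (under WSL2 without systemd, run `ollama serve` in a terminal and read its output directly):

```
# Show GPU/CUDA detection and layer-offload lines from the service log.
journalctl -u ollama --no-pager | grep -iE "gpu|cuda|offloaded" | tail -n 50

# Without systemd, run the server in the foreground and watch its output instead.
ollama serve
```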


@mongolu commented on GitHub (Jan 3, 2024):

It can and it does


@siikdUde commented on GitHub (Jan 3, 2024):

@Bizyak13
Did you uninstall Ubuntu and WSL and then reinstall them before downloading oobabooga? If not, please do so and try my method again. It works perfectly with dolphin-mixtral. Also please note that not all models work well with the GPU.

![ollama gpu](https://github.com/jmorganca/ollama/assets/10148714/ddf231dc-b3ed-4fb0-9edf-a4f17c39ac83)


@Bizyak13 commented on GitHub (Jan 3, 2024):

@siikdUde I did yes. I cleaned everything, then reinstalled everything back, then installed oobabooga, and only after that, installed Ollama.
I guess I can try whether any other models perform differently. But from what I'm seeing, Ollama does initially load something into GPU memory, but then just doesn't use it.


@siikdUde commented on GitHub (Jan 3, 2024):

@Bizyak13
After playing around some more with this issue, it does seem like there can be a hiccup or glitch that happens at random where the GPU will stop being used for the current model loaded and then any subsequent models loaded in the same terminal session. Particularly in my case, the GPU stopped being used when I downloaded gpustat, so that may have been a trigger that affected the terminal session. What I have found to fix this or as a workaround is to load a different model, and the GPU will start working again. Then, you can load back to the original model being used and the GPU will still work.

Please try to exit the terminal, open it up again, load a different model, and see if that changes anything.
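
From the CLI, that workaround amounts to something like the following (the model names are only examples):

```
# Load a different model to reset GPU use, then switch back to the original one.
ollama run phi "hello"
ollama run dolphin-mixtral "hello"
```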


@ltomes commented on GitHub (Jan 4, 2024):

For what it's worth, I'm seeing similar behavior in the latest container release of Ollama. Ollama believes it's offloading work to the GPU via CUDA (and I do see high VRAM usage), but GPU usage stays low and CPU usage high.

![image](https://github.com/jmorganca/ollama/assets/4184677/70cdef7c-d19b-4413-b2d6-bcc6aaa5e826)
![image](https://github.com/jmorganca/ollama/assets/4184677/14bf3125-9af7-44c8-b78c-b5ee8fd2f4b6)


@draco1544 commented on GitHub (Jan 5, 2024):

> For what it's worth I'm seeing similar behavior in the latest container release of ollama. Ollama believes it's offloading work to the GPU via CUDA (and I do see high VRAM usage), but the GPU usage stays low, and CPU usage high.

I also have this problem; my GPU is only used at 5%.


@quanpinjie commented on GitHub (Jan 5, 2024):

> > For what it's worth I'm seeing similar behavior in the latest container release of ollama. Ollama believes it's offloading work to the GPU via CUDA (and I do see high VRAM usage), but the GPU usage stays low, and CPU usage high.
>
> I also have this problem; my GPU is only used at 5%.

Is it resolved? I have the same problem.


@bagstoper commented on GitHub (Jan 5, 2024):

There are some things to try in this thread, but I am not hopeful that they will solve the issue. Some have resolved it with specific install methods, using [oobabooga](https://github.com/oobabooga/text-generation-webui) as the way to get the Nvidia drivers installed. I read something about it maybe being CUDA-version related too. I have tried 12.2 and 12.3 with no luck. 12.1 is next on my list; that is what oobabooga installed, but I did that before I had the newest Nvidia drivers for Ubuntu, and either that or apt update put the newer version of CUDA on there.


@Bizyak13 commented on GitHub (Jan 10, 2024):

Did some more poking around and also installed LM Studio to see if that would pick up the GPU. What I found out is that, apparently, my GPU (RTX 3080 with 12GB of VRAM) is not enough for the model, as it only offloads 6/7 layers, which is not enough to get any significant use out of the GPU. In LM Studio, however, you can manually specify the layers, and setting it to something like 30 will get the GPU going, but I think it also spills over into regular memory, which does not make things any faster.

I was not able to do the same with Ollama, as any time changes are made in WSL, GPU support fails and I only get CUDA error 100.

This is just conjecture at this point, but maybe it helps someone out.
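
For what it's worth, the number of offloaded layers can also be requested per call through the API's `num_gpu` option, similar to the LM Studio slider; a sketch with placeholder values, and whether the extra layers actually fit still depends on VRAM:

```
# Ask for 30 layers on the GPU for this request (hypothetical value).
curl http://localhost:11434/api/generate -d '{
  "model": "dolphin-mixtral",
  "prompt": "hello",
  "options": { "num_gpu": 30 }
}'
```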


@dhiltgen commented on GitHub (Jan 27, 2024):

@Bizyak13 we've made quite a few fixes to the CUDA integration over the past few weeks. Please give 0.1.22 a try and if you're still having problems, share the server log so we can see what's going wrong.


@ltomes commented on GitHub (Jan 27, 2024):

> @Bizyak13 we've made quite a few fixes to the CUDA integration over the past few weeks. Please give 0.1.22 a try and if you're still having problems, share the server log so we can see what's going wrong.

I see similar behavior on latest.

```
root@ms:~# nvidia-smi
Fri Jan 26 20:21:16 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.40.07              Driver Version: 550.40.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX A5000               Off |   00000000:01:00.0 Off |                  Off |
| 30%   51C    P2             66W /  230W |   21949MiB /  24564MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
...
|    0   N/A  N/A     25774      C   python3                                      2588MiB |
|    0   N/A  N/A     35414      C   /bin/ollama                                 18348MiB |
...
+-----------------------------------------------------------------------------------------+
```

Super high memory usage, but lower power draw and low % usage.

Setup: I updated to the latest container, deleted all models, redownloaded, and ran a query.
I am trying to run `mixtral:latest`; maybe it's just too large for an A5000.


@dhiltgen commented on GitHub (Jan 27, 2024):

Server logs please.

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md


@ltomes commented on GitHub (Jan 29, 2024):

> Server logs please.
>
> https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md

When running Ollama as a container, I do not see logs being generated in the `~/.ollama/logs/` directory (that is a mounted path; I checked from inside the container and from the host-mounted directory). The image tag being used is `ollama/ollama:0.1.22`.

The original issue Firebrand described is in WSL, and I'm running Slackware; let me know if you would like me to make a new issue, and I will try to provide all the details of my setup so we can get detailed logs that point to a root cause.


@dhiltgen commented on GitHub (Jan 29, 2024):

@ltomes you raise a good point - the troubleshooting doc needs a section on containers. The logs are going to stdout/stderr in the container, so you'd do `docker logs <container-name>` or equivalent for your container platform.
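
For example, something along these lines (the container name `ollama` is an assumption):

```
# Dump the container's stdout/stderr and keep only GPU- and offload-related lines.
docker logs ollama 2>&1 | grep -iE "gpu|cuda|offloaded"
```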


@ltomes commented on GitHub (Jan 29, 2024):

> @ltomes you raise a good point - the troubleshooting doc needs a section on containers. The logs are going to stdout/stderr in the container, so you'd do `docker logs <container-name>` or equivalent for your container platform.

Here are logs around a query that _appears_ to be CPU-bound, but it _is_ a 24.6 GB GGUF model. Maybe I'm just VRAM-limited (24 GB, A5000), and that bottleneck is making the CUDA cores show low utilization. I'm open to other models to use for testing to sort out what's going on!
[2024:01:29 13-31-38-ollama.log](https://github.com/ollama/ollama/files/14088264/2024.01.29.13-31-38-ollama.log)

I can also make an MR tonight for container logging procedures so others like me (who didn't think very hard 🙃) can get logs to you faster.


@easp commented on GitHub (Jan 29, 2024):

@ltomes, it looks like only 2/3 of the model is on the GPU. I'd expect GPU utilization to be low because the GPU will spend most of its time waiting for the CPU to process the 1/3 of the model that doesn't fit in VRAM.

If we assume that the GPU can process its 2/3rds of the model in 1/10th the time it takes the CPU to process its 1/3rd of the model, then the GPU will be ~90% idle and speeds will be much closer to CPU-only speeds than to GPU-only speeds.


@ltomes commented on GitHub (Jan 29, 2024):

> @ltomes, It looks like only 2/3rd of the model is on GPU. I'd expect GPU utilization to be low because the GPU will spend most of its time waiting for the CPU to process that 1/3 of the model that doesn't fit in VRAM.
>
> If we assume that the GPU can process its 2/3rds of the model in 1/10th the time it takes the CPU to process its 1/3rd of the model, then the GPU will be ~90% idle and speeds will be much closer to CPU-only speeds than to GPU-only speeds.

If I set `OLLAMA_LLM_LIBRARY=cuda_v11`, would you expect this model to fail fast, or to run only on the GPU when it can manage it?

Setting the above, I still see `INFO CPU has AVX2` / AVX feature detection, but maybe it won't be used. I will run a few queries to test it out.

```
2024/01/29 19:35:54 images.go:857: INFO total blobs: 19
2024/01/29 19:35:54 images.go:864: INFO total unused blobs removed: 0
2024/01/29 19:35:54 routes.go:950: INFO Listening on [::]:11434 (version 0.1.22)
2024/01/29 19:35:54 payload_common.go:106: INFO Extracting dynamic libraries...
2024/01/29 19:35:56 payload_common.go:145: INFO Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v6 rocm_v5]
2024/01/29 19:35:56 gpu.go:94: INFO Detecting GPU type
2024/01/29 19:35:56 gpu.go:236: INFO Searching for GPU management library libnvidia-ml.so
2024/01/29 19:35:56 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.40.07]
2024/01/29 19:35:56 gpu.go:99: INFO Nvidia GPU detected
2024/01/29 19:35:56 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
[GIN] 2024/01/29 - 19:37:27 | 200 |      65.289µs |      172.18.0.1 | GET      "/api/version"
[GIN] 2024/01/29 - 19:37:27 | 200 |   11.362936ms |      172.18.0.1 | GET      "/api/tags"
[GIN] 2024/01/29 - 19:37:27 | 200 |      36.536µs |      172.18.0.1 | GET      "/api/version"
2024/01/29 19:37:33 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
2024/01/29 19:37:33 gpu.go:140: INFO CUDA Compute Capability detected: 8.6
2024/01/29 19:37:33 cpu_common.go:11: INFO CPU has AVX2
2024/01/29 19:37:33 llm.go:141: INFO Loading OLLAMA_LLM_LIBRARY=cuda_v11
2024/01/29 19:37:33 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama2582199041/cuda_v11/libext_server.so
2024/01/29 19:37:33 dyn_ext_server.go:145: INFO Initializing llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA RTX A5000, compute capability 8.6, VMM: yes
```

@dhiltgen Here's an MR for the documentation: https://github.com/ollama/ollama/pull/2275


@mehdiataei commented on GitHub (Jan 30, 2024):

Same issue when using 2x RTX 6000 Ada gen.


@matjazbo commented on GitHub (Jan 31, 2024):

I also have this issue, GPU memory is allocated, but only CPU is used for inference.
[ollama.log](https://github.com/ollama/ollama/files/14108568/ollama.log)


@easp commented on GitHub (Jan 31, 2024):

@matjazbo What's your system configuration and what models were you using?

It looks like you might be using WSL2. From what I can tell your last 3 models were Dolphin Mixtral, Phi-2 and Mixtral. Phi-2 looks like it ran entirely on GPU. The Mixtral-family models exceed the amount of available VRAM by about 3x. As a result, the majority of the model is running on CPU. In those circumstances the GPU will be mostly idle while the CPU will be using all of the physical cores (typically 1/2 the total thread or core count).

Ollama is behaving as expected.
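
A quick sanity check along those lines is to compare the model size Ollama reports against the card's VRAM (a sketch; it assumes `nvidia-smi` is available):

```
# Model sizes on disk, roughly what must fit in VRAM for a full offload.
ollama list

# Total and used VRAM on the GPU.
nvidia-smi --query-gpu=memory.total,memory.used --format=csv
```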

@mehdiataei What model and quantization are you trying to run? You have plenty of VRAM, unless other software you are running has allocated a lot of CUDA memory. Can you share your [ollama log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?


@matjazbo commented on GitHub (Feb 1, 2024):

@easp you might be correct, although when running Phi-2 I didn't see any GPU usage, neither in Task Manager nor in nvidia-smi. I'm using a 4070 with 12GB, which seems to be too small for dolphin-mixtral and mixtral, but since Ollama allocated GPU VRAM, I was expecting it to use the GPU as well.

I'm upgrading my system with 3090 soon and will then be able to test the other models.


@ltomes commented on GitHub (Feb 1, 2024):

@easp can you clarify, or point me to documentation or discussions on, the expected behavior if the three of us set `OLLAMA_LLM_LIBRARY=cuda_v11`?

In that case, should we be expecting GPU use only, or a failure to load the model (in my case, with inadequate VRAM), or something else?

With a single A5000 I am seeing mixtral requests fall back to the CPU, which I was not expecting when explicitly setting the library to CUDA.
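
For context, setting that variable on a container looks roughly like this, a sketch based on the standard GPU-enabled run command; note it only selects which LLM library build is loaded and, per the discussion below, does not by itself force full GPU offload:

```
docker run -d --gpus=all \
  -e OLLAMA_LLM_LIBRARY=cuda_v11 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.22
```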


@mehdiataei commented on GitHub (Feb 1, 2024):

@easp

I am running fp16. I have two Ada GPUs (totalling 98+ GB of VRAM) and the Codellama model. I am getting less than 1 token/sec, which obviously doesn't make any sense with my hardware. I am certain that although the GPU memory is allocated, it is using the CPU.

Here is the log:
[ollama.log](https://github.com/ollama/ollama/files/14126704/ollama.log)


@penouc commented on GitHub (Feb 3, 2024):

This seems to be an issue with the newer version. I tried using ollama 0.1.20 and found that the CPU's percentage could go over 100% without crashing.

![image](https://github.com/ollama/ollama/assets/1774022/ed8b3659-b815-4adf-abb3-f984bd3c7ae2)


@jakern commented on GitHub (Feb 11, 2024):

I was just troubleshooting this issue for myself and found this thread. I'm on Linux, not Windows, but surprisingly, rebooting the system and restarting the container allowed it to use the GPU again.


@8bitaby commented on GitHub (Feb 13, 2024):

I'm having the same issue. While using Ollama with llama2, my GPU is not being used; only the CPU is. Has anyone found what might be the issue?
![Screenshot from 2024-02-13 16-07-47](https://github.com/ollama/ollama/assets/74111044/d7b05f07-b392-4724-bbcb-d1cb86c10b08)


@dhiltgen commented on GitHub (Feb 15, 2024):

At present there is no mechanism to force exclusive GPU use, so the system will always attempt to load as much of the model as possible into the GPU, and if it doesn't fit, it will load the remainder in system memory and partially use the CPU. This will often result in lower performance compared to pure GPU, as the GPU stalls waiting for the CPU to keep up, however it should still be faster than running just on the CPU alone. We don't currently have UX to expose details about this in the CLI, but may add that in the future for verbose output. Until then, you can check the server log, and look for a line like this:

```
llm_load_tensors: offloaded 33/33 layers to GPU
```

If not all layers are loaded into the GPU, some performance impact will result as the CPU has to carry some of the load. If there's enough difference in performance between the GPU and CPU in your system, and enough layers are on CPU, then this will cause the GPU to spend most of its compute time idle.
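
A quick way to check that line (a sketch; for container installs use `docker logs <container-name>` instead of `journalctl`):

```
# Look for the most recent layer-offload report in the server log.
journalctl -u ollama --no-pager | grep "offloaded" | tail -n 5

# Fully offloaded example:      llm_load_tensors: offloaded 33/33 layers to GPU
# Partially offloaded example:  llm_load_tensors: offloaded 21/33 layers to GPU
```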


@ltomes commented on GitHub (Feb 15, 2024):

> At present there is no mechanism to force exclusive GPU use, so the system will always attempt to load as much of the model as possible into the GPU, and if it doesn't fit, it will load the remainder in system memory and partially use the CPU. This will often result in lower performance compared to pure GPU, as the GPU stalls waiting for the CPU to keep up, however it should still be faster than running just on the CPU alone. We don't currently have UX to expose details about this in the CLI, but may add that in the future for verbose output. Until then, you can check the server log, and look for a line like this:
>
> ```
> llm_load_tensors: offloaded 33/33 layers to GPU
> ```
>
> If not all layers are loaded into the GPU, some performance impact will result as the CPU has to carry some of the load. If there's enough difference in performance between the GPU and CPU in your system, and enough layers are on CPU, then this will cause the GPU to spend most of its compute time idle.

I will try to find some time this weekend to do some testing and post some logs of what I am seeing. I added a 3090 to my server, so I have ~48 GB available, which _should_ keep things GPU-bound. I might also limit the container to only two isolated cores or something to make the testing easier. What might be happening is that some requests properly use the GPU, but the resources are not released, and subsequent requests end up CPU-bound; it's likely not worth speculating, though. I will post some results here if I can reproduce what I said above.


@dhiltgen commented on GitHub (Mar 13, 2024):

I don't believe this issue is tracking anything actionable at this point. If there are still any remaining questions/concerns please let me know.


@icemagno commented on GitHub (Dec 4, 2024):

Why was this thread closed? I have Ollama for Windows and an RTX 4060, and Ollama keeps insisting on using the CPU and RAM. It is very disappointing because I spent a fortune buying this GPU. Many have explained various things about PCI, buses, RAM performance, etc... so what is the point of having a GPU then?


@dhiltgen commented on GitHub (Dec 4, 2024):

@icemagno please open a new issue describing your system and include the server logs so we can assist.

Reference: github-starred/ollama#933