[GH-ISSUE #6933] RTX A3000 GPU not being utilized for small LLMs #4389

Closed
opened 2026-04-12 15:19:50 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @scotgopal on GitHub (Sep 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6933

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi there. I'm running Ollama from Docker, and I've already confirmed that the GPU is visible from inside the container using `nvidia-smi`:

```shell
root@802f556c99c8:/# nvidia-smi
Tue Sep 24 12:52:58 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A3000 Laptop GPU    Off | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8              14W /  90W |    489MiB /  6144MiB |      8%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
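For context, a typical way to start the container with GPU access, per the Ollama Docker instructions, is sketched below; this assumes the NVIDIA Container Toolkit is installed on the host, and the container name and port mapping are the defaults from those docs:

```shell
# Sketch of a GPU-enabled launch (assumes the NVIDIA Container Toolkit
# is installed on the host). "--gpus=all" exposes every host GPU to
# the container.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```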

I have tried with

`qwen2.5:0.5b`

```shell
root@802f556c99c8:/# ollama ps
NAME            ID              SIZE      PROCESSOR    UNTIL
qwen2.5:0.5b    a8b0c5157701    820 MB    100% CPU     4 minutes from now
```

`llama3.1:8b-instruct-q2_K`

```shell
root@802f556c99c8:/# ollama ps
NAME                         ID              SIZE      PROCESSOR    UNTIL
llama3.1:8b-instruct-q2_K    44a139eeb344    4.8 GB    100% CPU     4 minutes from now
```

As you can see in the `ollama ps` output, inference runs 100% on the CPU. I understand that 6 GB of GPU VRAM is quite low for LLMs, but I was hoping the GPU would at least be used partially.

OS

Linux

GPU

Other

CPU

Intel

Ollama version

0.3.11

GiteaMirror added the docker, bug, and needs more info labels 2026-04-12 15:19:50 -05:00
Author
Owner

@dhiltgen commented on GitHub (Sep 24, 2024):

We may not be correctly detecting the GPU. Can you share your server log?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md

Author
Owner

@scotgopal commented on GitHub (Sep 25, 2024):

Sure. Here you go.

```shell
2024/09/25 04:14:28 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-09-25T04:14:28.418Z level=INFO source=images.go:753 msg="total blobs: 14"
time=2024-09-25T04:14:28.418Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-25T04:14:28.419Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.11)"
time=2024-09-25T04:14:28.419Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-09-25T04:14:28.419Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-09-25T04:14:28.444Z level=WARN source=gpu.go:561 msg="unknown error initializing cuda driver library" library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 error="cuda driver library init failure: 999"
time=2024-09-25T04:14:28.444Z level=WARN source=gpu.go:562 msg="see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for more information"
time=2024-09-25T04:14:28.488Z level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-09-25T04:14:28.488Z level=INFO source=types.go:107 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.1 GiB" available="24.0 GiB"
```
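The WARN lines about "cuda driver library init failure: 999" and the "no compatible GPUs were discovered" line are the signature of this failure. A minimal sketch of filtering a log for them (a sample log is written here for illustration; in practice you would pipe `docker logs <container> 2>&1` through the same grep):

```shell
# Sketch: grep a server log for the GPU-discovery failure signature.
# A small sample log is created here so the example is self-contained;
# against a real deployment, use "docker logs <container> 2>&1" instead.
cat > /tmp/sample.log <<'EOF'
msg="unknown error initializing cuda driver library" error="cuda driver library init failure: 999"
msg="no compatible GPUs were discovered"
EOF
grep -cE 'init failure: 999|no compatible GPUs' /tmp/sample.log   # prints 2
```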

My current NVIDIA driver and CUDA versions:

```shell
❯ nvidia-smi
Wed Sep 25 12:16:37 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A3000 Laptop GPU    Off | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8              14W /  90W |    506MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   2060198      G   /usr/lib/xorg/Xorg                          162MiB |
|    0   N/A  N/A   2062129      G   /usr/lib/xorg/Xorg                          148MiB |
|    0   N/A  N/A   2062889      G   /usr/bin/gnome-shell                         42MiB |
|    0   N/A  N/A   3322729      G   /opt/teamviewer/tv_bin/TeamViewer            26MiB |
|    0   N/A  N/A   3363671      G   ...seed-version=20240923-180219.938000      108MiB |
+---------------------------------------------------------------------------------------+

❯ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
```
Author
Owner

@scotgopal commented on GitHub (Sep 25, 2024):

A positive update!

> Try reloading the nvidia_uvm driver - `sudo rmmod nvidia_uvm` then `sudo modprobe nvidia_uvm`

This step in the Troubleshooting docs helped me.
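For reference, the quoted fix can be scripted on the host (not inside the container) roughly as follows; the container name `ollama` is an assumption:

```shell
# Sketch (run on the host): reload the nvidia_uvm kernel module, which
# can end up in a bad state (e.g. after suspend/resume) and trigger
# CUDA init error 999.
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
# Restart the container so Ollama re-runs GPU discovery.
docker restart ollama   # assumed container name
```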

```shell
root@c2a8b89019e1:/# ollama ps
NAME    ID    SIZE    PROCESSOR    UNTIL

root@c2a8b89019e1:/# ollama run llama3.1:8b-instruct-q2_K ""

root@c2a8b89019e1:/# ollama ps
NAME                         ID              SIZE      PROCESSOR    UNTIL
llama3.1:8b-instruct-q2_K    44a139eeb344    5.3 GB    100% GPU     4 minutes from now
```

Thanks! Perhaps adding a hyperlink to the troubleshooting docs from the installation-related docs (e.g. linux.md, docker.md) would be useful in the future?

Author
Owner

@dhiltgen commented on GitHub (Sep 25, 2024):

Glad to hear the troubleshooting steps resolved your problem!

Reference: github-starred/ollama#4389