[GH-ISSUE #9051] another gpu issue, Debian 12 and Nvidia Tesla P4, report 100% GPU and uses CPU #31652

Closed
opened 2026-04-22 12:18:26 -05:00 by GiteaMirror · 13 comments

Originally created by @Luis-Lourenco on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9051

### What is the issue?

Installed from the script. `ollama ps` reports 100% GPU, but judging by CPU usage it is actually running on the CPU.

The report:

```
luis@ollama:~$ ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
qwen2.5-coder:3b    e7149271c296    3.1 GB    100% GPU     About a minute from now
```


and

```
luis@ollama:~$ nvidia-smi
Wed Feb 12 15:56:12 2025
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.01             Driver Version: 535.216.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla P4                       On  | 00000000:00:10.0 Off |                  Off |
| N/A   33C    P8               6W /  75W |      2MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
```

I checked the native install; the result is above.
I checked the Docker install with the NVIDIA Container Toolkit; same result.
I tested a Tdarr encode via Docker on the same machine; all OK.

I also installed the NVIDIA CUDA toolkit.

What is the next step?
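
A quick way to cross-check the `ollama ps` claim (a minimal sketch, not part of the original report; the model name is taken from the output above) is to watch the `nvidia-smi` process table while a prompt is running:

```shell
# If the model is really on the GPU, an ollama runner process holding a few
# GiB of GPU memory should appear in nvidia-smi while the prompt executes.
ollama run qwen2.5-coder:3b "say hello" &
watch -n 1 nvidia-smi
```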

### Relevant log output

```
luis@ollama:~$ ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
qwen2.5-coder:3b    e7149271c296    3.1 GB    100% GPU     About a minute from now
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

_No response_

GiteaMirror added the bug label 2026-04-22 12:18:26 -05:00

@BlastyCZ commented on GitHub (Feb 12, 2025):

What processor do you have?


@rick-github commented on GitHub (Feb 12, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

What's the output of:

```
command -v ollama
ls -l $(dirname $(dirname $(command -v ollama)))
ls -l $(dirname $(dirname $(command -v ollama)))/lib/ollama
```

@GeofferyGeng commented on GitHub (Feb 13, 2025):

I hit this issue too, with Ollama 0.5.8-rc7, Ubuntu 22.04, CUDA 12.4, and an NVIDIA 2080 Ti.

More info: I installed Ollama manually, and I am trying the automatic install now.


@GeofferyGeng commented on GitHub (Feb 13, 2025):

> I hit this issue too, with Ollama 0.5.8-rc7, Ubuntu 22.04, CUDA 12.4, and an NVIDIA 2080 Ti.
>
> More info: I installed Ollama manually, and I am trying the automatic install now.

I reinstalled with install.sh, and the result seems no different.


@Luis-Lourenco commented on GitHub (Feb 13, 2025):

> What processor do you have?

It's a Xeon E5 v4; that's the reason I installed the GPU.
It's a virtual machine on Proxmox with GPU PCI passthrough.

I have other services like Jellyfin using this GPU with no problem.


@Luis-Lourenco commented on GitHub (Feb 13, 2025):

> > I hit this issue too, with Ollama 0.5.8-rc7, Ubuntu 22.04, CUDA 12.4, and an NVIDIA 2080 Ti.
> >
> > More info: I installed Ollama manually, and I am trying the automatic install now.
>
> I reinstalled with install.sh, and the result seems no different.

That's true.

I also tried the Dockerized version with the NVIDIA Container Toolkit and got the same result.


@rick-github commented on GitHub (Feb 16, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.


@GeofferyGeng commented on GitHub (Feb 17, 2025):

Have you tried installing another version? I solved this by downgrading.


@Luis-Lourenco commented on GitHub (Feb 17, 2025):

> Have you tried installing another version? I solved this by downgrading.

I used the install script from [here](https://ollama.com/download/linux).

How can I choose another version?


@rick-github commented on GitHub (Feb 17, 2025):

```
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.4 sh
```

@Luis-Lourenco commented on GitHub (Feb 17, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

Hello, here are a few logs from after running Ollama:

```
fev 17 18:06:45 ollama ollama[616]: time=2025-02-17T18:06:45.384Z level=INFO source=server.go:594 msg="llama runner started in 3.28 seconds"
fev 17 18:06:45 ollama ollama[616]: [GIN] 2025/02/17 - 18:06:45 | 200 | 4.044825991s | 127.0.0.1 | POST "/api/generate"
fev 17 18:06:53 ollama ollama[616]: [GIN] 2025/02/17 - 18:06:53 | 200 | 54.914µs | 127.0.0.1 | HEAD "/"
fev 17 18:06:53 ollama ollama[616]: [GIN] 2025/02/17 - 18:06:53 | 200 | 219.327µs | 127.0.0.1 | GET "/api/ps"
fev 17 18:11:50 ollama ollama[616]: time=2025-02-17T18:11:50.442Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.057003538 model=/usr/share/ollama/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef
fev 17 18:11:50 ollama ollama[616]: time=2025-02-17T18:11:50.692Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.306560622 model=/usr/share/ollama/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef
fev 17 18:11:50 ollama ollama[616]: time=2025-02-17T18:11:50.975Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.589621268 model=/usr/share/ollama/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef
fev 17 18:15:10 ollama ollama[616]: time=2025-02-17T18:15:10.759Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef gpu=GPU-c4b29dd5-31ae-a213-f021-6bdc2609a132 parallel=4 available=8399290368 required="1.3 GiB"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.026Z level=INFO source=server.go:104 msg="system memory" total="7.8 GiB" free="7.1 GiB" free_swap="975.0 MiB"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.027Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[7.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.3 GiB" memory.required.partial="1.3 GiB" memory.required.kv="96.0 MiB" memory.required.allocations="[1.3 GiB]" memory.weights.total="458.8 MiB" memory.weights.repeating="320.9 MiB" memory.weights.nonrepeating="137.9 MiB" memory.graph.full="298.5 MiB" memory.graph.partial="405.0 MiB"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.028Z level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.028Z level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.028Z level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.028Z level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.029Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-828125e28bf46a219fa4f75b6982cb0c41fd9187467abe91c9b175287945b7ef --ctx-size 8192 --batch-size 512 --n-gpu-layers 25 --threads 16 --parallel 4 --port 34695"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.029Z level=INFO source=sched.go:449 msg="loaded runners" count=1
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.029Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.030Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.070Z level=INFO source=runner.go:936 msg="starting go runner"
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.070Z level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=16
fev 17 18:15:11 ollama ollama[616]: time=2025-02-17T18:15:11.070Z level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:34695"
```

My GPU is a Tesla P4; it's not a very recent GPU.

In summary:

```
WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout"
```

```
level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
```


@rick-github commented on GitHub (Feb 17, 2025):

```
GPU runner incompatible with host system, CPU does not have AVX
```

Your CPU doesn't have AVX extensions, which were required for the GPU runners prior to 0.5.8. If your CPU is virtual (e.g. Proxmox), you need to enable pass-through of the CPU extensions. Otherwise, upgrading to 0.5.11 may help, since the CPU extensions have been decoupled from the GPU backends.
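
This diagnosis can be confirmed from inside the guest (a minimal sketch, assuming the VM's CPU model is masking the host's flags):

```shell
# Empty output here means AVX is not visible to the VM, matching the log line.
grep -o -m1 'avx[^ ]*' /proc/cpuinfo || echo "no AVX visible in this guest"
```

On Proxmox, setting the VM's CPU type to `host` (rather than a generic model such as `kvm64`, which does not advertise AVX) passes the host's CPU flags through to the guest.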


@Luis-Lourenco commented on GitHub (Feb 17, 2025):

Solved.

I had to force-install the latest version with `sudo sh install.sh 0.5.11`.
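
For reference, the same pin can be applied with the environment-variable form rick-github showed above, substituting the fixed version:

```shell
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.11 sh
```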

Reference: github-starred/ollama#31652