[GH-ISSUE #9035] Runners list only shows cpu; unable to use the GPU. #5879

Closed
opened 2026-04-12 17:13:02 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @yigedabuliu on GitHub (Feb 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9035

What is the issue?

I installed ollama on Linux without root, using a manual installation. It runs successfully but cannot use the GPU. Experts, please give me a hand.
Here is the ollama startup log. The "Dynamic LLM libraries" line reports runners=[cpu], i.e. only the CPU runner, even though I added cuda_v12_avx to LD_LIBRARY_PATH.
```
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-12T14:19:55.576+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-02-12T14:19:55.576+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
time=2025-02-12T14:19:55.576+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-12T14:19:55.608+08:00 level=INFO source=gpu.go:283 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 2"
time=2025-02-12T14:19:55.996+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-815904a6-2743-16e0-5b63-b8b3b75b63c0 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="2.1 GiB"
time=2025-02-12T14:19:55.996+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-3937293a-f047-9752-e819-aadee70aaf24 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="23.2 GiB"
time=2025-02-12T14:19:55.996+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-f760b5ef-c602-8573-87a9-2fc818a53a34 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="23.6 GiB" available="14.1 GiB"
```
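One quick check, given the runners=[cpu] line above: confirm the runners directory actually sits next to the binary. A minimal sketch, assuming a manual install laid out as `<prefix>/bin/ollama` plus `<prefix>/lib/ollama/runners/` (placeholder layout, adjust to your setup):

```shell
# Locate the ollama binary and look for the runners directory next to it.
OLLAMA_BIN="$(command -v ollama)"
ls "$(dirname "$OLLAMA_BIN")/../lib/ollama/runners"
# Expect entries such as cpu and cuda_v12_avx; if only cpu exists (or
# the directory is missing), ollama falls back to the CPU runner.
```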

The `ollama ps` command reports the models as running 100% on GPU, but `nvidia-smi` shows no ollama process, and `top` shows ollama hammering the CPU.
```
(base) xxxxxxx$ ollama ps
NAME              ID              SIZE      PROCESSOR    UNTIL
bge-m3:latest     790764642607    1.7 GB    100% GPU     2 minutes from now
deepseek-r1:8b    28f8fd6cdc67    6.9 GB    100% GPU     2 minutes from now
```
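A simple way to cross-check whether the GPU is really in use, sketched with the model name from the `ollama ps` output above:

```shell
# In one terminal, generate with a loaded model:
ollama run deepseek-r1:8b "hello"

# In another terminal, watch the GPU process table; if inference truly
# runs on the GPU, an ollama process and its memory usage should appear:
watch -n 1 nvidia-smi
```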

![Image](https://github.com/user-attachments/assets/875c9d73-6420-49d1-9797-ba7412066d4a)

![Image](https://github.com/user-attachments/assets/52903e60-9fdb-420a-8e65-5cc650245c13)

Relevant log output


OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 17:13:02 -05:00
Author
Owner

@zhurunhua commented on GitHub (Feb 12, 2025):

I have the same issue. How to run with GPU?

Author
Owner

@sancelot commented on GitHub (Feb 12, 2025):

Please search for existing issues before posting.

https://github.com/ollama/ollama/issues/3095

Author
Owner

@yigedabuliu commented on GitHub (Feb 12, 2025):

> Please search for existing issues before posting.
>
> #3095

I'm sorry, but I checked [#3095](https://github.com/ollama/ollama/issues/3095) and it's not the same problem as mine. It doesn't solve my issue.

Author
Owner

@yigedabuliu commented on GitHub (Feb 13, 2025):

> I have the same issue. How to run with GPU?

Make sure your Ollama installation directory has the following structure, with the binary and the runners laid out as:

Binary: /xxx/yyy/bin/ollama
Runners: /xxx/yyy/lib/ollama/runners/

Also, launch Ollama with the absolute path `/xxx/yyy/bin/ollama serve` rather than a bare `ollama serve`.

See [#8532](https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903).
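A minimal sketch of the layout and launch command described above; `/xxx/yyy` is the placeholder prefix from the comment, not a real path:

```shell
# Expected manual-install layout:
#   /xxx/yyy/bin/ollama                <- the ollama binary
#   /xxx/yyy/lib/ollama/runners/       <- runner libraries (cpu, cuda_v12_avx, ...)

# Sanity-check that the runners directory is populated:
ls /xxx/yyy/lib/ollama/runners

# Start the server via its absolute path, not a bare `ollama serve`,
# so it resolves the runners directory relative to the binary:
/xxx/yyy/bin/ollama serve
```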

Author
Owner

@zhurunhua commented on GitHub (Feb 13, 2025):

> > I have the same issue. How to run with GPU?
>
> Make sure your Ollama installation directory has the following structure, with the binary and the runners laid out as:
>
> Binary: /xxx/yyy/bin/ollama
> Runners: /xxx/yyy/lib/ollama/runners/
>
> Also, launch Ollama with the absolute path `/xxx/yyy/bin/ollama serve` rather than a bare `ollama serve`. See [#8532](https://github.com/ollama/ollama/issues/8532#issuecomment-2616281903).

Thanks, solved!


Reference: github-starred/ollama#5879