[GH-ISSUE #3306] No GPU found! #64072

Closed
opened 2026-05-03 16:04:35 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @InfoOfInfo on GitHub (Mar 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3306

What are you trying to do?

root@localhost:~# ollama serve

time=2024-03-23T11:28:06.617+05:30 level=INFO source=images.go:806 msg="total blobs: 0"

time=2024-03-23T11:28:06.682+05:30 level=INFO source=images.go:813 msg="total unused blobs removed: 0"

time=2024-03-23T11:28:06.685+05:30 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"

time=2024-03-23T11:28:06.686+05:30 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama3763750206/runners..."

time=2024-03-23T11:28:14.668+05:30 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cuda_v11]"

time=2024-03-23T11:28:14.668+05:30 level=INFO source=gpu.go:77 msg="Detecting GPU type"

time=2024-03-23T11:28:14.668+05:30 level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"

time=2024-03-23T11:28:14.675+05:30 level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"

time=2024-03-23T11:28:14.675+05:30 level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"

time=2024-03-23T11:28:14.676+05:30 level=INFO source=routes.go:1133 msg="no GPU detected"
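
The log above shows Ollama probing for `libnvidia-ml.so` and finding nothing (`Discovered GPU libraries: []`). On Linux you can check for the library yourself; a quick sketch, assuming a standard glibc distro where `ldconfig` is available:

```shell
# List NVIDIA management libraries known to the dynamic linker.
# An empty grep result corresponds to the 'Discovered GPU libraries: []'
# log line, and Ollama then reports 'no GPU detected'.
ldconfig -p 2>/dev/null | grep libnvidia-ml \
  || echo "libnvidia-ml.so not found: expect 'no GPU detected'"
```

If the library is missing, installing the NVIDIA driver package for your distro (which provides NVML) is the usual fix on machines that actually have an NVIDIA GPU.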

How should we solve this?

Guys, I am on a very old Samsung tablet (a Samsung Tab A7 as proof of my specs). It is about 3 years old, has 3 GB of RAM, and no graphics card. Could you all crowdfund some kind of PC for me? I am 17 years old; I'm sure you know what that feels like.

What is the impact of not solving this?

I will study hard and make something that helps humanity.

Anything else?

No response

Author
Owner

@dhiltgen commented on GitHub (Mar 23, 2024):

CPU does not have vector extensions

Our current GPU code is compiled with vector extensions (AVX) to ensure that if all layers don't fit within the GPU, we get reasonable performance from the layers that are processed by the CPU. We have another issue tracking adding support for older CPUs (and server CPUs) that don't have AVX. #2187
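
You can confirm whether a CPU exposes AVX from the kernel's feature flags. A minimal Linux check (the word-match on the bare `avx` flag is an assumption; `avx2` and `avx512` appear as separate flags):

```shell
# Ollama's prebuilt runners are compiled with AVX, so a CPU whose
# /proc/cpuinfo flags lack 'avx' hits the
# 'CPU does not have vector extensions' path seen in the log above.
if grep -q -w avx /proc/cpuinfo; then
  echo "CPU supports AVX"
else
  echo "CPU lacks AVX"
fi
```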

Author
Owner

@navr32 commented on GitHub (Apr 7, 2024):

Hi! When do you think you will be able to give GPU access to old processors without AVX?
I tested the dbzoo commit by building it on my Z800 (2x Xeon, RTX 3090) and it works very well!
Many thanks. As of now I am unable to use Ollama with my GPU since you added this check. Perhaps add an option when starting ollama serve to disable the AVX check? With just my CPU it is far too slow.


Reference: github-starred/ollama#64072