[GH-ISSUE #1594] Won't run on AMD or Intel GPUs? #47394

Closed
opened 2026-04-28 03:41:55 -05:00 by GiteaMirror · 25 comments

Originally created by @srgantmoomoo on GitHub (Dec 19, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1594

it seems that I cannot get this to run on my AMD or my Intel machine... does it only support NVIDIA GPUs?

keep getting this...

```
2023/12/18 21:59:15 images.go:737: total blobs: 0
2023/12/18 21:59:15 images.go:744: total unused blobs removed: 0
2023/12/18 21:59:15 routes.go:871: Listening on 127.0.0.1:11434 (version 0.1.16)
2023/12/18 21:59:15 routes.go:891: warning: gpu support may not be enabled, check that you have installed GPU drivers: nvidia-smi command failed
```

@Redhawk18 commented on GitHub (Dec 19, 2023):

bump, I know there's ROCm for AMD but would that actually fix it?


@Redhawk18 commented on GitHub (Dec 19, 2023):

#814


@srgantmoomoo commented on GitHub (Dec 19, 2023):

better question, can i force it to just run on my cpu?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

or simply, how can i run this on an intel or amd gpu system.


@technovangelist commented on GitHub (Dec 19, 2023):

If you don't have an NVIDIA GPU we will just use the CPU. Are you saying it's not running? What OS, CPU, and how much RAM do you have?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

yes, it is not running. it's giving the error i showed in my first message. i tried it on Linux Mint (Ryzen 3 5000-series CPU, 20 GB RAM), as well as on a different, newer machine with an Intel Core i5 and 16 GB RAM, through Debian on WSL. got the same error for both.


@Leo512bit commented on GitHub (Dec 19, 2023):

> If you don't have an NVIDIA GPU we will just use the CPU.

I get the same error on Intel Arc on WSL. I didn't see any flags to force ollama to run on the CPU, and I also tried to disable GPU acceleration in WSL but still got the same error. (Not sure if GPU acceleration was in fact disabled on WSL, but I did set it in the `.wslconfig` file.)


@technovangelist commented on GitHub (Dec 19, 2023):

What model are you trying to run?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

> What model are you trying to run?

i get the error after running `ollama serve`. my intention is to run mistral... but i haven't even gotten to a point where i can run anything.


@technovangelist commented on GitHub (Dec 19, 2023):

Did you install with the install script or did you build this yourself?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

i installed it using `curl https://ollama.ai/install.sh | sh`


@technovangelist commented on GitHub (Dec 19, 2023):

so `ollama serve` should already be running and you don't need to run it yourself. Also, i don't see an error. What happens when you run `ollama run mistral`?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

> so `ollama serve` should already be running and you don't need to run it yourself. Also, i don't see an error. What happens when you run `ollama run mistral`?

i do have to run `ollama serve`; if i run `ollama run mistral` i get `Error: could not connect to ollama server, run 'ollama serve' to start it`.


@technovangelist commented on GitHub (Dec 19, 2023):

so what happens when you run `ollama run mistral` in a different terminal?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

> so what happens when you run `ollama run mistral` in a different terminal?

umm... same error? why would that change with the terminal I'm using?


@technovangelist commented on GitHub (Dec 19, 2023):

So you have `ollama serve` running in one terminal and then you say you have the same error in a different terminal running `ollama run mistral`. What error are you seeing there?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

are you trolling me? the error in my original comment -
https://github.com/jmorganca/ollama/issues/1594#issue-2047830033


@srgantmoomoo commented on GitHub (Dec 19, 2023):

when i run `ollama serve` i get the following -

```
2023/12/18 21:59:15 images.go:737: total blobs: 0
2023/12/18 21:59:15 images.go:744: total unused blobs removed: 0
2023/12/18 21:59:15 routes.go:871: Listening on 127.0.0.1:11434 (version 0.1.16)
2023/12/18 21:59:15 routes.go:891: warning: gpu support may not be enabled, check that you have installed GPU drivers: nvidia-smi command failed
```

when i run `ollama run mistral` i get the following -

`Error: could not connect to ollama server, run 'ollama serve' to start it`


@technovangelist commented on GitHub (Dec 19, 2023):

would you be open to joining a Zoom session so i can see what's going on?


@srgantmoomoo commented on GitHub (Dec 19, 2023):

i could if that is easier for you; discord would be easier for me though, if you want to add me: srgantmoomoo#1052. but i don't mind Zoom if that's what you prefer


@technovangelist commented on GitHub (Dec 19, 2023):

discord would be best


@technovangelist commented on GitHub (Dec 19, 2023):

I don't see you in the discord. Are you there?


@technovangelist commented on GitHub (Dec 19, 2023):

I am mattw there


@technovangelist commented on GitHub (Dec 19, 2023):

@srgantmoomoo and I worked through the issue in a Discord DM. `ollama serve` printed what looked like an error message, and they quit the program. The solution was to let it keep running and then, in a new terminal window, run `ollama run <modelname>`.

I will go ahead and close this issue now. If you think there is anything we left out, reopen and we can address. Thanks for being part of this great community.
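The root cause above boils down to: the GPU warning is non-fatal, and the "could not connect" error only appears when `ollama serve` has been quit. A quick way to tell the two states apart is to probe whether anything is listening on the default address from the log (`127.0.0.1:11434`). This helper is a hedged sketch, not part of Ollama itself:

```python
import socket

def ollama_server_reachable(host="127.0.0.1", port=11434, timeout=1.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        # create_connection raises OSError (e.g. ConnectionRefusedError)
        # when nothing is listening; a successful connect means the
        # server process is still alive, warning or not.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if ollama_server_reachable():
        print("server is up; `ollama run mistral` should connect")
    else:
        print("nothing listening on 127.0.0.1:11434; "
              "keep `ollama serve` running in another terminal")
```

If this reports the server is up, the `nvidia-smi` warning can be ignored and inference will simply fall back to the CPU, as described in this thread.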


@Leo512bit commented on GitHub (Dec 19, 2023):

Can confirm that it works.

Maybe you should post this in the readme so people don't have to hunt down this issue.

Reference: github-starred/ollama#47394