Inconsistency in command line regarding "--gpus" #3497

Closed
opened 2025-11-11 15:32:53 -06:00 by GiteaMirror · 3 comments
Owner

Originally created by @rabinnh on GitHub (Jan 30, 2025).

A note: the README.md page shows:

To run Open WebUI with Nvidia GPU support, use this command:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

But under the heading "With bundled Ollama"

With GPU Support: Utilize GPU resources by running the following command:

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

So the question is: do I need the '=' sign, will either form work, or is it actually different depending on whether or not you're running in the same container as Ollama?
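For what it's worth, Docker's CLI (like most Go-flag-style CLIs) accepts long options with either a space or an `=` between the flag and its value, so the two spellings should be interchangeable; the real difference between the two README commands is the image tag and the mounted volumes, not the GPU flag. A minimal sketch (image names taken from the README above):

```shell
# Both spellings are parsed identically by the Docker CLI;
# only the image/volume choices differ between the two README commands.

# Standalone Open WebUI (Ollama elsewhere), space form:
docker run -d -p 3000:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda

# Bundled Ollama, '=' form:
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
```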


@wxfred commented on GitHub (Feb 11, 2025):

I'm using

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

but only the CPU is being used.


@senhao-xu commented on GitHub (Feb 13, 2025):

> I'm using
>
> docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
>
> but only the CPU is being used.

I encountered the same problem.


@wxfred commented on GitHub (Feb 13, 2025):

I looked at the container's log: it says a compatible GPU is not detected. But running nvidia-smi inside the container shows my GPU info correctly.

I installed the Windows Ollama bundle directly, and it can use my GPU after I installed the CUDA toolkit. Then I restarted my container, but nothing changed.
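The symptoms above (nvidia-smi works in the container but Ollama still falls back to CPU) can be narrowed down with a few checks. This is only a debugging sketch, assuming the container is named `open-webui` as in the commands earlier in the thread:

```shell
# 1. Confirm the container can see the GPU device at all:
docker exec -it open-webui nvidia-smi

# 2. Inspect Ollama's startup log lines for GPU/CUDA detection messages:
docker logs open-webui 2>&1 | grep -iE 'gpu|cuda'

# 3. Verify the host's NVIDIA Container Toolkit works independently of
#    this image (any CUDA-capable base image will do):
docker run --rm --gpus all ubuntu nvidia-smi
```

If step 3 fails, the problem is the host's NVIDIA Container Toolkit setup rather than the Open WebUI image or the `--gpus` flag spelling.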


Reference: github-starred/open-webui#3497