[GH-ISSUE #797] Support GPU on older NVIDIA GPU and CUDA drivers #382

Closed
opened 2026-04-12 10:01:39 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @Syulin7 on GitHub (Oct 16, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/797

I am testing using ollama on Linux and Docker, and it's not using the GPU at all.

It appears that ollama is not using the CUDA image.

I resolved the issue by replacing the base image:

https://github.com/jmorganca/ollama/blob/92578798bb1abcedd6bc99479d804f32d9ee2f6c/Dockerfile#L17-L23

Change `ubuntu:22.04` to `nvidia/cuda:11.8.0-devel-ubuntu22.04` and then it works:

![image](https://github.com/jmorganca/ollama/assets/37265556/52f7f99a-2533-4069-b700-7a738f03c7b4)

Perhaps we can build a GPU image and push it to the community, using the "gpu" tag for differentiation.

@Syulin7 commented on GitHub (Oct 16, 2023):

@mxyng PTAL, thanks.

@missandi commented on GitHub (Oct 16, 2023):

> I am testing using ollama on Linux and Docker, and it's not using the GPU at all.
>
> It appears that ollama is not using the CUDA image.
>
> I resolved the issue by replacing the base image:
>
> https://github.com/jmorganca/ollama/blob/92578798bb1abcedd6bc99479d804f32d9ee2f6c/Dockerfile#L17-L23
>
> Change `ubuntu:22.04` to `nvidia/cuda:11.8.0-devel-ubuntu22.04` and then it works.
>
> Perhaps we can build a GPU image and push it to the community, using the "gpu" tag for differentiation.

How do I edit it, please? Thank you.

@Syulin7 commented on GitHub (Oct 16, 2023):

In the Docker Hub image `ollama/ollama`, the GPU actually doesn't work:
https://hub.docker.com/r/ollama/ollama

```
cp ollama/Dockerfile ollama/Dockerfile.gpu
```

Change line 17:
https://github.com/jmorganca/ollama/blob/06bcfbd6295b0aa0b4a63b6bd6731c0995f0802d/Dockerfile#L17
to:

```
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
```

and then build a GPU image:

```
docker build -t ollama/ollama:0.1.3-gpu -f Dockerfile.gpu .
```
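The steps above can also be scripted. A minimal sketch, assuming a Linux host and that the current directory is an ollama checkout whose Dockerfile still starts its final stage with `FROM ubuntu:22.04` (for illustration, a stand-in Dockerfile is created if none is present):

```shell
# Sketch: copy the Dockerfile and swap the base image only in the copy.
# Assumption: we are in an ollama checkout; otherwise create a stand-in.
[ -f Dockerfile ] || printf 'FROM ubuntu:22.04\nRUN true\n' > Dockerfile

cp Dockerfile Dockerfile.gpu
# rewrite the base image line in the GPU variant
sed -i 's|^FROM ubuntu:22.04|FROM nvidia/cuda:11.8.0-devel-ubuntu22.04|' Dockerfile.gpu
grep '^FROM' Dockerfile.gpu   # confirm the swap before building:
# docker build -t ollama/ollama:0.1.3-gpu -f Dockerfile.gpu .
```

The `docker build` step is left commented since it needs the full repo context and network access.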
@pieroit commented on GitHub (Oct 16, 2023):

I just tested (Ubuntu 22 + Docker NVIDIA toolkit + RTX 2070) and the Docker image works fine with GPU:

```
docker run -it --gpus=all -v ./ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:latest
```

But for some reason it does not if I use it via Compose:

```yaml
version: '3.7'

services:
  llm:
    container_name: llm
    image: ollama/ollama:latest
    volumes:
      - ./ollama:/root/.ollama
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
```
@Syulin7 commented on GitHub (Oct 16, 2023):

> I just tested (ubuntu 22 + docker nvidia toolkit + RTX 2070) and the docker works fine with GPU

@pieroit can you run `nvidia-smi`, does it show GPU usage?

It works via Compose when using the GPU image, like this:

```yaml
services:
  llm:
    image: ollama/ollama:gpu
    volumes:
      - ./.ollama:/root/.ollama
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
@pieroit commented on GitHub (Oct 16, 2023):

@Syulin7 thanks, by adding `count` and `driver` under `devices` it works!

BTW I'm using `image: ollama/ollama:latest`; I can see the model runs at 3x the speed, and I can launch `nvidia-smi` from within the container.

@Syulin7 commented on GitHub (Oct 16, 2023):

@pieroit In my case it can start, but the GPU doesn't work.
Can you see if any process is using the GPU with `nvidia-smi`?

Running `docker logs ollama` shows some error logs. I think it's because the base image is Ubuntu, which does not include CUDA.

```
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA A10, compute capability 8.6
CUDA error 222 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4905: the provided PTX was compiled with an unsupported toolchain.
2023/10/16 15:28:16 llama.go:323: llama runner exited with error: exit status 1
2023/10/16 15:28:17 llama.go:330: error starting llama runner: llama runner process has terminated
```
@pieroit commented on GitHub (Oct 16, 2023):

@Syulin7 the logs are fine:

```
2023/10/16 15:38:13 images.go:995: total blobs: 3
2023/10/16 15:38:13 images.go:1002: total unused blobs removed: 0
2023/10/16 15:38:13 routes.go:614: Listening on [::]:11434
2023/10/16 15:39:18 llama.go:252: 6506 MiB VRAM available, loading up to 64 GPU layers
2023/10/16 15:39:18 llama.go:356: starting llama runner
```

Some things that come to mind, but I'm not sure:

- mismatch between the CUDA drivers and the specific GPU
- maybe some models are not compatible because they use different runtimes. I'm using `mistral:7b-instruct-q2_K`
@missandi commented on GitHub (Oct 16, 2023):

> :latest

I do the same, but only GPU-0 is running; the NVIDIA GPU-1 is not working.

@mxyng commented on GitHub (Oct 16, 2023):

As @pieroit mentioned, there are a number of reasons it might not be working as expected. Can you (@Syulin7 and @missandi) describe what GPU you're running as well as the driver version?

@Syulin7 commented on GitHub (Oct 17, 2023):

> As @pieroit mentioned, there are a number of reasons it might not be working as expected. Can you (@Syulin7 and @missandi) describe what GPU you're running as well as the driver version?

@mxyng
A10 GPU + Ubuntu 20.04.6 LTS
NVIDIA-SMI 470.161.03
Driver Version: 470.161.03
CUDA Version: 11.4

```
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jul_14_19:41:19_PDT_2021
Cuda compilation tools, release 11.4, V11.4.100
Build cuda_11.4.r11.4/compiler.30188945_0
```
@missandi commented on GitHub (Oct 21, 2023):

@Syulin7 my CUDA version is 11.5, but it's the same: it can't connect with the NVIDIA GPU on Ubuntu 22.04.

@Syulin7 commented on GitHub (Oct 24, 2023):

> @Syulin7 my cuda version 11.5 but the same can't connect with nvidia in ubuntu 22.04.

@missandi I'm not sure, but it may be because ollama requires CUDA >= 11.8, in which case you need to use a container image with CUDA 11.8. You can follow the method above to rebuild the image.

@mxyng commented on GitHub (Oct 25, 2023):

@Syulin7 Both the GPU and CUDA drivers are older, from Aug. 2022. It's possible the combination of the two prevents ollama from using the GPU. If possible, you can try upgrading your drivers.

As a sanity check, make sure you've installed nvidia-container-toolkit and are passing in `--gpus`; otherwise the container will not have access to the GPU.
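A quick way to check those preconditions before digging further. A minimal sketch, assuming a Linux host with the Docker CLI; the grep on `docker info` output is a heuristic for detecting the NVIDIA runtime, not an official check:

```shell
#!/bin/sh
# Heuristic pre-flight check: verifies the host driver and the NVIDIA
# container runtime are visible before blaming the ollama image itself.
check_gpu_prereqs() {
  if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "missing: NVIDIA driver (nvidia-smi not on PATH)"
    return 1
  fi
  if ! docker info 2>/dev/null | grep -qi nvidia; then
    echo "missing: NVIDIA container runtime (install nvidia-container-toolkit)"
    return 1
  fi
  echo "ok: driver and container runtime detected"
}

check_gpu_prereqs || echo "fix the above, then run the container with --gpus=all"
```

If both checks pass, `docker run --rm --gpus=all <image> nvidia-smi` should print the same table as `nvidia-smi` on the host.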

@j2l commented on GitHub (Nov 2, 2023):

On a Pop!_OS (Ubuntu 22.04) host, I also have the 11.5 CUDA compiler, installed using `sudo apt install nvidia-cuda-toolkit` after following the install process from https://hub.docker.com/r/ollama/ollama (NVIDIA GPU with apt), and I also tested NVIDIA's process.
The exact version of nvidia-container-toolkit installed is `1.12.1-0pop1~1679409890~22.04~5f4b1f2`.
Still, `nvcc --version` shows:

```
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
```

with `Driver Version: 535.113.01`.

Any idea how to upgrade to 11.8?

@j2l commented on GitHub (Nov 3, 2023):

OK, looks like https://developer.nvidia.com/cuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_local is the way.
The last steps threw errors though:

```
Errors were encountered while processing:
 nvidia-dkms-520
 cuda-drivers-520
 cuda-drivers
 nvidia-driver-520
 cuda-runtime-11-8
 cuda-11-8
 cuda-demo-suite-11-8
 cuda
E: Sub-process /usr/bin/dpkg returned an error code (1)
```

but `nvcc` is now V11.8.89.

EDIT: Aaargh, NO, don't do it! In my case, the downgrade forced the display to 800x600, and you need to reload the right driver (535) to make your computer work again. WHAT A C..P

@mxyng commented on GitHub (Nov 3, 2023):

Starting with the next release, you can set `LD_LIBRARY_PATH` when running `ollama serve`, which will override the preset CUDA library ollama uses. This should increase compatibility when run on older systems. See #959 for an example of setting this in Kubernetes.

Note: this setting will not solve all compatibility issues with older systems, especially CUDA driver versions below 11.x.
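For the container case, the override might look like the fragment below in a pod's container spec. This is only a sketch: the library path shown is an assumption for illustration, not ollama's documented layout; #959 has the authoritative example.

```yaml
# Hypothetical fragment: point ollama at the host's driver libraries.
# The path /usr/local/nvidia/lib64 is an assumption, adjust to where the
# NVIDIA driver libraries are actually mounted in your container.
env:
  - name: LD_LIBRARY_PATH
    value: /usr/local/nvidia/lib64
```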

@xinmans commented on GitHub (Nov 4, 2023):

> > I am testing using ollama on Linux and Docker, and it's not using the GPU at all. It appears that ollama is not using the CUDA image. I resolved the issue by replacing the base image:
> > https://github.com/jmorganca/ollama/blob/92578798bb1abcedd6bc99479d804f32d9ee2f6c/Dockerfile#L17-L23
> > Change `ubuntu:22.04` to `nvidia/cuda:11.8.0-devel-ubuntu22.04` and then it works. Perhaps we can build a GPU image and push it to the community, using the "gpu" tag for differentiation.
>
> How to edit it please. Thank you

The container logs:

```
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
2023-11-04T11:55:27.693683008+08:00 Your new public key is:
2023-11-04T11:55:27.693689278+08:00
2023-11-04T11:55:27.693691398+08:00 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+RedFwXCBV3M6Z9VzsvtMgEiGDIqkLO/TLdN0KC+PK
2023-11-04T11:55:27.693692748+08:00
2023-11-04T11:55:27.693872504+08:00 2023/11/04 03:55:27 images.go:824: total blobs: 0
2023-11-04T11:55:27.693912054+08:00 2023/11/04 03:55:27 images.go:831: total unused blobs removed: 0
2023-11-04T11:55:27.694082400+08:00 2023/11/04 03:55:27 routes.go:680: Listening on [::]:11434 (version 0.1.8)
2023-11-04T11:55:27.694538690+08:00 2023/11/04 03:55:27 routes.go:700: Warning: GPU support may not be enabled, check you have installed GPU drivers: nvidia-smi command failed
```

```
root@ollama-64764b89c5-6nbjh:/# nvidia-smi
bash: nvidia-smi: command not found
```

deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: llm
  name: ollama
  namespace: k8s-at-home
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: llm
  template:
    metadata:
      labels:
        app.kubernetes.io/name: llm
    spec:
      nodeName: ubuntu-gpu
      containers:
        - image: 192.168.31.158:5000/ollama/ollama:gpu
          imagePullPolicy: IfNotPresent
          name: llm
          ports:
            - containerPort: 11434
              protocol: TCP
          resources:
            limits:
              nvidia.com/gpu: 1
            requests:
              nvidia.com/gpu: 1
```

The Dockerfile used to build the image:

```dockerfile
FROM nvidia/cuda:11.4.3-runtime-ubuntu20.04
RUN apt-get update && apt-get install -y ca-certificates
COPY ./ollama-linux-amd64 /bin/ollama
EXPOSE 11434
ENV OLLAMA_HOST 0.0.0.0
ENTRYPOINT ["/bin/ollama"]
CMD ["serve"]
```

@feacluster commented on GitHub (Nov 22, 2023):

> change ubuntu:22.04 to nvidia/cuda:11.8.0-devel-ubuntu22.04

Thanks, I made that change, then did:

```
docker build -t ollama/ollama:0.1.3-gpu -f Dockerfile.gpu .
```

But the build eventually fails with:

```
Step 6/18 : ADD https://dl.google.com/go/go1.21.1.linux-$TARGETARCH.tar.gz /tmp/go1.21.1.tar.gz
ADD failed: failed to GET https://dl.google.com/go/go1.21.1.linux-.tar.gz with status 404 Not Found: <!DOCTYPE html>
```

Update: I got it to build by first doing a `git clone https://github.com/jmorganca/ollama.git`, then editing the Dockerfile as explained above. But it still does not use the GPU. I tried doing a `watch nvidia-smi` from within the container and outside. I am testing with an older K80 GPU, so that is likely to blame. But it would be good to know if there is a way to get it to work. I don't want to pay $$ for a cloud machine with a newer GPU.

Update 2: I got it working by using a Tesla T4 GPU from Google Cloud. Turns out that is even cheaper than the K80, at just 16 cents/hour! I just did the vanilla install of ollama via

```
curl https://ollama.ai/install.sh | sh
```

@mxyng commented on GitHub (Nov 28, 2023):

The Docker Hub image should work out of the box with NVIDIA GPUs. Make sure the image is up to date and that all preconditions (mainly `nvidia-drivers` and `nvidia-container-toolkit`) are satisfied. See #1306 for more details.

@bwest2397 commented on GitHub (Nov 28, 2023):

@mxyng The ollama/ollama docker image (at least of version ollama/ollama:0.1.12) does not work out of the box, at least not for every machine. In my testing, #1306 fixes this issue.

@abhishekslab commented on GitHub (Jan 11, 2024):

On WSL 2 with `ollama:latest`, this compose file worked for me: https://github.com/jmorganca/ollama/issues/797#issuecomment-1764687661

Reference for more resources: https://docs.docker.com/compose/gpu-support/

@RLutsch commented on GitHub (Feb 26, 2024):

I added

```
    spec:
      runtimeClassName: your-runtime-class  # Specify the RuntimeClass name here
      containers: ...
```

to my deployment and it started working.
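For reference, a minimal sketch of the RuntimeClass this refers to, assuming the NVIDIA container runtime is registered on the nodes under the handler name `nvidia` (as the nvidia-container-toolkit docs describe for containerd/CRI-O):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia     # reference this via runtimeClassName: nvidia in the pod spec
handler: nvidia    # must match the runtime name configured in the CRI runtime
```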


Reference: github-starred/ollama#382