[GH-ISSUE #9490] Initially using GPU, but not using it after a period of time. #6179

Closed
opened 2026-04-12 17:32:53 -05:00 by GiteaMirror · 11 comments

Originally created by @j820301 on GitHub (Mar 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9490

What is the issue?

Ollama initially uses the GPU, but stops using it after a period of time.
I'm not sure whether I've configured something incorrectly. I need a specific GPU configuration because my other VM needs to pin containers to a specific block of GPUs.
I need to ensure that the Ollama container always uses the GPU and never falls back to the CPU, to prevent service downtime.

I've referred to the relevant articles, but I still don't know how to configure `num_gpu` in `docker-compose.yml`.
#9063
#6950
#5749

Could it be a context issue?
#8935

If my configuration file is incorrect, please help me modify it. Thank you to all the contributors of Ollama for their hard work.

docker-compose.yml

```yaml
services:
  ollama1:
    image: ollama/ollama:0.5.12
    restart: always
    container_name: ollama1
    pull_policy: always
    ports:
      - 11431:11434
    volumes:
      - ./ollama1:/root/.ollama
    environment:
      - CUDA_VISIBLE_DEVICES=0
      - OLLAMA_DEBUG=1
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]
              device_ids: ['0']
```

![Image](https://github.com/user-attachments/assets/975a75e8-2539-4914-9d5e-0f73a01b963a)

![Image](https://github.com/user-attachments/assets/3698d336-b390-4cb4-9c56-451a5c88a31b)

The other VM has two containers that need to be assigned GPU 1 and GPU 2, each with the 47C memory profile.

![Image](https://github.com/user-attachments/assets/29373049-ef80-4d90-8aec-e64fcfaa8fd9)

Relevant log output


OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.5.12

GiteaMirror added the bug label 2026-04-12 17:32:53 -05:00

@flywiththetide commented on GitHub (Mar 4, 2025):

It looks like Ollama initially detects the GPU, but switches to CPU after a while. Here are some things to check and potential fixes:

1. Ensure Persistent GPU Access

Try adding NVIDIA persistence mode so the GPU doesn't time out:

```bash
sudo nvidia-smi -pm 1
```

This will keep the GPU active, preventing automatic power-down.
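As a quick sanity check, persistence mode can be queried directly:

```bash
# Prints the persistence mode per GPU; expect "Enabled" after the command above.
nvidia-smi --query-gpu=persistence_mode --format=csv
```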

2. Check GPU Usage Inside the Container

Run this inside your Ollama container:

```bash
docker exec -it ollama1 bash
nvidia-smi
```

- If `nvidia-smi` does not show GPU usage, the container may have lost access to the GPU.
- If it does show GPU activity, Ollama is still using it, and the issue may be with logging.
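From the host, you can also verify that the GPU reservation actually reached the container; a minimal check using the container name from the compose file above:

```bash
# Prints the device requests Docker applied to this container;
# an empty result means no GPU was reserved for it.
docker inspect -f '{{json .HostConfig.DeviceRequests}}' ollama1
```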

3. Modify Docker Compose GPU Configuration

Your `docker-compose.yml` is mostly correct, but you might need to force the NVIDIA runtime:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

Or explicitly set the runtime:

```yaml
runtime: nvidia
```
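If you're unsure whether the NVIDIA runtime is registered with the Docker daemon at all, a quick check:

```bash
# The "Runtimes" line should list "nvidia" alongside the default runc.
docker info | grep -i runtimes
```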

Restart the container:

```bash
docker-compose down
docker-compose up -d
```

4. Watch for GPU Failures

Check for GPU errors:

```bash
dmesg | grep -i nvidia
```

If the GPU is crashing or resetting, check power settings and logs from NVIDIA drivers.
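NVIDIA driver faults show up in the kernel log as "Xid" events, so it can also help to grep for those specifically:

```bash
# Any Xid lines indicate a driver/hardware error; the Xid code identifies the cause.
sudo dmesg | grep -i xid
```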

Let me know if these steps help or if you need further debugging assistance!


@j820301 commented on GitHub (Mar 4, 2025):

Thanks for your reply, but I still can't solve my problem.

1. & 2. The settings seem to be running normally.

![Image](https://github.com/user-attachments/assets/3280ec7a-9956-461c-b180-c7ed758a2bcf)

3. This configuration does not let Ollama use the specified GPU, and my VM2 will not work.
4. There are no abnormal logs on the GPU.

![Image](https://github.com/user-attachments/assets/1873d7de-d4a5-4091-8c9c-91a084cae31a)


@flywiththetide commented on GitHub (Mar 4, 2025):

Thanks for the update! Since docker-compose.yml didn’t allow Ollama to use the correct GPU, let’s try forcing GPU selection manually.

1. Manually Set the GPU Device ID

Check available GPUs with:

```bash
nvidia-smi -L
```

This should list your GPUs (e.g., GPU 0, GPU 1).

Now, try running Ollama inside the container with a specific GPU:

```bash
docker run --gpus 'device=0' ollama/ollama
```

Or modify your docker-compose.yml:

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
          device_ids: ["0"]  # Force GPU 0
```

Restart the container:

```bash
docker-compose down
docker-compose up -d
```
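To verify the pinning worked, the container should now see exactly one device:

```bash
# Lists the GPUs visible inside the container; expect only GPU 0 here.
docker exec ollama1 nvidia-smi -L
```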

2. Check If Other Containers Are Interfering

Run:

```bash
nvidia-smi
```

- If another process is hogging the GPU, it might be preventing Ollama from accessing it.
- If necessary, restart the persistence daemon:

```bash
sudo systemctl restart nvidia-persistenced
```
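To see exactly which processes hold GPU memory, nvidia-smi's query mode is useful:

```bash
# CSV listing of every compute process currently using the GPU.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```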

3. Run Ollama With CUDA Debugging

Inside the container, try:

```bash
OLLAMA_DEBUG=1 CUDA_VISIBLE_DEVICES=0 ollama serve
```

This forces GPU 0 and enables debugging.


Let me know if these steps help, or if you see any error messages in the logs!


@j820301 commented on GitHub (Mar 4, 2025):

Thank you for the YAML configuration suggestion; I will adopt it and keep checking whether the same issue recurs.
Additionally, I'd like to ask about the `num_gpu` parameter: how should it be configured? Is it part of the solution?
If I can spot obvious problems in my logs, that will be very helpful.

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
          device_ids: ["0"]  # Force GPU 0
```

@rick-github commented on GitHub (Mar 4, 2025):

- Ollama switching to CPU after having been using the GPU:
  see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-docker.

- Assignment of GPUs in a docker container:
  if you set `CUDA_VISIBLE_DEVICES` there's no need to set `device_ids`. You can set `CUDA_VISIBLE_DEVICES` either to the device index (`CUDA_VISIBLE_DEVICES=1,2`) or to the actual UUID of the device; use `nvidia-smi -L` to find this ID (see the sketch below).

- `num_gpu`:
  this only sets the number of layers to offload to the GPU; it has nothing to do with selecting which GPU to use.
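For reference, a minimal sketch of both points (the model name here is hypothetical; port 11431 is the host mapping from the compose file above):

```bash
# Find the device UUIDs usable in CUDA_VISIBLE_DEVICES:
nvidia-smi -L

# num_gpu is a per-request option that sets how many layers to offload to the
# GPU (a large value like 999 effectively means "all layers"); it does not
# select which GPU is used.
curl http://localhost:11431/api/generate -d '{
  "model": "llama3.1",
  "prompt": "hello",
  "options": { "num_gpu": 999 }
}'
```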


@flywiththetide commented on GitHub (Mar 4, 2025):

Thanks for the clarification! Based on your response:

- `num_gpu` only controls layer offloading and does not assign a GPU.
- If using `CUDA_VISIBLE_DEVICES`, there's no need to set `device_ids` in `docker-compose.yml`.

Next Steps to Debug Further

1. Confirm which GPU is in use:

   ```bash
   nvidia-smi
   ```

   Check if another process is interfering with Ollama's access.

2. Test without `device_ids`, using only `CUDA_VISIBLE_DEVICES`:

   ```bash
   CUDA_VISIBLE_DEVICES=0 ollama serve
   ```

   This should force GPU 0 to be used.

3. Check the Ollama logs for errors:

   ```bash
   docker logs ollama1
   ```

Let us know if the issue persists, or if you see any GPU-related warnings!


@rick-github commented on GitHub (Mar 4, 2025):

@flywiththetide please don't spam these tickets with LLM generated responses.


@j820301 commented on GitHub (Mar 5, 2025):

@rick-github Thank you very much for the detailed explanation. I think it is the cgroup setting; the situation is very similar. Thank you for your guidance and great contribution.
Your answer is very important and helpful to me.
I will finish updating the configuration, continue to observe, and report back.

According to your guidance, I updated the following configuration. If there are any syntax errors, please correct me. Thank you very much.

```yaml
services:
  ollama1:
    image: ollama/ollama:0.5.12
    restart: always
    container_name: ollama1
    pull_policy: always
    ports:
      - 11431:11434
    volumes:
      - ./ollama1:/root/.ollama
    environment:
      - CUDA_VISIBLE_DEVICES=GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e
      - OLLAMA_DEBUG=1
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

And the Docker daemon configuration (`/etc/docker/daemon.json`):

```json
{
    "default-runtime": "nvidia",
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-opts": {
        "max-file": "5",
        "max-size": "10m"
    },
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
```
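One note: `daemon.json` is only read when the Docker daemon starts, so the new settings won't apply until the daemon is restarted and the container recreated:

```bash
# Restart the Docker daemon to pick up /etc/docker/daemon.json,
# then recreate the container.
sudo systemctl restart docker
docker-compose up -d
```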

@rick-github commented on GitHub (Mar 5, 2025):

This looks syntactically correct, but docker will alert you if there's an error. Try it and see if the GPU remains available.


@j820301 commented on GitHub (Mar 5, 2025):

After restarting Docker and the container, Ollama detected the GPU UUID and successfully loaded the model into VRAM. It is currently running normally; I will continue to monitor and report back. If no further anomalies occur this week, I will close this issue. Thank you very much for your help, and once again, thanks to the Ollama developers.

```shell
source=server.go:1091 msg="llama server stopped"
2025/03/05 02:05:45 routes.go:1205: INFO server config env="map[CUDA_VISIBLE_DEVICES:GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-05T02:05:45.171Z level=INFO source=images.go:432 msg="total blobs: 48"
time=2025-03-05T02:05:45.171Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-05T02:05:45.171Z level=INFO source=routes.go:1256 msg="Listening on [::]:11434 (version 0.5.12)"
time=2025-03-05T02:05:45.171Z level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-03-05T02:05:45.172Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-05T02:05:45.172Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-03-05T02:05:45.173Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-03-05T02:05:45.173Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-03-05T02:05:45.174Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
dlsym: cuInit - 0x74d7b087cbc0
dlsym: cuDriverGetVersion - 0x74d7b087cbe0
dlsym: cuDeviceGetCount - 0x74d7b087cc20
dlsym: cuDeviceGet - 0x74d7b087cc00
dlsym: cuDeviceGetAttribute - 0x74d7b087cd00
dlsym: cuDeviceGetUuid - 0x74d7b087cc60
dlsym: cuDeviceGetName - 0x74d7b087cc40
dlsym: cuCtxCreate_v3 - 0x74d7b087cee0
dlsym: cuMemGetInfo_v2 - 0x74d7b0886e20
dlsym: cuCtxDestroy - 0x74d7b08e1850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-03-05T02:05:45.220Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.127.05
[GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e] CUDA totalMem 47864 mb
[GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e] CUDA freeMem 46194 mb
[GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e] Compute Capability 9.0
time=2025-03-05T02:05:45.525Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-03-05T02:05:45.525Z level=INFO source=types.go:130 msg="inference compute" id=GPU-d0327e65-5678-11b2-8319-d758e9bc8d6e library=cuda variant=v12 compute=9.0 driver=12.4 name="NVIDIA H100L-47C" total="46.7 GiB" available="45.1 GiB"
```

@j820301 commented on GitHub (Mar 10, 2025):

Rick, thank you again for your response and help. The issue is now resolved, so I will close this case. Thanks!

Reference: github-starred/ollama#6179