[GH-ISSUE #7630] Ollama 0.4.1 not using/detecting the GPUs #4870

Closed
opened 2026-04-12 15:52:38 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @rajeshkumar-n on GitHub (Nov 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7630

What is the issue?

I installed Ollama on Ubuntu 22.04 through the install script. The script installed the latest OS-specific CUDA Toolkit and NVIDIA drivers. I need the latest Ollama to run the Llama 3.2 vision models.

nvidia-smi command output
<img width="914" alt="image" src="https://github.com/user-attachments/assets/d42b04ae-7fcd-4753-867f-c1a9c5fc73d2">

Ollama logs:
<img width="1992" alt="image" src="https://github.com/user-attachments/assets/5f730fdf-9a3a-4528-b4bb-06ec931fd946">

Docker container detecting GPU:
<img width="915" alt="image" src="https://github.com/user-attachments/assets/10dbce88-f2db-4052-97ef-7bc9fd06425c">

GCC Version:
```
cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  565.57.01  Thu Oct 10 12:29:05 UTC 2024
GCC version:  gcc version 13.1.0 (Ubuntu 13.1.0-8ubuntu1~22.04)
```

This was not working with the Ollama Docker image either. Any help would be appreciated.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.4.1

GiteaMirror added the bug label 2026-04-12 15:52:38 -05:00
Author
Owner

@dhiltgen commented on GitHub (Nov 12, 2024):

We don't currently have MIG support. This is tracked via #4814

Author
Owner

@rajeshkumar-n commented on GitHub (Nov 13, 2024):

I didn't really mean to use MIG here. I corrected my mistake by disabling MIG (`sudo nvidia-smi -i 0 -mig 0`), and the default configuration now detects the GPUs.

Thanks @dhiltgen for the hint. I will follow on the other ticket to get updates on the MIG.
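For anyone hitting this later, the fix above can be sketched as a quick check-then-disable sequence. The `mig_enabled` helper is purely illustrative; the `nvidia-smi` invocations are standard but obviously need a real NVIDIA GPU and driver to run:

```shell
# Sketch: confirm MIG mode is what is hiding the GPU, then disable it.
# mig_enabled is a hypothetical helper for illustration only.
mig_enabled() {          # takes the output of the nvidia-smi query below
  [ "$1" = "Enabled" ]
}

# On a real host:
#   mode=$(nvidia-smi --query-gpu=mig.mode.current --format=csv,noheader -i 0)
#   mig_enabled "$mode" && sudo nvidia-smi -i 0 -mig 0   # disabling needs root
mig_enabled "Enabled"  && echo "MIG on: Ollama will not see this GPU"
mig_enabled "Disabled" || echo "MIG off: normal GPU detection applies"
```

Note that toggling MIG mode requires root and may require a GPU reset before the change takes effect.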

Author
Owner

@tharun571 commented on GitHub (Jan 20, 2026):

Ollama GPU detection failures when nvidia-smi works are frustrating. Glad MIG mode was the culprit here!

Common Ollama GPU issues beyond MIG:

  • Ollama uses different GPU detection than nvidia-smi
  • CUDA libraries must be on the loader's search path (standard paths or LD_LIBRARY_PATH)
  • Running in Docker requires proper nvidia-container-runtime configuration
  • Ollama service user permissions may differ from shell user

Quick diagnostic checks for Ollama GPU issues:

```bash
# Check Ollama can see GPU:
ollama run llama3.2 --verbose
# Verify CUDA libs accessible:
ldconfig -p | grep cuda
```
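When the problem is library paths rather than MIG, the usual failure mode is that the directory holding `libcuda` is not on the loader's search path seen by the Ollama service. A minimal sketch of that search over a colon-separated path list (`find_lib` is a hypothetical helper; on a real host you would just read `ldconfig -p` as above):

```shell
# Sketch: emulate the loader's search over a colon-separated directory list
# (like LD_LIBRARY_PATH) to see whether a CUDA library would be found.
# find_lib is a hypothetical helper for illustration only.
find_lib() {   # find_lib <libname> <dir1:dir2:...>
  old_ifs=$IFS; IFS=:
  for d in $2; do
    if [ -e "$d/$1" ]; then IFS=$old_ifs; echo "$d/$1"; return 0; fi
  done
  IFS=$old_ifs; return 1
}

# Demo with a fake library file; on a real host check the loader cache instead:
#   ldconfig -p | grep libcuda
tmp=$(mktemp -d)
touch "$tmp/libcuda.so.1"
find_lib libcuda.so.1 "/nonexistent:$tmp"        # prints the path under $tmp
find_lib libcuda.so.1 "/nonexistent" || echo "not found"
```

The same idea explains the service-user point above: a library visible in your interactive shell's environment may be invisible to the `ollama` systemd service if its environment differs.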

I built env-doctor to help diagnose these "nvidia-smi works but app doesn't" scenarios:

  • Checks CUDA runtime library paths
  • Tests GPU visibility from application perspective (not just nvidia-smi)
  • Validates Docker GPU configuration
  • Helps identify permission and path issues

Full disclosure: I'm the author. Sharing because Ollama's GPU detection requirements differ from nvidia-smi. Hope it helps others!

Reference: github-starred/ollama#4870