[GH-ISSUE #15184] Installer exits early when nvidia-smi is found even if no nvidia gpu is present #56234

Open
opened 2026-04-29 10:27:36 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @misieck on GitHub (Mar 31, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15184

What is the issue?

When nvidia-smi is available on the system, the installer considers its work done before trying to discover AMD cards and install ROCm support.
In fact, it appears the installer would skip installation of CUDA as well, since that happens later in the script, but I'm not sure whether that's an issue.

nvidia-smi can be installed through the nvidia-utils-xxx package without NVIDIA hardware present, it can be left over from a previous hardware installation, or there could be both NVIDIA and AMD cards present. So this affects both single-card and mixed-card setups.

 # check_gpu nvidia-smi simply checks whether nvidia-smi is available to call, i.e. installed on the system
 if check_gpu nvidia-smi; then
     status "NVIDIA GPU installed."
     exit 0   # <-------- installer exits here
 fi

 if ! check_gpu lspci nvidia && ! check_gpu lshw nvidia && ! check_gpu lspci amdgpu && ! check_gpu lshw amdgpu; then
     install_success
     warning "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
     exit 0
 fi

 #                      v-------- install ROCm support
 if check_gpu lspci amdgpu || check_gpu lshw amdgpu; then
     download_and_extract "https://ollama.com/download" "$OLLAMA_INSTALL_DIR" "ollama-linux-${ARCH}-rocm"
     install_success
     status "AMD GPU ready."
     exit 0
 fi

To solve this, the exit 0 calls should probably be removed from both the NVIDIA and the AMD check (the latter to allow CUDA installation on mixed-GPU setups).
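A minimal sketch of that change, recording NVIDIA detection in a flag instead of exiting. The helper names (`check_gpu`, `status`, `warning`, `install_success`, `download_and_extract`) mirror the installer excerpt above; the stub definitions here exist only so the sketch runs on its own and simulate an orphaned nvidia-smi install alongside an AMD card:

```shell
#!/bin/sh
# --- stand-in stubs for the real installer helpers (assumptions) ---
status()  { echo "status: $*"; }
warning() { echo "warning: $*"; }
install_success() { echo "install_success"; }
download_and_extract() { echo "would download: $3"; }
check_gpu() {
    # Simulated detection: nvidia-smi binary present but no NVIDIA GPU
    # on the bus; an AMD card is visible to lspci/lshw.
    case "$1" in
        nvidia-smi) return 0 ;;
        lspci|lshw) [ "$2" = "amdgpu" ] ;;
    esac
}
OLLAMA_INSTALL_DIR=/tmp/ollama
ARCH=amd64

# --- sketch of the reworked flow ---
HAS_CUDA=0
if check_gpu nvidia-smi; then
    status "NVIDIA GPU installed."
    HAS_CUDA=1   # remember, instead of 'exit 0', and keep going
fi

if ! check_gpu lspci nvidia && ! check_gpu lshw nvidia && ! check_gpu lspci amdgpu && ! check_gpu lshw amdgpu; then
    install_success
    warning "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
    exit 0   # nothing left to install, so exiting here is still fine
fi

if check_gpu lspci amdgpu || check_gpu lshw amdgpu; then
    download_and_extract "https://ollama.com/download" "$OLLAMA_INSTALL_DIR" "ollama-linux-${ARCH}-rocm"
    install_success
    status "AMD GPU ready."
    # no 'exit 0' here, so a later CUDA step can still run when HAS_CUDA=1
fi
```

With the stubbed detection above, both the NVIDIA branch (setting the flag) and the ROCm branch execute, which is exactly the mixed-setup behavior the issue asks for.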

Relevant log output


OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.19.0

GiteaMirror added the bug label 2026-04-29 10:27:36 -05:00
Author
Owner

@misieck commented on GitHub (Mar 31, 2026):

For those trying to run AMD cards on systems with NVIDIA installation history, as a workaround uninstall the nvidia-utils-* package before running the installation script.

Author
Owner

@rick-github commented on GitHub (Mar 31, 2026):

Or temporarily hide nvidia-smi:

nv=$(command -v nvidia-smi)
sudo mv "$nv" "$nv.tmp"
curl -fsSL https://ollama.com/install.sh | sh
sudo mv "$nv.tmp" "$nv"
Author
Owner

@dhirajlochib commented on GitHub (Apr 2, 2026):

Fix submitted in #15225.

The exit 0 after check_gpu nvidia-smi is replaced with a HAS_CUDA flag so the script continues to check for AMD GPUs and install ROCm when needed. This handles both mixed NVIDIA+AMD systems and orphaned nvidia-smi installs.

Author
Owner

@misieck commented on GitHub (Apr 2, 2026):

Are you sure this will work?
The presence of nvidia-smi alone does not mean CUDA is installed, does it?
I think check_gpu should not test for the existence of nvidia-smi alone, but use it to determine whether a GPU is found.
The check for CUDA should be a separate test, like what is done in the WSL case.

This fix tests for the existence of GPUs and, if the result is negative (i.e. no GPU installed), suppresses the warning message when nvidia-smi is found.
It also suppresses the installation of CUDA when nvidia-smi is installed.

Author
Owner

@dhirajlochib commented on GitHub (Apr 2, 2026):

> Are you sure this will work? The presence of nvidia-smi alone does not mean CUDA is installed, does it? I think check_gpu should not test for the existence of nvidia-smi alone, but use it to determine whether a GPU is found. The check for CUDA should be a separate test, like what is done in the WSL case.
>
> This fix tests for the existence of GPUs and, if the result is negative (i.e. no GPU installed), suppresses the warning message when nvidia-smi is found. It also suppresses the installation of CUDA when nvidia-smi is installed.

That's a fair point. To make sure I implement this correctly: are you suggesting something like using nvidia-smi -L to query actual GPU presence, and treating a non-zero exit code as "no NVIDIA GPU found" even if the binary exists?
Happy to rework the check along those lines before this gets reviewed.
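A sketch of that reworked check (the function name `nvidia_gpu_present` is hypothetical; `nvidia-smi -L` prints one `GPU N: ...` line per detected device, so an installed binary with no device yields no matching output):

```shell
#!/bin/sh
# Trust nvidia-smi only if it actually lists a GPU, not merely because
# the binary is on PATH (e.g. an orphaned nvidia-utils install).
nvidia_gpu_present() {
    command -v nvidia-smi >/dev/null 2>&1 || return 1
    nvidia-smi -L 2>/dev/null | grep -q '^GPU '
}

if nvidia_gpu_present; then
    echo "NVIDIA GPU present"
else
    echo "nvidia-smi missing, or installed without an NVIDIA device"
fi
```

A separate CUDA-driver test could then follow this, as suggested above for the WSL case.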

Author
Owner

@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15184
Analyzed: 2026-04-18T18:22:56.674401

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.

Reference: github-starred/ollama#56234