[GH-ISSUE #8591] High idle power consumption due to persistent CUDA initialization #52065

Open
opened 2026-04-28 21:47:03 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @SvenMeyer on GitHub (Jan 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8591

High idle power consumption due to PCIe bus unable to enter sleep state

Issue Description

When running Ollama as a service with CUDA enabled, the system maintains unnecessarily high power consumption (~14W vs ~6W) even when idle. This is primarily caused by the PCIe bus being unable to enter sleep state (D3) due to persistent CUDA initialization, despite the GPU itself being in a low-power state.

Technical Details

  • System: DELL XPS15 9530
  • CPU: Intel i7-13700H
  • GPU: NVIDIA RTX 4070 Laptop
  • Driver Version: 550.144.03
  • CUDA Version: 12.4
  • Ollama Version: 0.5.7
  • OS: Manjaro Linux (DISTRIB_RELEASE="24.2.1")
  • Kernel: Linux 6.12.4-1-MANJARO

Current Behavior

Power consumption breakdown:

  • Base system (no Ollama): ~6W
  • With Ollama service running: ~14W
  • Delta: ~8W additional power

The increased power consumption is primarily due to:

  1. PCIe bus stuck in D0 state (cannot enter D3 sleep)
  2. CUDA runtime keeping PCIe link active

The GPU itself is actually well-behaved:

  • Enters P8 power state correctly
  • Only draws ~4W at idle
  • Properly releases display to Intel GPU
  • Memory Usage: 11MiB

Expected Behavior

  • PCIe bus should enter D3 state when CUDA inference is not active
  • System should maintain low power consumption (~6W) when not actively processing
  • CUDA initialization should happen only when needed for inference
  • No impact on inference performance when needed

Diagnostics

With Ollama service running but idle:

```
PCIe State: D0 (should be D3)
GPU Power State: P8
GPU Power Draw: ~4W
Display: Using Intel GPU
CUDA: Initialized but idle
```
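These readings can be gathered with something like the following sketch. The PCI address `0000:01:00.0` comes from this report and may differ on other machines; the `pcie_state` helper is a name made up here, not part of any tool.

```shell
#!/usr/bin/env bash
# Collect the idle-power readings quoted above.
# 0000:01:00.0 is the dGPU address from this report; adjust for your system.

# D-state of the PCI function (D0 = fully on, D3hot/D3cold = asleep).
pcie_state() {
  cat "/sys/bus/pci/devices/$1/power_state" 2>/dev/null || echo "unknown"
}

echo "PCIe State: $(pcie_state 0000:01:00.0)"

# Performance state, power draw and VRAM use, if nvidia-smi is present.
if command -v nvidia-smi >/dev/null; then
  nvidia-smi --query-gpu=pstate,power.draw,memory.used --format=csv,noheader
fi
```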

Attempted Solutions

  1. CUDA environment configuration:

     ```ini
     Environment="CUDA_MODULE_LOADING=LAZY"
     Environment="CUDA_CACHE_DISABLE=0"
     Environment="NVIDIA_DRIVER_CAPABILITIES=compute,utility"
     ```

  2. PCIe power management:

     ```shell
     echo auto > /sys/bus/pci/devices/0000:01:00.0/power/control
     ```

  3. Various NVIDIA power management settings through nvidia-smi

None of these solutions allowed the PCIe bus to enter sleep state while the Ollama service is running.

Proposed Solution

Implement lazy CUDA initialization in Ollama:

  1. Initialize CUDA only when a model is loaded/inference requested
  2. Release CUDA context when idle for a configurable period
  3. Allow PCIe bus to enter D3 state when CUDA context is released
  4. Add configuration option for power management strategy

Benefits:

  • Lower idle power consumption
  • Better battery life on laptops
  • Same performance for actual inference
  • No impact on API responsiveness
  • Proper power management for PCIe bus

Current Workaround

Users need to manually start/stop the Ollama service when needed:

```shell
sudo systemctl start ollama  # When inference needed
sudo systemctl stop ollama   # To restore low power state
```

This is not ideal for IDE integrations and other automated workflows that expect Ollama to be always available.
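For interactive use, the start/stop dance can be wrapped in a small helper so the power cost is only paid while a command actually runs. This is a sketch; `run_with_ollama` is a hypothetical name, not something shipped with Ollama.

```shell
#!/usr/bin/env bash
# Hypothetical helper: start the service, run one command against it,
# then stop the service so the PCIe link can power down again.
run_with_ollama() {
  sudo systemctl start ollama || return 1
  "$@"
  local rc=$?
  sudo systemctl stop ollama
  return $rc
}
```

Usage would look like `run_with_ollama ollama run <model>`. This still does not help always-on clients such as IDE integrations, which is the gap the proposed lazy initialization would close.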

GiteaMirror added the bug label 2026-04-28 21:47:03 -05:00

@rick-github commented on GitHub (Jan 26, 2025):

Unload the model if it's idle. OLLAMA_KEEP_ALIVE=5m.
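On a systemd install, this setting can be made persistent with a drop-in. A sketch, assuming the standard `ollama.service` unit: run `sudo systemctl edit ollama` and add:

```ini
[Service]
Environment="OLLAMA_KEEP_ALIVE=5m"
```

Then reload and restart the service (`sudo systemctl daemon-reload && sudo systemctl restart ollama`).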

<!-- gh-comment-id:2614340405 -->

@SvenMeyer commented on GitHub (Jan 26, 2025):

> Unload the model if it's idle. OLLAMA_KEEP_ALIVE=5m.

Thanks, will try that.

Should I use the ollama-cuda or ollama-cuda-git package, or am I fine just running install.sh? It actually worked fine for me, apart from the power issue mentioned above.

<!-- gh-comment-id:2614351232 -->

@rick-github commented on GitHub (Jan 26, 2025):

install.sh should be good enough. For maximum power saving you can set OLLAMA_KEEP_ALIVE=0, which causes the model to be unloaded immediately after inference. This does mean there will be a slight delay before inference, as the model needs to be reloaded into the GPU. Since the model is likely already in the page cache, the delay should be small; if the model has been evicted from the page cache, ollama will need to load it from disk and the startup time will be longer.

It's possible that the savings from this may not be as great as stopping the service: the ollama server probes the CUDA devices to know what it can use, and if that keeps the PCIe bus in D0 state there will still be some power draw even when no model is loaded.

<!-- gh-comment-id:2614361343 -->

@SvenMeyer commented on GitHub (Jan 27, 2025):

Here's a summary of our findings regarding Ollama's power consumption:

Root Cause Analysis

  • Primary power drain comes from PCIe bus being stuck in D0 state (~8W overhead)
  • GPU itself behaves well:
    • Enters P8 power state correctly
    • Only draws ~4W at idle
    • Properly releases display to Intel GPU

Power States

  • Ollama service not running (hybrid mode, no network): ~3.5W
  • Ollama service running idle (hybrid mode, no network): ~12.8W

Testing OLLAMA_KEEP_ALIVE=0

  • Setting did not significantly impact idle power consumption
  • PCIe bus remains in D0 state
  • Still consumes ~8W more power when service is running
  • The only solution remains to stop the Ollama service completely
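The observation above can be reproduced with a quick check: confirm nothing is loaded, then read the link state. A sketch, again assuming the dGPU address `0000:01:00.0` from the original report.

```shell
#!/usr/bin/env bash
# With OLLAMA_KEEP_ALIVE=0 and the server idle, no model should be listed,
# yet the link state below still reads D0 while the service is running.
if command -v ollama >/dev/null; then
  ollama ps        # expected: empty model list
fi
state_file=/sys/bus/pci/devices/0000:01:00.0/power_state
if [ -r "$state_file" ]; then
  cat "$state_file"
else
  echo "power_state not readable on this system"
fi
```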

Our Solution

We developed a power management approach (https://github.com/SvenMeyer/dell-xps15-cuda-ollama-setup) that:

  1. Boots in hybrid mode (allows GPU switching)
  2. Keeps Ollama service disabled by default
  3. Provides opm tool for on-demand service control
  4. Achieves minimum idle power while maintaining flexibility

Potential Improvements for Ollama

  1. Current Workaround:

    • Set OLLAMA_KEEP_ALIVE=0 (suggested by Ollama team)
    • Models unload immediately after inference
    • Slight delay for model reloading
  2. Ideal Solution would involve:

    • Lazy CUDA initialization (only when needed)
    • Allow PCIe bus to enter D3 state when idle
    • Maintain API availability without power penalty
    • Smart model caching with power awareness
<!-- gh-comment-id:2615636956 -->

@evgeniy-harchenko commented on GitHub (Oct 24, 2025):

It's a really big problem. My idle power consumption is around 80W. I'm using Open WebUI, so I can't stop and start Ollama every time I want to use it.

As a temporary solution I created a proxy for switching between GPU and CPU: [link](https://gist.github.com/evgeniy-harchenko/0500161ec9a3345ea8972c27b2448271)

I don't work with Python at all, so fixes and suggestions are welcome.

But I would really like to see an official solution to this problem. It would be very helpful. I believe the developers will be able to figure it out.

<!-- gh-comment-id:3442017067 -->

@diba78 commented on GitHub (Feb 3, 2026):

Hello everybody,

I have the same problem with the following system:

Technical Details

  • System: ASRock N100M
  • CPU: Intel N100
  • GPU: NVIDIA RTX 3060 or 4080
  • Driver Version: 580.126.09
  • CUDA Version: 13.0
  • Ollama Version: 0.15.4
  • OS: Ubuntu Server Linux (DISTRIB_RELEASE="24.04.3 LTS")
  • Kernel: Linux 6.8.0-94-generic

<!-- gh-comment-id:3842931498 -->
Reference: github-starred/ollama#52065