[GH-ISSUE #11437] Ollama Ignores OLLAMA_NUM_GPU Environment Variable, Leading to RAM Exhaustion and Server Crash #7550

Closed
opened 2026-04-12 19:39:05 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @alissaknight01 on GitHub (Jul 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11437

My server experiences a hard crash due to system RAM exhaustion when loading large models (e.g., llama4:17b-maverick-128e-instruct-q4_K_M). Verbose logging shows the Ollama runner process defaults to loading only a few layers onto the GPU (--n-gpu-layers 4), regardless of any configuration attempting to force a full GPU offload.

System Environment

  • OS: Red Hat Enterprise Linux 10 (RHEL 10)
  • System RAM: 256 GB
  • GPU: NVIDIA GeForce RTX 5090 (32 GB VRAM)
  • NVIDIA Driver: 570.169
  • Ollama Version: 0.9.6
  • CPU: AMD Threadripper / Gigabyte TRX50

Troubleshooting Steps Taken

  • We have confirmed that the runner process ignores GPU offload settings provided via any of the following (sketched after this list):
    • Client-side parameters in the API call (num_gpu=-1).
    • Environment variables in the ollama.service systemd file (OLLAMA_NUM_GPU=99).
    • Parameters in a custom Modelfile (PARAMETER num_gpu -1).
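
For context, a hedged sketch of how each of these settings was expressed (the model and file names are illustrative, taken from the ollama ps output further down; whether any of them is actually honored is exactly what is in question here):

```shell
# 1. Client-side parameter in the API call ("num_gpu" under "options"):
curl http://localhost:11434/api/generate -d '{
  "model": "athena-maverick:latest",
  "prompt": "test",
  "options": { "num_gpu": -1 }
}'

# 2. Environment variable in the ollama.service systemd unit:
#    [Service]
#    Environment="OLLAMA_NUM_GPU=99"

# 3. Custom Modelfile with an explicit num_gpu parameter:
cat > Modelfile <<'EOF'
FROM llama4:17b-maverick-128e-instruct-q4_K_M
PARAMETER num_gpu -1
EOF
ollama create athena-maverick -f Modelfile
```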

After discovering that the runner process was not inheriting environment variables from the main systemd service, we implemented a wrapper script (/usr/local/bin/ollama) to forcefully inject the variables.
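
A minimal sketch of that wrapper, assuming the real binary has been moved aside first (the ollama-bin path is a placeholder, not the reporter's actual layout):

```shell
#!/usr/bin/env bash
# /usr/local/bin/ollama -- inject the GPU-offload variables, then hand off
# to the real binary (the path below is an assumption).
export OLLAMA_NUM_GPU=99
export OLLAMA_GPU_LAYERS=-1
exec /usr/local/bin/ollama-bin "$@"
```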

We have captured the environment of the runner process at the moment of execution and can definitively prove that it is receiving the correct variables.
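
One way to take such a snapshot (the pgrep pattern is a guess at how the runner appears in the process table and may need adjusting):

```shell
# /proc/<pid>/environ is NUL-separated, hence the tr; output goes to the
# log file quoted below.
pid=$(pgrep -f 'ollama runner' | head -n1)
tr '\0' '\n' < "/proc/${pid}/environ" > /tmp/ollama_runner_env.log
```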

The Ollama binary is receiving the correct commands to offload the entire model to the GPU but is choosing to ignore them. This appears to be a bug in how Ollama handles GPU layer calculation or configuration precedence on this specific hardware/software stack.

Relevant log output

Runner Process Environment Log (/tmp/ollama_runner_env.log):

```
OLLAMA_NUM_GPU=99
OLLAMA_GPU_LAYERS=-1
... (and other vars)
```

Despite receiving these variables, the ollama ps command shows that the application still refuses to offload the model correctly:

ollama ps output:

```
NAME                    ID            SIZE    PROCESSOR
athena-maverick:latest  e1e69f67446d  254 GB  90%/10% CPU/GPU
```

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.9.6

GiteaMirror added the bug label 2026-04-12 19:39:05 -05:00
Author
Owner

@alissaknight01 commented on GitHub (Jul 16, 2025):

After applying source code patches to server/sched.go and llm/server.go to force Ollama to respect the NumGPU = -1 setting, the application now correctly attempts a full GPU offload.

However, this reveals a new issue. When attempting to load llama4:17b-maverick-128e-instruct-q4_K_M, the runner now correctly terminates with a cudaMalloc failed: out of memory error.

This confirms two things:

  1. The original logic for handling NumGPU settings was buggy and has been fixed with our patches.
  2. The model itself is too large to be fully offloaded to a 32 GB GPU, which means Ollama's initial memory estimation is also likely inaccurate for large, quantized models (rough arithmetic below).
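
A back-of-envelope check of point 2, using only figures already in this thread (32 GB VRAM, 254 GB size reported by ollama ps); it ignores KV cache and per-layer overhead, so treat it as a ballpark:

```shell
# Fraction of the model that could fit in VRAM, ignoring all overhead:
echo "scale=3; 32 / 254" | bc   # prints .125 (~12.6%), consistent with the
                                # 90%/10% CPU/GPU split in the ollama ps output
```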

Author
Owner

@rick-github commented on GitHub (Jul 16, 2025):

OLLAMA_NUM_GPU and OLLAMA_GPU_LAYERS are not ollama configuration variables.

num_gpu=-1 means load as many layers into the GPU as will fit, not load all of the layers into the GPU. Ollama estimates this based on the model size, context window, parallelism, etc. In this case, ollama estimated only 4 layers would fit. Trying to load more layers than will fit will cause an OOM.

You can override ollama's estimation by explicitly setting num_gpu. For example, num_gpu=5 in the API call or in the Modelfile.
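
As a concrete illustration of that override (hedged sketch; 5 is just the example value from the comment, and the model name is taken from the ollama ps output above):

```shell
# Cap GPU offload at an explicit layer count via the API:
curl http://localhost:11434/api/generate -d '{
  "model": "athena-maverick:latest",
  "prompt": "test",
  "options": { "num_gpu": 5 }
}'

# Or persistently, in the Modelfile:
#   PARAMETER num_gpu 5
```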


Reference: github-starred/ollama#7550