[PR #14324] [CLOSED] docs: restore num_gpu as a valid Modelfile parameter #14620

Closed
opened 2026-04-13 00:59:20 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14324
Author: @cluster2600
Created: 2/19/2026
Status: Closed

Base: main ← Head: docs/restore-num_gpu-parameter


📝 Commits (1)

  • 1bd5a70 docs: restore num_gpu as a valid Modelfile parameter

📊 Changes

1 file changed (+1 additions, -0 deletions)

View changed files

📝 docs/modelfile.mdx (+1 -0)

📄 Description

Summary

The num_gpu parameter was removed from the Modelfile documentation in e54a3c7 (Remove Modelfile parameters that are decided at runtime), but the parameter is still fully functional.

This PR restores the num_gpu entry to the Valid Parameters and Values table.

Why this matters

Setting num_gpu 0 in a Modelfile is currently the only way to force CPU-only inference for a specific model without modifying each individual API request. Users have been discovering this through community issues since it disappeared from the official docs.
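As a minimal sketch of the usage this PR documents (the base model name `llama3` is an illustrative assumption, not taken from the PR), a Modelfile that pins a model to CPU-only inference looks like:

```
# Base model is hypothetical; any local model works the same way
FROM llama3

# Offload zero layers to the GPU, forcing CPU-only inference
PARAMETER num_gpu 0
```

Building it with `ollama create llama3-cpu -f Modelfile` then yields a model variant that always runs on CPU, with no per-request options required.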

Changes

  • Adds num_gpu back to the parameter table in docs/modelfile.mdx with an accurate description

Fixes #13986
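For contrast, without the Modelfile parameter, the same effect has to be attached to every individual request through the API's `options` object. A sketch of such a request body (model name and prompt are illustrative assumptions):

```
{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "options": {
    "num_gpu": 0
  }
}
```

This is the repetition the restored Modelfile entry lets users avoid.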


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:59:20 -05:00

Reference: github-starred/ollama#14620