[PR #14460] docs: clarify num_ctx description in Modelfile parameter table #19956

Open
opened 2026-04-16 07:21:47 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14460
Author: @Anandesh-Sharma
Created: 2/26/2026
Status: 🔄 Open

Base: main ← Head: docs-fix-num-ctx-description


📝 Commits (1)

  • 0ef3d79 docs: clarify num_ctx description in Modelfile parameter table

📊 Changes

1 file changed (+1 addition, -1 deletion)


📝 docs/modelfile.mdx (+1 -1)

📄 Description

Summary

  • Clarify that num_ctx sets the total context window size (prompt + response tokens), not just the window "used to generate the next token"

Details

The previous description was misleading:

"Sets the size of the context window used to generate the next token"

This could be confused with num_predict. The updated description makes it clear that num_ctx is the total number of tokens the model can process at once, including both prompt and generated response.
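To illustrate the distinction, a Modelfile might set both parameters; with the values below (arbitrary, chosen for this sketch), the model can hold up to 4096 tokens of prompt plus response combined, while generation itself is capped at 256 new tokens:

```
# Sketch of a Modelfile; "llama3" is an assumed base model name.
FROM llama3

# Total context window: prompt + generated tokens together (what this PR clarifies).
PARAMETER num_ctx 4096

# Maximum number of NEW tokens to generate (easily confused with num_ctx).
PARAMETER num_predict 256
```

Under these settings, a 4000-token prompt would leave room for at most 96 generated tokens before the context window is exhausted, even though num_predict allows 256.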

Fixes #12474

🤖 Generated with Claude Code


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-16 07:21:47 -05:00

Reference: github-starred/ollama#19956