[PR #15580] Fix inaccurate num_ctx documentation description #41087

Open
opened 2026-04-23 01:49:16 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/15580
Author: @Jah-yee
Created: 4/14/2026
Status: 🔄 Open

Base: main ← Head: fix/num_ctx-documentation


📝 Commits (2)

  • c8fbaa9 fix(app): remove sidebar open animation on initial load
  • c7e7abb Fix inaccurate num_ctx documentation description

📊 Changes

2 files changed (+15 additions, -5 deletions)


📝 app/ui/app/src/components/layout/layout.tsx (+14 -4)
📝 docs/modelfile.mdx (+1 -1)

📄 Description

Good day,

I noticed that the documentation for the num_ctx parameter was a bit misleading. The previous description stated "Sets the size of the context window used to generate the next token", which could be confused with num_predict.

This PR clarifies that num_ctx refers to the total context window size: the maximum number of tokens the model can process in total, including both prompt tokens and generated tokens (and tool-calling tokens, if applicable). This helps users understand why they might encounter issues when exceeding the context window limit.
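For context, here is a minimal sketch of how the parameter is set in a Modelfile; the model name and the value 4096 are illustrative, not taken from this PR:

```
# Hypothetical Modelfile: num_ctx caps the TOTAL token budget
# (prompt + generated output together), unlike num_predict,
# which limits only the number of generated tokens.
FROM llama3
PARAMETER num_ctx 4096
PARAMETER num_predict 512
```

With these settings, a 4000-token prompt would leave fewer than 512 tokens of room for output, even though num_predict alone would permit more.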

Thank you for your work on this project. I hope this small fix is helpful. Please let me know if there's anything to adjust.

Warmly, RoomWithOutRoof


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-23 01:49:17 -05:00

Reference: github-starred/ollama#41087