[GH-ISSUE #9876] "/set parameter num_ctx 1024" then n_ctx from 8192 to 4096 #6465

Closed
opened 2026-04-12 18:01:41 -05:00 by GiteaMirror · 1 comment

Originally created by @fivem on GitHub (Mar 19, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9876

What is the issue?

ollama run deepseek-r1:32b

------- log -------
llama_init_from_model: n_seq_max = 4
llama_init_from_model: n_ctx = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512

then
/set parameter num_ctx 1024

------- log -------
llama_init_from_model: n_seq_max = 4
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_ctx_per_seq = 1024
llama_init_from_model: n_batch = 2048
llama_init_from_model: n_ubatch = 512

Why did n_ctx change from 8192 to 4096?

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.0

GiteaMirror added the bug label 2026-04-12 18:01:41 -05:00

@rick-github commented on GitHub (Mar 19, 2025):

num_ctx sets the per-sequence context (n_ctx_per_seq), and the total n_ctx is that value multiplied by the number of parallel sequences: n_seq_max * n_ctx_per_seq = 4 * 1024 = 4096.
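
In other words, the requested num_ctx is treated as a per-sequence context, and the runner multiplies it by the number of parallel slots (n_seq_max, 4 here, typically governed by OLLAMA_NUM_PARALLEL) to size the shared context. A minimal Go sketch of that arithmetic, assuming only this multiplication; the function and parameter names are illustrative, not Ollama's actual identifiers:

```go
package main

import "fmt"

// effectiveCtx mirrors the relationship visible in the issue's logs:
// the total context (n_ctx) is the per-sequence context (num_ctx,
// logged as n_ctx_per_seq) times the number of parallel sequences
// (n_seq_max). Names are illustrative, not taken from Ollama's code.
func effectiveCtx(numCtx, numParallel int) int {
	return numCtx * numParallel
}

func main() {
	// First log: per-sequence context 2048 with 4 slots -> n_ctx 8192.
	fmt.Println(effectiveCtx(2048, 4)) // 8192
	// After "/set parameter num_ctx 1024" -> n_ctx 4096.
	fmt.Println(effectiveCtx(1024, 4)) // 4096
}
```

The same multiplication explains the original value in the first log: 4 * 2048 = 8192. Nothing shrank unexpectedly; both n_ctx values are just num_ctx scaled by the four parallel slots.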

Reference: github-starred/ollama#6465