[GH-ISSUE #15944] Can't change context window size of qwen3.6 #72212

Open
opened 2026-05-05 03:38:27 -05:00 by GiteaMirror · 6 comments

Originally created by @gradha on GitHub (May 3, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15944

What is the issue?

I'm trying to change the context window size (per https://docs.ollama.com/faq#how-can-i-specify-the-context-window-size) but models like qwen3.6:35b-a3b-coding-nvfp4 or qwen3.6:27b-coding-nvfp4 ignore the setting. I'm launching the ollama server with the following script:

$ cat ollama_serve_bigger_context
#!/bin/sh

export OLLAMA_REQUEST_TIMEOUT=120m 
export OLLAMA_KEEP_ALIVE=120m 
export OLLAMA_CONTEXT_LENGTH=190000
export OLLAMA_DEBUG=1
export OLLAMA_NUM_PARALLEL=1
export OLLAMA_MAX_LOADED_MODELS=1

ollama serve

which starts the server:

% ollama_serve_bigger_context 
time=2026-05-03T11:36:56.431+02:00 level=INFO source=routes.go:1782 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:190000 OLLAMA_DEBUG:DEBUG OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:2h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/aitest/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2026-05-03T11:36:56.431+02:00 level=INFO source=routes.go:1784 msg="Ollama cloud disabled: false"
time=2026-05-03T11:36:56.456+02:00 level=INFO source=images.go:517 msg="total blobs: 997"
time=2026-05-03T11:36:56.460+02:00 level=INFO source=images.go:524 msg="total unused blobs removed: 0"
time=2026-05-03T11:36:56.460+02:00 level=DEBUG source=model_recommendations.go:59 msg="starting model recommendations cache" default_recommendations=6 refresh_interval=4h0m0s fetch_timeout=3s
time=2026-05-03T11:36:56.460+02:00 level=INFO source=routes.go:1847 msg="Listening on 127.0.0.1:11434 (version 0.22.1)"
time=2026-05-03T11:36:56.460+02:00 level=DEBUG source=sched.go:145 msg="starting llm scheduler"
time=2026-05-03T11:36:56.460+02:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-05-03T11:36:56.461+02:00 level=DEBUG source=model_recommendations.go:264 msg="loaded model recommendations snapshot" path=/Users/aitest/.ollama/cache/model-recommendations.json count=7
time=2026-05-03T11:36:56.461+02:00 level=DEBUG source=model_recommendations.go:194 msg="refreshing model recommendations from remote" url=https://ollama.com/api/experimental/model-recommendations
time=2026-05-03T11:36:56.461+02:00 level=INFO source=server.go:444 msg="starting runner" cmd="/opt/homebrew/Cellar/ollama/0.22.1/libexec/ollama runner --ollama-engine --port 54145"
time=2026-05-03T11:36:56.461+02:00 level=DEBUG source=server.go:445 msg=subprocess OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_CONTEXT_LENGTH=190000 OLLAMA_KEEP_ALIVE=120m OLLAMA_REQUEST_TIMEOUT=120m PATH=/opt/homebrew/bin:/opt/homebrew/sbin:/Users/aitest/.opencode/bin:/Users/aitest/.nvm/versions/node/v24.14.1/bin:/Users/aitest/Library/Android/sdk/platform-tools:/Users/aitest/Library/Android/sdk/emulator:/Users/aitest/.local/bin:/Users/aitest/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin:/usr/local/MacGPG2/bin:/usr/local/go/bin OLLAMA_NUM_PARALLEL=1 OLLAMA_DEBUG=1 DYLD_LIBRARY_PATH=/opt/homebrew/Cellar/ollama/0.22.1/libexec:/opt/homebrew/Cellar/ollama/0.22.1/libexec/lib/ollama/mlx_metal_v3 OLLAMA_LIBRARY_PATH=/opt/homebrew/Cellar/ollama/0.22.1/libexec
time=2026-05-03T11:36:56.509+02:00 level=DEBUG source=runner.go:437 msg="bootstrap discovery took" duration=49.0995ms OLLAMA_LIBRARY_PATH=[/opt/homebrew/Cellar/ollama/0.22.1/libexec] extra_envs=map[]
time=2026-05-03T11:36:56.510+02:00 level=DEBUG source=runner.go:124 msg="evaluating which, if any, devices to filter out" initial_count=1
time=2026-05-03T11:36:56.510+02:00 level=DEBUG source=runner.go:193 msg="adjusting filtering IDs" FilterID=0 new_ID=0
time=2026-05-03T11:36:56.510+02:00 level=DEBUG source=runner.go:40 msg="GPU bootstrap discovery took" duration=49.259542ms
time=2026-05-03T11:36:56.510+02:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=Metal compute=0.0 name=Metal description="Apple M4 Pro" libdirs="" driver=0.0 pci_id="" type=discrete total="36.0 GiB" available="36.0 GiB"
time=2026-05-03T11:36:56.510+02:00 level=INFO source=routes.go:1897 msg="vram-based default context" total_vram="36.0 GiB" default_num_ctx=32768
time=2026-05-03T11:36:56.666+02:00 level=DEBUG source=model_recommendations.go:227 msg="model recommendations refreshed" count=7
time=2026-05-03T11:36:56.673+02:00 level=DEBUG source=model_recommendations.go:304 msg="persisted model recommendations snapshot" path=/Users/aitest/.ollama/cache/model-recommendations.json count=7
time=2026-05-03T11:36:56.673+02:00 level=INFO source=model_recommendations.go:179 msg="model recommendations cache sleep scheduled" wait=4h35m36.405282306s consecutive_failures=0

The moment I launch claude or opencode I get these logs:

time=2026-05-03T11:37:59.021+02:00 level=INFO source=runner.go:162 msg="Starting HTTP server" host=127.0.0.1 port=54156
time=2026-05-03T11:37:59.038+02:00 level=INFO source=server.go:189 msg=ServeHTTP method=GET path=/v1/status took=18.125µs status="200 OK"
time=2026-05-03T11:37:59.038+02:00 level=INFO source=client.go:147 msg="mlx runner is ready" port=54156
time=2026-05-03T11:37:59.038+02:00 level=DEBUG source=sched.go:573 msg="finished setting up" runner.name=registry.ollama.ai/library/qwen3.6:35b-a3b-coding-nvfp4 runner.size="20.4 GiB" runner.vram="20.4 GiB" runner.parallel=1 runner.pid=96254 runner.model=digest:cd2692a833e66c4c98991b67e9fbaa0bb15a93285baac9240c022f2f40075b6d runner.num_ctx=190000
time=2026-05-03T11:37:59.038+02:00 level=INFO source=server.go:189 msg=ServeHTTP method=GET path=/v1/status took=3.291µs status="200 OK"
time=2026-05-03T11:37:59.039+02:00 level=INFO source=cache.go:126 msg="cache miss" total=195 matched=0 cached=0 left=195
time=2026-05-03T11:38:00.566+02:00 level=INFO source=pipeline.go:135 msg="Prompt processing progress" processed=191 total=195
time=2026-05-03T11:38:00.567+02:00 level=DEBUG source=cache.go:401 msg="created snapshot" offset=191
time=2026-05-03T11:38:00.672+02:00 level=INFO source=pipeline.go:135 msg="Prompt processing progress" processed=194 total=195
[GIN] 2026/05/03 - 11:38:01 | 200 |  5.914781542s |       127.0.0.1 | POST     "/v1/messages?beta=true"
time=2026-05-03T11:38:01.897+02:00 level=INFO source=server.go:189 msg=ServeHTTP method=POST path=/v1/completions took=2.858332875s status="200 OK"
time=2026-05-03T11:38:01.897+02:00 level=DEBUG source=sched.go:581 msg="context for request finished"
time=2026-05-03T11:38:01.897+02:00 level=DEBUG source=sched.go:327 msg="after processing request finished event" runner.name=registry.ollama.ai/library/qwen3.6:35b-a3b-coding-nvfp4 runner.size="20.4 GiB" runner.vram="20.4 GiB" runner.parallel=1 runner.pid=96254 runner.model=digest:cd2692a833e66c4c98991b67e9fbaa0bb15a93285baac9240c022f2f40075b6d runner.num_ctx=190000 refCount=1
time=2026-05-03T11:38:01.897+02:00 level=INFO source=pipeline.go:71 msg="peak memory" size="19.82 GiB"
time=2026-05-03T11:38:01.898+02:00 level=DEBUG source=cache.go:250 msg="switching cache path" page_out=1 page_in=0
time=2026-05-03T11:38:01.898+02:00 level=INFO source=cache.go:126 msg="cache miss" total=22877 matched=3 cached=0 left=22877
time=2026-05-03T11:38:01.918+02:00 level=INFO source=pipeline.go:135 msg="Prompt processing progress" processed=3 total=22877
time=2026-05-03T11:38:01.919+02:00 level=DEBUG source=cache.go:401 msg="created snapshot" offset=3
^Ctime=2026-05-03T11:38:02.199+02:00 level=DEBUG source=sched.go:908 msg="shutting down runner" model=digest:cd2692a833e66c4c98991b67e9fbaa0bb15a93285baac9240c022f2f40075b6d
time=2026-05-03T11:38:02.199+02:00 level=DEBUG source=sched.go:404 msg="context for request finished" runner.name=registry.ollama.ai/library/qwen3.6:35b-a3b-coding-nvfp4 runner.size="20.4 GiB" runner.vram="20.4 GiB" runner.parallel=1 runner.pid=96254 runner.model=digest:cd2692a833e66c4c98991b67e9fbaa0bb15a93285baac9240c022f2f40075b6d runner.num_ctx=190000
time=2026-05-03T11:38:02.199+02:00 level=DEBUG source=sched.go:161 msg="shutting down scheduler pending loop"
time=2026-05-03T11:38:02.199+02:00 level=DEBUG source=sched.go:287 msg="shutting down scheduler completed loop"
[GIN] 2026/05/03 - 11:38:02 | 500 |  6.202840125s |       127.0.0.1 | POST     "/v1/messages?beta=true"
time=2026-05-03T11:38:02.199+02:00 level=INFO source=client.go:182 msg="stopping mlx runner subprocess" pid=96254
time=2026-05-03T11:38:02.370+02:00 level=DEBUG source=model_recommendations.go:183 msg="stopping model recommendations cache"

Despite these parameters, it's using a larger context window:

% ollama ps
NAME                            ID              SIZE     PROCESSOR    CONTEXT    UNTIL            
qwen3.6:35b-a3b-coding-nvfp4    cd2692a833e6    21 GB    100% GPU     262144     2 hours from now    

However, with previous models the context window parameter is respected:

% ollama ps
NAME                          ID              SIZE     PROCESSOR    CONTEXT    UNTIL            
qwen3-coder:30b-a3b-q4_K_M    06c1097efce0    37 GB    100% GPU     190000     2 hours from now    
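
For what it's worth, the effective context of a loaded model can also be read from the API; a quick sketch, assuming jq is installed and that the running build reports context_length in /api/ps (recent versions do):

% curl -s http://127.0.0.1:11434/api/ps | jq '.models[] | {name, context_length}'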

Relevant log output


OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.22.1

GiteaMirror added the bug label 2026-05-05 03:38:27 -05:00

@gradha commented on GitHub (May 3, 2026):

Forgot to mention that I found some instructions on the internet about customization through Modelfiles, so I tried something like this:

% cat Modelfile.qwen36.custom 
# My custom Modelfile to increase context length
FROM qwen3.6:35b-a3b-coding-nvfp4

# Set the context window size (e.g., 4096, 8192, 16384, etc.)
PARAMETER num_ctx 192000
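
The model was then built and run with something like the following (the qwen36-bigctx tag is my own placeholder; the original doesn't show the exact name used):

% ollama create qwen36-bigctx -f Modelfile.qwen36.custom
% ollama run qwen36-bigctx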

While running this model, ollama ps still showed a context window size of 262144.


@andrisak-am commented on GitHub (May 3, 2026):

Confirming on a different qwen3 variant, qwen3-14b-sk:q6 (a custom Slovak fine-tune via FROM qwen3:14b), which shows the same behavior: the Modelfile-declared num_ctx is honored, but per-request options.num_ctx is ignored. The active n_ctx falls back to 4096 even when the training context is 8192 and the request explicitly asks for 16384.

Reproduction

Setup:

  • NVIDIA DGX Spark (GB10, ARM64, Linux 6.17, NVIDIA driver 580.142)
  • Ollama daemon (default install, no OLLAMA_CONTEXT_LENGTH env set)
  • Model: qwen3-14b-sk:q6 (custom FROM qwen3:14b fine-tune, default num_ctx 32768 in Modelfile)

Request:

curl -X POST http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3-14b-sk:q6",
  "prompt": "<long prompt with ~6000 tokens>",
  "options": {"num_ctx": 16384, "num_predict": 15},
  "stream": false
}'

Daemon log on model load:

print_info: n_ctx_train      = 8192
llama_context: n_ctx         = 4096            ← ignored 16384 request
llama_context: n_ctx_seq     = 4096
llama_context: n_ctx_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized

Response: prompt_eval_count: 4013 tokens — the long prompt was silently truncated to 4096 instead of the requested 16384.
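
For reference, the truncation is visible straight from the response; the same request piped through jq (assuming jq is installed):

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3-14b-sk:q6",
  "prompt": "<long prompt with ~6000 tokens>",
  "options": {"num_ctx": 16384, "num_predict": 15},
  "stream": false
}' | jq '.prompt_eval_count'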

Workaround that works for us

Bake the desired context into a separate Modelfile:

FROM qwen3-14b-sk:q6
PARAMETER num_ctx 32768

Then ollama create qwen3-14b-sk-32k:q6 -f <Modelfile>. Re-running the same long-prompt test on the new variant returns prompt_eval_count: 12016 tokens — now honoring the larger context.

This is what we did to keep an OpenClaw compaction.memoryFlush.model callable on bloated sessions (~50K tokens accumulated) — the per-request num_ctx from the OpenClaw side was being silently truncated by Ollama, defeating the whole point of routing memory flush to a fast small model.

Note

The same workaround pattern is required for our vision model (qwen3-vl-32k:8b derived from qwen3-vl:8b with PARAMETER num_ctx 32768). The Modelfile-baked param works; the per-request override does not. So this isn't qwen3.6-specific — it appears to be a general "API-level num_ctx is silently capped" issue in the Ollama runtime.


@cookpod167 commented on GitHub (May 3, 2026):

Confirming the same behavior on Windows 11 + RTX 5080 (16 GB) with qwen3.6:35b-a3b-q4_K_M, Ollama 0.22.1.

A symptom not yet captured in this thread: when the silently-truncated prompt strips out tool definitions, tool-using clients see a tool_calls regression — the model emits tool calls as free-text in content (markdown JSON or Anthropic-style <tool_use> XML) instead of the structured tool_calls field, because the truncation removed the tool list from its context.

Reproduced with sst/opencode v1.4.x as the client, ~25k-char system prompt:

  • /v1/chat/completions → prompt_tokens=4096 in usage, no tool_calls in the response, content contains improvised tool calls as text. The model's reasoning trace explicitly states it has no tools available.
  • /api/chat with options.num_ctx: 32768 and identical messages/tools → proper structured tool_calls (see the sketch after this list). Confirms the issue is in the OpenAI-compat layer, not the chat template.
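
A minimal sketch of the working native-API path from the second bullet; the tool definition here is illustrative, not the one opencode actually sends:

curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "qwen3.6:35b-a3b-q4_K_M",
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }],
  "options": {"num_ctx": 32768},
  "stream": false
}'

With num_ctx raised, the tool list survives prompt processing and the response carries a structured message.tool_calls array instead of free-text imitations.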

Workaround we're running locally: a small /v1/chat/completions → /api/chat translation proxy (sketched below) that forces options.num_ctx=32768 and normalizes tool_calls.arguments (OpenAI sends a JSON string, native Ollama expects an object). Unblocks opencode + openwork end-to-end on Ollama.
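
Not our actual proxy code, but the core request rewrite it performs can be sketched in a couple of lines of shell with jq; this covers only the body translation (no HTTP listener, and the tool-argument normalization on responses would need a second jq pass):

jq '{model: .model, messages: .messages, tools: .tools,
    options: {num_ctx: 32768}, stream: false}' openai_request.json \
  | curl -s http://127.0.0.1:11434/api/chat -d @-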

Related: #2963 (open since 2024-03 covering the OpenAI-compat options gap in general) and PR #11249 (in flight but currently scoped to think and keep_alive only — adding num_ctx to that PR would resolve this, as @ShyamKadari requested in #2963).


@rick-github commented on GitHub (May 3, 2026):

Three different issues.

@gradha NVFP4 models are run with the MLX runner, which has dynamic context. Setting a fixed context length buffer is not supported. The value displayed in ollama ps is the maximum amount the context size will grow to.

@andrisak-am qwen3:14b has a context training length of 40k, but your log snippet shows 8k. Open a new issue and include a full log.

@cookpod167 The official OpenAI API does not support setting the context length, so the ollama OpenAI API-compatibility endpoint also does not support it. The documented way of handling this (https://github.com/ollama/ollama/blob/main/docs/api/openai-compatibility.mdx#setting-the-context-size) is to create a copy of the model with num_ctx set to the desired value.


@gradha commented on GitHub (May 3, 2026):

> @gradha NVFP4 models are run with the MLX runner, which has dynamic context. Setting a fixed context length buffer is not supported. The value displayed in ollama ps is the maximum amount the context size will grow to.

Oh well, it would have been nice to know that instead of spending the past weeks running around like a headless chicken trying random stuff. There's no mention of this particularity of MLX models in the FAQ, and the "View all" page in the model directory doesn't show the MLX tag; only the front page with a selection of models does.

I've renamed the title of the issue but won't close it, since the other two commenters may have something to add.


@andrisak-am commented on GitHub (May 3, 2026):

@rick-github you're right on both counts — apologies for the noise on this thread.

I just re-tested on a fresh ollama 0.22.0 install and the per-request num_ctx override does work correctly:

# A: no override (Modelfile default num_ctx=32768)
prompt 6678 tokens → prompt_eval_count: 6678
ollama ps           → context_length: 32768

# B: options.num_ctx=8192
prompt 6678 tokens → prompt_eval_count: 6678
ollama ps           → context_length: 8192   ← override applied

# C: options.num_ctx=4096 (lower than prompt)
prompt 6678 tokens → prompt_eval_count: 4096   ← truncated as expected
ollama ps           → context_length: 4096
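
The requests behind the three cases were presumably of this shape (case B shown, the 6678-token prompt elided):

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3-14b-sk:q6",
  "prompt": "<prompt of 6678 tokens>",
  "options": {"num_ctx": 8192},
  "stream": false
}' | jq '.prompt_eval_count'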

The model is qwen3-14b-sk:q6, a custom FROM qwen3:14b fine-tune. GGUF metadata correctly reports qwen3.context_length: 40960, matching what you said about the upstream qwen3:14b training length.

My earlier comment claimed n_ctx_train = 8192 and "per-request num_ctx ignored". Neither holds up. The 8192 figure came from a log line I misread, and the override applies cleanly in 0.22.0. My case is also a different model + a non-MLX runner, so it doesn't belong on this thread anyway.

Withdrawing my report to keep this thread focused on its actual scope (qwen3.6 NVFP4 MLX runner). If I later observe a genuine truncation bug under different conditions, I'll open a separate issue with a full reproducer.

Thanks for the correction.
