[GH-ISSUE #11711] GPT-OSS 20B num_ctx is overridden when lower than 8192 #7752

Closed
opened 2026-04-12 19:53:06 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @crahzee on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11711

What is the issue?

When num_ctx is set to less than 8192, Ollama loads GPT-OSS:20b with a context length of 8192. I created a model with num_ctx 4096 and it loads with a context of 8192.
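For reference, a minimal Modelfile that reproduces this. The issue does not include the exact Modelfile, so the base tag and parameters here are assumed from the `ollama show` output:

```
FROM gpt-oss:20b
PARAMETER num_ctx 4096
PARAMETER temperature 1
```

Building it with `ollama create test -f Modelfile` and then running `ollama run test` still loads the model with an 8192-token context.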

# ollama show test
  Model
    architecture        gptoss    
    parameters          20.9B     
    context length      131072    
    embedding length    2880      
    quantization        MXFP4     

  Capabilities
    completion    
    tools         
    thinking      

  Parameters
    num_ctx        4096    
    temperature    1       

  License
    Apache License               
    Version 2.0, January 2004    
    ...                          

# ollama run test
>>> /bye
# ollama ps
NAME           ID              SIZE     PROCESSOR         CONTEXT    UNTIL               
test:latest    5a691afb5095    18 GB    9%/91% CPU/GPU    8192       29 minutes from now    
#
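The behavior is consistent with a minimum context length being applied for this architecture: the runner is started with `--ctx-size 8192` even though the model's `num_ctx` is 4096, and `OLLAMA_CONTEXT_LENGTH` is 16384 in the server config, so the 8192 does not come from the environment default either. A sketch of the observed symptom (this illustrates the effect seen in the logs, not Ollama's actual code; `effective_ctx` and the 8192 floor are assumptions inferred from this report):

```python
def effective_ctx(requested: int, floor: int = 8192) -> int:
    """Model the observed symptom: requested context lengths below
    8192 are raised to 8192; larger values pass through unchanged."""
    return max(requested, floor)

# num_ctx 4096 from the Modelfile -> runner starts with --ctx-size 8192
print(effective_ctx(4096))
# values at or above the floor are not affected
print(effective_ctx(16384))
```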

Relevant log output

time=2025-08-06T01:59:23.395Z level=INFO source=routes.go:1297 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:16384 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:30m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-06T01:59:23.396Z level=INFO source=images.go:477 msg="total blobs: 31"
time=2025-08-06T01:59:23.397Z level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-06T01:59:23.398Z level=INFO source=routes.go:1350 msg="Listening on [::]:11434 (version 0.11.2)"
time=2025-08-06T01:59:23.398Z level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-08-06T01:59:23.398Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-06T01:59:23.399Z level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
time=2025-08-06T01:59:23.399Z level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
time=2025-08-06T01:59:23.399Z level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2025-08-06T01:59:23.400Z level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.142]
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:23.830Z level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.142
[GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f] CUDA totalMem 16076mb
[GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f] CUDA freeMem 15952mb
[GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f] Compute Capability 8.9
time=2025-08-06T01:59:23.890Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-08-06T01:59:23.890Z level=INFO source=types.go:130 msg="inference compute" id=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="15.7 GiB" available="15.6 GiB"
[GIN] 2025/08/06 - 01:59:45 | 200 |      28.099µs |       127.0.0.1 | HEAD     "/"
time=2025-08-06T01:59:45.297Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/08/06 - 01:59:45 | 200 |   63.079255ms |       127.0.0.1 | POST     "/api/show"
time=2025-08-06T01:59:45.352Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.5 GiB" before.free="15.9 GiB" before.free_swap="0 B" now.total="62.5 GiB" now.free="15.1 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:45.420Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.6 GiB" now.total="15.7 GiB" now.free="15.6 GiB" now.used="123.2 MiB"
releasing cuda driver library
time=2025-08-06T01:59:45.420Z level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-08-06T01:59:45.437Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T01:59:45.476Z level=DEBUG source=sched.go:226 msg="loading first model" model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583
time=2025-08-06T01:59:45.476Z level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[15.6 GiB]"
time=2025-08-06T01:59:45.476Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-08-06T01:59:45.476Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.5 GiB" before.free="15.1 GiB" before.free_swap="0 B" now.total="62.5 GiB" now.free="15.1 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:45.534Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.6 GiB" now.total="15.7 GiB" now.free="15.6 GiB" now.used="123.2 MiB"
releasing cuda driver library
time=2025-08-06T01:59:45.534Z level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[15.6 GiB]"
time=2025-08-06T01:59:45.534Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-08-06T01:59:45.534Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.5 GiB" before.free="15.1 GiB" before.free_swap="0 B" now.total="62.5 GiB" now.free="15.0 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:45.586Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.6 GiB" now.total="15.7 GiB" now.free="15.6 GiB" now.used="123.2 MiB"
releasing cuda driver library
time=2025-08-06T01:59:45.586Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.5 GiB" before.free="15.0 GiB" before.free_swap="0 B" now.total="62.5 GiB" now.free="15.0 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:45.639Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.6 GiB" now.total="15.7 GiB" now.free="15.6 GiB" now.used="123.2 MiB"
releasing cuda driver library
time=2025-08-06T01:59:45.639Z level=INFO source=server.go:135 msg="system memory" total="62.5 GiB" free="15.0 GiB" free_swap="0 B"
time=2025-08-06T01:59:45.639Z level=DEBUG source=memory.go:111 msg=evaluating library=cuda gpu_count=1 available="[15.6 GiB]"
time=2025-08-06T01:59:45.639Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=gptoss.vision.block_count default=0
time=2025-08-06T01:59:45.639Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="62.5 GiB" before.free="15.0 GiB" before.free_swap="0 B" now.total="62.5 GiB" now.free="15.0 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.142
dlsym: cuInit - 0x7f2460eadb70
dlsym: cuDriverGetVersion - 0x7f2460eadb90
dlsym: cuDeviceGetCount - 0x7f2460eadbd0
dlsym: cuDeviceGet - 0x7f2460eadbb0
dlsym: cuDeviceGetAttribute - 0x7f2460eadcb0
dlsym: cuDeviceGetUuid - 0x7f2460eadc10
dlsym: cuDeviceGetName - 0x7f2460eadbf0
dlsym: cuCtxCreate_v3 - 0x7f2460eade90
dlsym: cuMemGetInfo_v2 - 0x7f2460eb7dd0
dlsym: cuCtxDestroy - 0x7f2460f12800
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2025-08-06T01:59:45.699Z level=DEBUG source=gpu.go:441 msg="updating cuda memory data" gpu=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.6 GiB" now.total="15.7 GiB" now.free="15.6 GiB" now.used="123.2 MiB"
releasing cuda driver library
time=2025-08-06T01:59:45.700Z level=INFO source=server.go:175 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=23 layers.split="" memory.available="[15.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="16.9 GiB" memory.required.partial="15.4 GiB" memory.required.kv="300.0 MiB" memory.required.allocations="[15.4 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.0 GiB" memory.graph.partial="4.0 GiB"
time=2025-08-06T01:59:45.700Z level=WARN source=server.go:211 msg="flash attention enabled but not supported by model"
time=2025-08-06T01:59:45.700Z level=WARN source=server.go:229 msg="quantized kv cache requested but flash attention disabled" type=q4_0
time=2025-08-06T01:59:45.700Z level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]
time=2025-08-06T01:59:45.739Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T01:59:45.739Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-06T01:59:45.740Z level=INFO source=server.go:438 msg="starting llama server" cmd="/usr/bin/ollama runner --ollama-engine --model /root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --ctx-size 8192 --batch-size 512 --n-gpu-layers 23 --threads 6 --no-mmap --parallel 1 --port 45179"
time=2025-08-06T01:59:45.740Z level=DEBUG source=server.go:439 msg=subprocess PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin OLLAMA_KEEP_ALIVE=30m OLLAMA_NUM_PARALLEL=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q4_0 OLLAMA_CONTEXT_LENGTH=16384 OLLAMA_DEBUG=1 LD_LIBRARY_PATH=/usr/lib/ollama:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/ollama OLLAMA_HOST=0.0.0.0:11434 OLLAMA_MAX_LOADED_MODELS=3 OLLAMA_LIBRARY_PATH=/usr/lib/ollama CUDA_VISIBLE_DEVICES=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f
time=2025-08-06T01:59:45.740Z level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-08-06T01:59:45.740Z level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-06T01:59:45.740Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-06T01:59:45.748Z level=INFO source=runner.go:925 msg="starting ollama engine"
time=2025-08-06T01:59:45.748Z level=INFO source=runner.go:983 msg="Server listening on 127.0.0.1:45179"
time=2025-08-06T01:59:45.796Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-08-06T01:59:45.796Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.name default=""
time=2025-08-06T01:59:45.796Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.description default=""
time=2025-08-06T01:59:45.796Z level=INFO source=ggml.go:92 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
time=2025-08-06T01:59:45.796Z level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/usr/lib/ollama
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from /usr/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/lib/ollama/libggml-cpu-alderlake.so
time=2025-08-06T01:59:45.837Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:367 msg="offloading 23 repeating layers to GPU"
time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:371 msg="offloading output layer to CPU"
time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:378 msg="offloaded 23/25 layers to GPU"
time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:381 msg="model weights" buffer=CUDA0 size="10.2 GiB"
time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="2.6 GiB"
time=2025-08-06T01:59:45.884Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-08-06T01:59:45.891Z level=DEBUG source=ggml.go:654 msg="compute graph" nodes=1847 splits=3
time=2025-08-06T01:59:45.891Z level=INFO source=ggml.go:672 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="2.1 GiB"
time=2025-08-06T01:59:45.891Z level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="1.2 GiB"
time=2025-08-06T01:59:45.891Z level=DEBUG source=runner.go:883 msg=memory allocated.InputWeights=1158266880A allocated.CPU.Weights="[477075584A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 1158278400A]" allocated.CPU.Cache="[9437184A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U]" allocated.CPU.Graph=1238761472A allocated.CUDA0.ID=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f allocated.CUDA0.Weights="[0U 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 0U]" allocated.CUDA0.Cache="[0U 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 0U]" allocated.CUDA0.Graph=2227046656A
time=2025-08-06T01:59:45.992Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
time=2025-08-06T01:59:45.992Z level=DEBUG source=server.go:643 msg="model load progress 0.08"
time=2025-08-06T01:59:46.242Z level=DEBUG source=server.go:643 msg="model load progress 0.24"
time=2025-08-06T01:59:46.493Z level=DEBUG source=server.go:643 msg="model load progress 0.37"
time=2025-08-06T01:59:46.744Z level=DEBUG source=server.go:643 msg="model load progress 0.45"
time=2025-08-06T01:59:46.995Z level=DEBUG source=server.go:643 msg="model load progress 0.59"
time=2025-08-06T01:59:47.245Z level=DEBUG source=server.go:643 msg="model load progress 0.75"
time=2025-08-06T01:59:47.496Z level=DEBUG source=server.go:643 msg="model load progress 0.92"
time=2025-08-06T01:59:47.747Z level=INFO source=server.go:637 msg="llama runner started in 2.01 seconds"
time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:493 msg="finished setting up" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192
time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:501 msg="context for request finished"
time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 duration=30m0s
time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:359 msg="after processing request finished event" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 refCount=0
[GIN] 2025/08/06 - 01:59:47 | 200 |  2.448412456s |       127.0.0.1 | POST     "/api/generate"

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.11.2
level=INFO source=ggml.go:367 msg="offloading 23 repeating layers to GPU" time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:371 msg="offloading output layer to CPU" time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:378 msg="offloaded 23/25 layers to GPU" time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:381 msg="model weights" buffer=CUDA0 size="10.2 GiB" time=2025-08-06T01:59:45.884Z level=INFO source=ggml.go:381 msg="model weights" buffer=CPU size="2.6 GiB" time=2025-08-06T01:59:45.884Z level=DEBUG source=ggml.go:208 msg="key with type not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-08-06T01:59:45.891Z level=DEBUG source=ggml.go:654 msg="compute graph" nodes=1847 splits=3 time=2025-08-06T01:59:45.891Z level=INFO source=ggml.go:672 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="2.1 GiB" time=2025-08-06T01:59:45.891Z level=INFO source=ggml.go:672 msg="compute graph" backend=CPU buffer_type=CPU size="1.2 GiB" time=2025-08-06T01:59:45.891Z level=DEBUG source=runner.go:883 msg=memory allocated.InputWeights=1158266880A allocated.CPU.Weights="[477075584A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 1158278400A]" allocated.CPU.Cache="[9437184A 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U 0U]" allocated.CPU.Graph=1238761472A allocated.CUDA0.ID=GPU-fc910dbc-2430-88ae-72d3-514d41cbac0f allocated.CUDA0.Weights="[0U 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 477075840A 0U]" 
allocated.CUDA0.Cache="[0U 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 9437184A 16777216A 0U]" allocated.CUDA0.Graph=2227046656A time=2025-08-06T01:59:45.992Z level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model" time=2025-08-06T01:59:45.992Z level=DEBUG source=server.go:643 msg="model load progress 0.08" time=2025-08-06T01:59:46.242Z level=DEBUG source=server.go:643 msg="model load progress 0.24" time=2025-08-06T01:59:46.493Z level=DEBUG source=server.go:643 msg="model load progress 0.37" time=2025-08-06T01:59:46.744Z level=DEBUG source=server.go:643 msg="model load progress 0.45" time=2025-08-06T01:59:46.995Z level=DEBUG source=server.go:643 msg="model load progress 0.59" time=2025-08-06T01:59:47.245Z level=DEBUG source=server.go:643 msg="model load progress 0.75" time=2025-08-06T01:59:47.496Z level=DEBUG source=server.go:643 msg="model load progress 0.92" time=2025-08-06T01:59:47.747Z level=INFO source=server.go:637 msg="llama runner started in 2.01 seconds" time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:493 msg="finished setting up" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:501 msg="context for request finished" time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 
runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 duration=30m0s time=2025-08-06T01:59:47.747Z level=DEBUG source=sched.go:359 msg="after processing request finished event" runner.name=registry.ollama.ai/library/test:latest runner.inference=cuda runner.devices=1 runner.size="16.9 GiB" runner.vram="15.4 GiB" runner.parallel=1 runner.pid=56 runner.model=/root/.ollama/models/blobs/sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 runner.num_ctx=8192 refCount=0 [GIN] 2025/08/06 - 01:59:47 | 200 | 2.448412456s | 127.0.0.1 | POST "/api/generate" ``` ### OS Docker ### GPU Nvidia ### CPU Intel ### Ollama version 0.11.2
GiteaMirror added the bug label 2026-04-12 19:53:06 -05:00

@CaffeeLake commented on GitHub (Aug 6, 2025):

This model requires a minimum context to function effectively

https://github.com/ollama/ollama/blob/4742e12c2360bd2b43aedcf6d11cefc3a048f791/server/routes.go#L115-L118

Reference: github-starred/ollama#7752