[GH-ISSUE #12927] Modelfile PARAMETER num_ctx is ignored when lower than model's native context length #8574

Closed
opened 2026-04-12 21:18:26 -05:00 by GiteaMirror · 4 comments

Originally created by @SuperSonnix71 on GitHub (Nov 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12927

Modelfile PARAMETER num_ctx is ignored when lower than model's native context length

Hey folks, ran into a frustrating issue where setting PARAMETER num_ctx in a Modelfile doesn't actually work if you're trying to reduce the context below what the model was trained with.

The Problem

I have a custom model (sqlcoder-7b-2.fp16.gguf) with a native context of 16384. I want to limit it to 4096 to save memory, especially when running with OLLAMA_NUM_PARALLEL=4.

Here's my Modelfile:

FROM /usr/share/ollama/.ollama/models/sqlcoder-7b-2.fp16.gguf
PARAMETER num_thread 8
PARAMETER num_ctx 4096
PARAMETER top_k 10
PARAMETER top_p 0.3

After creating the model with ollama create sql2 -f modelfile, the model completely ignores the num_ctx 4096 setting:

$ ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL              
sql2:latest    9e4fefbf18f8    70 GB    100% GPU     16384      4 minutes from now

Notice it's using 16384 context (the native length) instead of 4096 (what I set).
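
As a sanity check (standard ollama commands, not something I ran in the original report), it's worth confirming that num_ctx was at least stored in the created model. If it shows up here, the parameter is recorded at create time and lost at load time:

$ ollama show sql2 --parameters                                  # lists parameters baked into the model
$ curl http://localhost:11434/api/show -d '{"model": "sql2"}'    # same information via the API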

Why This Matters

With OLLAMA_NUM_PARALLEL=4, this creates a massive KV cache:

  • 4 parallel × 16384 context = 65,536 total context pool
  • KV cache allocation: 32 GB just for the cache
  • Total model size: 70 GB (13 GB model + 32 GB KV cache + overhead)

When I actually want (see the arithmetic check after this list):

  • 4 parallel × 4096 context = 16,384 total context pool
  • KV cache allocation: 8 GB
  • Total model size: 24 GB (13 GB model + 8 GB KV cache + overhead)
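
These sizes line up with a back-of-the-envelope KV-cache calculation from this model's own metadata (n_layer = 32, per-token K/V width 4096, f16 cache = 2 bytes per element, K and V both stored):

# KV bytes = 2 (K+V) × n_layer × total_ctx × n_embd_kv × 2 bytes (f16)
$ echo $((2 * 32 * 65536 * 4096 * 2))    # 4 parallel × 16384 ctx
34359738368                              # = 32 GiB, what I'm seeing
$ echo $((2 * 32 * 16384 * 4096 * 2))    # 4 parallel × 4096 ctx
8589934592                               # = 8 GiB, what I want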

Logs Showing the Issue

Here's what the logs show when loading the model with PARAMETER num_ctx 4096 in the Modelfile:

print_info: n_ctx_train      = 16384
print_info: n_ctx_orig_yarn  = 16384
llama_context: n_ctx         = 65536
llama_context: n_ctx_per_seq = 16384
llama_kv_cache_unified:      CUDA0 KV buffer size = 11264.00 MiB
llama_kv_cache_unified:      CUDA1 KV buffer size = 11264.00 MiB
llama_kv_cache_unified:      CUDA2 KV buffer size = 10240.00 MiB
llama_kv_cache_unified: size = 32768.00 MiB (16384 cells, 32 layers, 4/4 seqs)

See how n_ctx_per_seq = 16384 instead of 4096? The Modelfile setting is completely ignored.

The Non-Working Workaround

The only way I could get it to work (temporarily) was by setting an environment variable in the systemd service:

Environment="OLLAMA_CONTEXT_LENGTH=4096"
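
One way to apply this without editing the unit file directly is a systemd drop-in (a sketch, assuming the default service name ollama):

$ sudo systemctl edit ollama
# in the override file that opens, add:
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=4096"
$ sudo systemctl restart ollama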

After adding this and restarting:

$ ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL              
sql2:latest    9e4fefbf18f8    24 GB    100% GPU     4096       4 minutes from now

Logs showed:

print_info: n_ctx_train      = 16384
print_info: n_ctx_orig_yarn  = 16384
llama_context: n_ctx         = 16384
llama_context: n_ctx_per_seq = 4096
llama_context: n_ctx_per_seq (4096) < n_ctx_train (16384) -- the full capacity of the model will not be utilized
llama_kv_cache_unified:      CUDA0 KV buffer size = 8192.00 MiB
llama_kv_cache_unified: size = 8192.00 MiB (4096 cells, 32 layers, 4/4 seqs)

Perfect! Now n_ctx_per_seq = 4096 and memory usage is down to 24 GB.

And then, all of a sudden, when it gets loaded again I see:

NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
sql2:latest    9e4fefbf18f8    70 GB    100% GPU     16384      4 minutes from now

Expected Behavior

PARAMETER num_ctx in the Modelfile should be respected, even when setting a context length lower than the model's native training context. If there's a reason it can't be lower, there should at least be a warning when creating the model.

Environment

  • Ollama version: (latest as of Nov 2025)
  • OS: Linux (Ubuntu with systemd)
  • GPUs: 6x RTX 4090 (24GB each)
  • NVIDIA Driver: 580.95.05
  • CUDA: 13.0

Reproduction

  1. Take any model with a large native context (e.g., 16384 or 32768)
  2. Create a Modelfile with PARAMETER num_ctx 4096
  3. Run ollama create mymodel -f modelfile
  4. Load the model and check ollama ps - context will be native length, not 4096
  5. Check logs - n_ctx_per_seq will show native length (a scripted version of these steps follows below)
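
The same steps as a script (the GGUF path and model name here are placeholders for illustration, not my actual setup):

# hypothetical Modelfile for reproduction
$ cat > Modelfile <<'EOF'
FROM /path/to/model-with-large-native-context.gguf
PARAMETER num_ctx 4096
EOF
$ ollama create mymodel -f Modelfile
$ ollama run mymodel "hello" >/dev/null   # trigger a load
$ ollama ps                               # CONTEXT shows the native length, not 4096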

Would be great if Modelfile parameters actually worked as documented, or if there's a clear explanation of why they can't override native context lengths.


@rick-github commented on GitHub (Nov 3, 2025):

Server logs (https://docs.ollama.com/troubleshooting) will aid in debugging.


@SuperSonnix71 commented on GitHub (Nov 3, 2025):

@rick-github Thanks for your response! Here it is.
The model file:

─(base) ○ cat sql2                              
FROM /usr/share/ollama/.ollama/models/sqlcoder-7b-2.fp16.gguf
PARAMETER num_thread 8
PARAMETER num_ctx 4096
PARAMETER top_k 10
PARAMETER top_p 0.3

Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-c9a7f0b0-254d-1664-e79b-3ad372c29137 filtered_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:01:00.0 type=discrete total="24.0 GiB" available="23.5 GiB"
Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-49bf12b5-ea42-8e6d-4bf1-909576960b91 filtered_id="" library=CUDA compute=8.9 name=CUDA1 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:2c:00.0 type=discrete total="24.0 GiB" available="23.1 GiB"
Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-b6a21b7a-b69b-1c62-87b9-c6bc10a9e76f filtered_id="" library=CUDA compute=8.9 name=CUDA2 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:41:00.0 type=discrete total="24.0 GiB" available="23.1 GiB"
Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-05f5177d-1479-1ce3-4669-d95c29517009 filtered_id="" library=CUDA compute=8.9 name=CUDA3 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:42:00.0 type=discrete total="24.0 GiB" available="23.1 GiB"
Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-192b484e-d0b3-07dd-9c7a-97058b0b2bc4 filtered_id="" library=CUDA compute=8.9 name=CUDA4 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:61:00.0 type=discrete total="24.0 GiB" available="23.1 GiB"
Nov 03 19:52:06 ai ollama[86728]: time=2025-11-03T19:52:06.698+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-8f72f4b1-0bf7-18e1-d842-a2a7f657fad1 filtered_id="" library=CUDA compute=8.9 name=CUDA5 description="NVIDIA GeForce RTX 4090" libdirs=ollama,cuda_v13 driver=13.0 pci_id=0000:62:00.0 type=discrete total="24.0 GiB" available="23.1 GiB"
Nov 03 19:52:27 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:27 | 200 |   13.348198ms |      172.17.0.4 | POST     "/api/show"
Nov 03 19:52:27 ai ollama[86728]: time=2025-11-03T19:52:27.613+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 41671"
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-5b119e7341d5f0791dd52da6a02b77e02263392331b6d18d8e5708d6290d12c5 (version GGUF V3 (latest))
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   1:                               general.name str              = hub
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   4:                          llama.block_count u32              = 32
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  11:                          general.file_type u32              = 1
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32016]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32016]   = [0.000000, 0.000000, 0.000000, 0.0000...
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32016]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - type  f32:   65 tensors
Nov 03 19:52:29 ai ollama[86728]: llama_model_loader: - type  f16:  226 tensors
Nov 03 19:52:29 ai ollama[86728]: print_info: file format = GGUF V3 (latest)
Nov 03 19:52:29 ai ollama[86728]: print_info: file type   = F16
Nov 03 19:52:29 ai ollama[86728]: print_info: file size   = 12.55 GiB (16.00 BPW)
Nov 03 19:52:29 ai ollama[86728]: load: control-looking token:  32007 '▁<PRE>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:29 ai ollama[86728]: load: control-looking token:  32009 '▁<MID>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:29 ai ollama[86728]: load: control-looking token:  32008 '▁<SUF>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:29 ai ollama[86728]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Nov 03 19:52:29 ai ollama[86728]: load: printing all EOG tokens:
Nov 03 19:52:29 ai ollama[86728]: load:   - 2 ('</s>')
Nov 03 19:52:29 ai ollama[86728]: load: special tokens cache size = 6
Nov 03 19:52:29 ai ollama[86728]: load: token to piece cache size = 0.1686 MB
Nov 03 19:52:29 ai ollama[86728]: print_info: arch             = llama
Nov 03 19:52:29 ai ollama[86728]: print_info: vocab_only       = 1
Nov 03 19:52:29 ai ollama[86728]: print_info: model type       = ?B
Nov 03 19:52:29 ai ollama[86728]: print_info: model params     = 6.74 B
Nov 03 19:52:29 ai ollama[86728]: print_info: general.name     = hub
Nov 03 19:52:29 ai ollama[86728]: print_info: vocab type       = SPM
Nov 03 19:52:29 ai ollama[86728]: print_info: n_vocab          = 32016
Nov 03 19:52:29 ai ollama[86728]: print_info: n_merges         = 0
Nov 03 19:52:29 ai ollama[86728]: print_info: BOS token        = 1 '<s>'
Nov 03 19:52:29 ai ollama[86728]: print_info: EOS token        = 2 '</s>'
Nov 03 19:52:29 ai ollama[86728]: print_info: UNK token        = 0 '<unk>'
Nov 03 19:52:29 ai ollama[86728]: print_info: LF token         = 13 '<0x0A>'
Nov 03 19:52:29 ai ollama[86728]: print_info: FIM PRE token    = 32007 '▁<PRE>'
Nov 03 19:52:29 ai ollama[86728]: print_info: FIM SUF token    = 32008 '▁<SUF>'
Nov 03 19:52:29 ai ollama[86728]: print_info: FIM MID token    = 32009 '▁<MID>'
Nov 03 19:52:29 ai ollama[86728]: print_info: EOG token        = 2 '</s>'
Nov 03 19:52:29 ai ollama[86728]: print_info: max token length = 48
Nov 03 19:52:29 ai ollama[86728]: llama_model_load: vocab only - skipping tensors
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.033+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-5b119e7341d5f0791dd52da6a02b77e02263392331b6d18d8e5708d6290d12c5 --port 36235"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.033+01:00 level=INFO source=server.go:470 msg="system memory" total="251.5 GiB" free="183.7 GiB" free_swap="136.0 GiB"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.034+01:00 level=INFO source=memory.go:37 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-5b119e7341d5f0791dd52da6a02b77e02263392331b6d18d8e5708d6290d12c5 library=CUDA parallel=1 required="22.4 GiB" gpus=1
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.034+01:00 level=INFO source=server.go:522 msg=offload library=CUDA layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.4 GiB" memory.required.partial="22.4 GiB" memory.required.kv="8.0 GiB" memory.required.allocations="[22.4 GiB]" memory.weights.total="12.3 GiB" memory.weights.repeating="12.1 GiB" memory.weights.nonrepeating="250.1 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.3 GiB"
Nov 03 19:52:29 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:29 | 200 |    2.763606ms |    100.64.0.255 | GET      "/v1/models"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.047+01:00 level=INFO source=runner.go:910 msg="starting go runner"
Nov 03 19:52:29 ai ollama[86728]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
Nov 03 19:52:29 ai ollama[86728]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Nov 03 19:52:29 ai ollama[86728]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Nov 03 19:52:29 ai ollama[86728]: ggml_cuda_init: found 6 CUDA devices:
Nov 03 19:52:29 ai ollama[86728]:   Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-c9a7f0b0-254d-1664-e79b-3ad372c29137
Nov 03 19:52:29 ai ollama[86728]:   Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-49bf12b5-ea42-8e6d-4bf1-909576960b91
Nov 03 19:52:29 ai ollama[86728]:   Device 2: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-b6a21b7a-b69b-1c62-87b9-c6bc10a9e76f
Nov 03 19:52:29 ai ollama[86728]:   Device 3: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-05f5177d-1479-1ce3-4669-d95c29517009
Nov 03 19:52:29 ai ollama[86728]:   Device 4: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-192b484e-d0b3-07dd-9c7a-97058b0b2bc4
Nov 03 19:52:29 ai ollama[86728]:   Device 5: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-8f72f4b1-0bf7-18e1-d842-a2a7f657fad1
Nov 03 19:52:29 ai ollama[86728]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.759+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 CUDA.2.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.2.USE_GRAPHS=1 CUDA.2.PEER_MAX_BATCH_SIZE=128 CUDA.3.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.3.USE_GRAPHS=1 CUDA.3.PEER_MAX_BATCH_SIZE=128 CUDA.4.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.4.USE_GRAPHS=1 CUDA.4.PEER_MAX_BATCH_SIZE=128 CUDA.5.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.5.USE_GRAPHS=1 CUDA.5.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.759+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:36235"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.766+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:16384 KvCacheType: NumThreads:8 GPULayers:33[ID:GPU-c9a7f0b0-254d-1664-e79b-3ad372c29137 Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.766+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 03 19:52:29 ai ollama[86728]: time=2025-11-03T19:52:29.767+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
Nov 03 19:52:29 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-c9a7f0b0-254d-1664-e79b-3ad372c29137 utilizing NVML memory reporting free: 25247612928 total: 25757220864
Nov 03 19:52:29 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) (0000:01:00.0) - 24078 MiB free
Nov 03 19:52:29 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-49bf12b5-ea42-8e6d-4bf1-909576960b91 utilizing NVML memory reporting free: 24837292032 total: 25757220864
Nov 03 19:52:29 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090) (0000:2c:00.0) - 23686 MiB free
Nov 03 19:52:30 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-b6a21b7a-b69b-1c62-87b9-c6bc10a9e76f utilizing NVML memory reporting free: 24837292032 total: 25757220864
Nov 03 19:52:30 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090) (0000:41:00.0) - 23686 MiB free
Nov 03 19:52:30 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-05f5177d-1479-1ce3-4669-d95c29517009 utilizing NVML memory reporting free: 24837292032 total: 25757220864
Nov 03 19:52:30 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA3 (NVIDIA GeForce RTX 4090) (0000:42:00.0) - 23686 MiB free
Nov 03 19:52:30 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-192b484e-d0b3-07dd-9c7a-97058b0b2bc4 utilizing NVML memory reporting free: 24837292032 total: 25757220864
Nov 03 19:52:30 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA4 (NVIDIA GeForce RTX 4090) (0000:61:00.0) - 23686 MiB free
Nov 03 19:52:30 ai ollama[86728]: ggml_backend_cuda_device_get_memory device GPU-8f72f4b1-0bf7-18e1-d842-a2a7f657fad1 utilizing NVML memory reporting free: 24837292032 total: 25757220864
Nov 03 19:52:30 ai ollama[86728]: llama_model_load_from_file_impl: using device CUDA5 (NVIDIA GeForce RTX 4090) (0000:62:00.0) - 23686 MiB free
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-5b119e7341d5f0791dd52da6a02b77e02263392331b6d18d8e5708d6290d12c5 (version GGUF V3 (latest))
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   1:                               general.name str              = hub
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   2:                       llama.context_length u32              = 16384
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   4:                          llama.block_count u32              = 32
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  11:                          general.file_type u32              = 1
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32016]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32016]   = [0.000000, 0.000000, 0.000000, 0.0000...
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32016]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - type  f32:   65 tensors
Nov 03 19:52:30 ai ollama[86728]: llama_model_loader: - type  f16:  226 tensors
Nov 03 19:52:30 ai ollama[86728]: print_info: file format = GGUF V3 (latest)
Nov 03 19:52:30 ai ollama[86728]: print_info: file type   = F16
Nov 03 19:52:30 ai ollama[86728]: print_info: file size   = 12.55 GiB (16.00 BPW)
Nov 03 19:52:30 ai ollama[86728]: load: control-looking token:  32007 '▁<PRE>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:30 ai ollama[86728]: load: control-looking token:  32009 '▁<MID>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:30 ai ollama[86728]: load: control-looking token:  32008 '▁<SUF>' was not control-type; this is probably a bug in the model. its type will be overridden
Nov 03 19:52:30 ai ollama[86728]: load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
Nov 03 19:52:30 ai ollama[86728]: load: printing all EOG tokens:
Nov 03 19:52:30 ai ollama[86728]: load:   - 2 ('</s>')
Nov 03 19:52:30 ai ollama[86728]: load: special tokens cache size = 6
Nov 03 19:52:30 ai ollama[86728]: load: token to piece cache size = 0.1686 MB
Nov 03 19:52:30 ai ollama[86728]: print_info: arch             = llama
Nov 03 19:52:30 ai ollama[86728]: print_info: vocab_only       = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: n_ctx_train      = 16384
Nov 03 19:52:30 ai ollama[86728]: print_info: n_embd           = 4096
Nov 03 19:52:30 ai ollama[86728]: print_info: n_layer          = 32
Nov 03 19:52:30 ai ollama[86728]: print_info: n_head           = 32
Nov 03 19:52:30 ai ollama[86728]: print_info: n_head_kv        = 32
Nov 03 19:52:30 ai ollama[86728]: print_info: n_rot            = 128
Nov 03 19:52:30 ai ollama[86728]: print_info: n_swa            = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: is_swa_any       = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: n_embd_head_k    = 128
Nov 03 19:52:30 ai ollama[86728]: print_info: n_embd_head_v    = 128
Nov 03 19:52:30 ai ollama[86728]: print_info: n_gqa            = 1
Nov 03 19:52:30 ai ollama[86728]: print_info: n_embd_k_gqa     = 4096
Nov 03 19:52:30 ai ollama[86728]: print_info: n_embd_v_gqa     = 4096
Nov 03 19:52:30 ai ollama[86728]: print_info: f_norm_eps       = 0.0e+00
Nov 03 19:52:30 ai ollama[86728]: print_info: f_norm_rms_eps   = 1.0e-05
Nov 03 19:52:30 ai ollama[86728]: print_info: f_clamp_kqv      = 0.0e+00
Nov 03 19:52:30 ai ollama[86728]: print_info: f_max_alibi_bias = 0.0e+00
Nov 03 19:52:30 ai ollama[86728]: print_info: f_logit_scale    = 0.0e+00
Nov 03 19:52:30 ai ollama[86728]: print_info: f_attn_scale     = 0.0e+00
Nov 03 19:52:30 ai ollama[86728]: print_info: n_ff             = 11008
Nov 03 19:52:30 ai ollama[86728]: print_info: n_expert         = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: n_expert_used    = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: causal attn      = 1
Nov 03 19:52:30 ai ollama[86728]: print_info: pooling type     = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: rope type        = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: rope scaling     = linear
Nov 03 19:52:30 ai ollama[86728]: print_info: freq_base_train  = 1000000.0
Nov 03 19:52:30 ai ollama[86728]: print_info: freq_scale_train = 1
Nov 03 19:52:30 ai ollama[86728]: print_info: n_ctx_orig_yarn  = 16384
Nov 03 19:52:30 ai ollama[86728]: print_info: rope_finetuned   = unknown
Nov 03 19:52:30 ai ollama[86728]: print_info: model type       = 7B
Nov 03 19:52:30 ai ollama[86728]: print_info: model params     = 6.74 B
Nov 03 19:52:30 ai ollama[86728]: print_info: general.name     = hub
Nov 03 19:52:30 ai ollama[86728]: print_info: vocab type       = SPM
Nov 03 19:52:30 ai ollama[86728]: print_info: n_vocab          = 32016
Nov 03 19:52:30 ai ollama[86728]: print_info: n_merges         = 0
Nov 03 19:52:30 ai ollama[86728]: print_info: BOS token        = 1 '<s>'
Nov 03 19:52:30 ai ollama[86728]: print_info: EOS token        = 2 '</s>'
Nov 03 19:52:30 ai ollama[86728]: print_info: UNK token        = 0 '<unk>'
Nov 03 19:52:30 ai ollama[86728]: print_info: LF token         = 13 '<0x0A>'
Nov 03 19:52:30 ai ollama[86728]: print_info: FIM PRE token    = 32007 '▁<PRE>'
Nov 03 19:52:30 ai ollama[86728]: print_info: FIM SUF token    = 32008 '▁<SUF>'
Nov 03 19:52:30 ai ollama[86728]: print_info: FIM MID token    = 32009 '▁<MID>'
Nov 03 19:52:30 ai ollama[86728]: print_info: EOG token        = 2 '</s>'
Nov 03 19:52:30 ai ollama[86728]: print_info: max token length = 48
Nov 03 19:52:30 ai ollama[86728]: load_tensors: loading model tensors, this can take a while... (mmap = true)
Nov 03 19:52:31 ai ollama[86728]: load_tensors: offloading 32 repeating layers to GPU
Nov 03 19:52:31 ai ollama[86728]: load_tensors: offloading output layer to GPU
Nov 03 19:52:31 ai ollama[86728]: load_tensors: offloaded 33/33 layers to GPU
Nov 03 19:52:31 ai ollama[86728]: load_tensors:        CUDA0 model buffer size = 12603.14 MiB
Nov 03 19:52:31 ai ollama[86728]: load_tensors:   CPU_Mapped model buffer size =   250.12 MiB
Nov 03 19:52:32 ai ollama[86728]: llama_init_from_model: model default pooling_type is [0], but [-1] was specified
Nov 03 19:52:32 ai ollama[86728]: llama_context: constructing llama_context
Nov 03 19:52:32 ai ollama[86728]: llama_context: n_seq_max     = 1
Nov 03 19:52:32 ai ollama[86728]: llama_context: n_ctx         = 16384
Nov 03 19:52:32 ai ollama[86728]: llama_context: n_ctx_per_seq = 16384
Nov 03 19:52:32 ai ollama[86728]: llama_context: n_batch       = 512
Nov 03 19:52:32 ai ollama[86728]: llama_context: n_ubatch      = 512
Nov 03 19:52:32 ai ollama[86728]: llama_context: causal_attn   = 1
Nov 03 19:52:32 ai ollama[86728]: llama_context: flash_attn    = disabled
Nov 03 19:52:32 ai ollama[86728]: llama_context: kv_unified    = false
Nov 03 19:52:32 ai ollama[86728]: llama_context: freq_base     = 1000000.0
Nov 03 19:52:32 ai ollama[86728]: llama_context: freq_scale    = 1
Nov 03 19:52:32 ai ollama[86728]: llama_context:  CUDA_Host  output buffer size =     0.14 MiB
Nov 03 19:52:32 ai ollama[86728]: llama_kv_cache:      CUDA0 KV buffer size =  8192.00 MiB
Nov 03 19:52:32 ai ollama[86728]: llama_kv_cache: size = 8192.00 MiB ( 16384 cells,  32 layers,  1/1 seqs), K (f16): 4096.00 MiB, V (f16): 4096.00 MiB
Nov 03 19:52:32 ai ollama[86728]: llama_context: pipeline parallelism enabled (n_copies=4)
Nov 03 19:52:33 ai ollama[86728]: llama_context:      CUDA0 compute buffer size =  1280.03 MiB
Nov 03 19:52:33 ai ollama[86728]: llama_context:  CUDA_Host compute buffer size =   200.04 MiB
Nov 03 19:52:33 ai ollama[86728]: llama_context: graph nodes  = 1158
Nov 03 19:52:33 ai ollama[86728]: llama_context: graph splits = 2
Nov 03 19:52:33 ai ollama[86728]: time=2025-11-03T19:52:33.279+01:00 level=INFO source=server.go:1289 msg="llama runner started in 4.25 seconds"
Nov 03 19:52:33 ai ollama[86728]: time=2025-11-03T19:52:33.279+01:00 level=INFO source=sched.go:493 msg="loaded runners" count=1
Nov 03 19:52:33 ai ollama[86728]: time=2025-11-03T19:52:33.279+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
Nov 03 19:52:33 ai ollama[86728]: time=2025-11-03T19:52:33.280+01:00 level=INFO source=server.go:1289 msg="llama runner started in 4.25 seconds"
Nov 03 19:52:34 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:34 | 200 |  6.800893026s |      172.17.0.4 | POST     "/api/chat"
Nov 03 19:52:41 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:41 | 200 |      30.038µs |       127.0.0.1 | HEAD     "/"
Nov 03 19:52:41 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:41 | 200 |     105.085µs |       127.0.0.1 | GET      "/api/ps"
Nov 03 19:52:59 ai ollama[86728]: [GIN] 2025/11/03 - 19:52:59 | 200 |    3.324158ms |    100.64.0.255 | GET      "/v1/models"
Nov 03 19:53:29 ai ollama[86728]: [GIN] 2025/11/03 - 19:53:29 | 200 |    3.760796ms |    100.64.0.255 | GET      "/v1/models"
Nov 03 19:53:59 ai ollama[86728]: [GIN] 2025/11/03 - 19:53:59 | 200 |    3.415365ms |    100.64.0.255 | GET      "/v1/models"
Nov 03 19:54:29 ai ollama[86728]: [GIN] 2025/11/03 - 19:54:29 | 200 |    3.343757ms |    100.64.0.255 | GET      "/v1/models"
Nov 03 19:54:59 ai ollama[86728]: [GIN] 2025/11/03 - 19:54:59 | 200 |    4.090421ms |    100.64.0.255 | GET      "/v1/models"

@rick-github commented on GitHub (Nov 3, 2025):

What client are you using to trigger the model load? Would you be comfortable with pushing the model to the ollama user library so it could be downloaded and examined?


@SuperSonnix71 commented on GitHub (Nov 3, 2025):

Hey folks, quick update after testing this more thoroughly:

**Everything's working correctly in Ollama 0.12.9**

I updated to the latest version (0.12.9) and ran extensive tests on my setup with 6× RTX 4090 GPUs. It turns out both the Modelfile `PARAMETER num_ctx` and the `OLLAMA_CONTEXT_LENGTH` environment variable work perfectly fine.


My Test Setup

Hardware:

  • 6× NVIDIA RTX 4090 GPUs (24 GB each)
  • NVIDIA Driver: 580.95.05
  • CUDA: 13.0

Software:

  • Ollama version: 0.12.9
  • OS: Linux (Ubuntu with systemd)

Service config:

Environment="OLLAMA_CONTEXT_LENGTH=8192"
Environment="OLLAMA_NUM_PARALLEL=4"
Environment="OLLAMA_FLASH_ATTENTION=1"

sql2 Modelfile (sqlcoder-7b-2.fp16.gguf, 13 GB):

FROM /usr/share/ollama/.ollama/models/sqlcoder-7b-2.fp16.gguf
PARAMETER num_thread 8
PARAMETER num_ctx 4096
PARAMETER top_k 10
PARAMETER top_p 0.3

Test Results

Single Model Tests

| Model | Native Context | Actual Context | Memory | What Controlled It |
|-------|----------------|----------------|--------|--------------------|
| llama3.2:latest | 131,072 | **8,192** | 4 GB | OLLAMA_CONTEXT_LENGTH |
| sql2 (sqlcoder-7b-2) | 16,384 | **4,096** | 24 GB | Modelfile PARAMETER |
| qwen3:32b | 40,960 | **8,192** | 29 GB | OLLAMA_CONTEXT_LENGTH |

Logs for sql2 with OLLAMA_NUM_PARALLEL=4:

Parallel: 4
KvSize: 16384 (4 × 4096)
n_ctx_per_seq: 4096
KV buffer: 8192 MiB (8 GB)
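
As a sanity check on those numbers, here is a back-of-the-envelope sketch of the KV-cache sizing. The architecture values (32 layers, 4096 hidden size, f16 cache, no grouped-query attention) are assumptions for a standard Llama-7B-class model, not values read from this particular GGUF:

```python
# Back-of-the-envelope KV-cache sizing sketch. The architecture numbers are
# assumptions for a Llama-7B-class model (32 layers, 4096 hidden size, no
# GQA, f16 cache); read the GGUF metadata for exact values.
n_layers = 32        # repeating layers reported in the load logs
n_embd = 4096        # hidden size; without GQA, K and V each use full width
bytes_per_elem = 2   # f16
num_parallel = 4     # OLLAMA_NUM_PARALLEL
num_ctx = 4096       # PARAMETER num_ctx, per sequence

kv_cells = num_parallel * num_ctx    # total cells across sequences
k_plus_v = 2                         # the cache stores both K and V
kv_bytes = k_plus_v * n_layers * kv_cells * n_embd * bytes_per_elem
print(kv_cells)                      # 16384, matching KvSize
print(kv_bytes // 2**20, "MiB")      # 8192 MiB, matching the KV buffer above
```

Plugging 4 × 16384 cells into the same formula gives 32768 MiB, which lines up with the oversized cache from before the update.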

Multiple Models Running Together

Loaded 3 big models at once to see what happens:

$ ollama ps
NAME            ID              SIZE     PROCESSOR    CONTEXT    
sql2:latest     9e4fefbf18f8    24 GB    100% GPU     4096       
qwen3:32b       e1c9f234c6eb    29 GB    100% GPU     8192       
gpt-oss:120b    735371f916a9    67 GB    100% GPU     8192       

Total memory: ~120 GB across all 6 GPUs.
All models kept their correct context lengths.
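
For repeatable checks across reloads, the same information `ollama ps` prints can be read from the REST API. Here's a minimal sketch, assuming the default localhost:11434 endpoint and that the `/api/ps` response carries a `context_length` field (the source of the CONTEXT column above):

```python
# Minimal sketch: list loaded models with their effective context lengths.
# Assumes the default Ollama endpoint and that /api/ps reports a
# context_length field (where the CONTEXT column in `ollama ps` comes from).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    running = json.load(resp)

for m in running.get("models", []):
    print(f"{m.get('name')}: context={m.get('context_length', 'n/a')}, "
          f"size={m.get('size', 0) / 1e9:.0f} GB")
```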


Priority Order (confirmed working)

  1. Modelfile PARAMETER num_ctx (takes precedence)
  2. OLLAMA_CONTEXT_LENGTH environment variable
  3. Model's native context (fallback)
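
A quick way to see which of these layers is in play for a given model is to read back what was baked in at create time. Here's a minimal sketch using the `/api/show` endpoint, whose `parameters` field echoes the Modelfile PARAMETER lines:

```python
# Minimal sketch: read back the parameters baked into a model at create time
# via POST /api/show; the "parameters" field echoes the Modelfile PARAMETER
# lines (e.g. "num_ctx 4096"). Note: older servers expect {"name": ...}
# instead of {"model": ...} in the request body.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"model": "sql2"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp).get("parameters", "(none recorded)"))
```

If `num_ctx 4096` shows up here but the CONTEXT column still reads 16384, the override is being lost at load time rather than at create time, which is what the original report appeared to show.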

So it looks like the original issue was either fixed in a recent update or I had something misconfigured initially. After updating to 0.12.9 and testing with multiple models (including ones with huge native contexts like 40k-130k), everything works as expected. Both Modelfile parameters and environment variables properly override the native context length.

Reference: github-starred/ollama#8574