[GH-ISSUE #12149] [Model Request] Support new Apertus model #54591

Open
opened 2026-04-29 06:28:46 -05:00 by GiteaMirror · 34 comments

Originally created by @loleg on GitHub (Sep 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12149

This is a new model from the Swiss AI initiative. It currently does not load, failing with:

`Error: unsupported architecture "ApertusForCausalLM"`

Some tips on getting Transformers updated on the [Hugging Face repo](https://huggingface.co/swiss-ai/Apertus-8B-2509#how-to-use) could be useful here.
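
For reference, this is roughly how the failure surfaces when importing the safetensors weights with `ollama create` (a sketch; the model name and Modelfile are illustrative):

```console
$ ollama create apertus -f Modelfile   # FROM points at the downloaded swiss-ai/Apertus-8B-2509 weights
Error: unsupported architecture "ApertusForCausalLM"
```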

GiteaMirror added the model label 2026-04-29 06:28:46 -05:00

@abenmrad commented on GitHub (Sep 2, 2025):

Relevant changes to Transformers were added in v4.56.0 ([Compare with v4.55.0](https://github.com/huggingface/transformers/compare/v4.55.4...v4.56.0))

Key file containing the config seems to be [modular_apertus.py](https://github.com/huggingface/transformers/blob/e7d351cebad5f6dcdd169b0c034fdee0a000e6a9/src/transformers/models/apertus/modular_apertus.py).

I see that it inherits from Llama and Nemotron classes, in some places at least.
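
A quick way to confirm a local Transformers install is new enough (a sketch, assuming the Apertus class is exported at the top level like other architectures):

```console
$ pip install -U "transformers>=4.56.0"
$ python -c "from transformers import ApertusForCausalLM; print('apertus supported')"
apertus supported
```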

@chefrocker commented on GitHub (Sep 2, 2025):

The model was trained on 1,000 different languages. I'd be happy if it ran on Ollama soon.

@abenmrad commented on GitHub (Sep 2, 2025):

[The PR that added Apertus to Transformers](https://github.com/huggingface/transformers/pull/39381)

@rick-github commented on GitHub (Sep 4, 2025):

https://github.com/ggml-org/llama.cpp/issues/15748

@loleg commented on GitHub (Sep 5, 2025):

Looks like we are getting close here.
@ollamabot can you suggest how to help with integration or testing to speed this up?

@magnus919 commented on GitHub (Sep 9, 2025):

> It would be great to have it. Liip says they have integrated it with openwebui. Does anybody know how?

You should probably ask over there instead of here.

@loleg commented on GitHub (Sep 10, 2025):

I've pinged @dasjo, who says they are not using Ollama. If you're referring to https://oss.zuericitygpt.ch/, that is using publicai.co endpoints.

@loleg commented on GitHub (Sep 20, 2025):

I am monitoring this issue: thanks to the llama.cpp maintainers, progress is being made. Stay tuned, folks.

@pd95 commented on GitHub (Oct 3, 2025):

The feature request https://github.com/ggml-org/llama.cpp/issues/15748 has been resolved and Apertus support is merged into llama.cpp, so it is now possible to update/merge the changes into Ollama.

I've been doing this experimentally in my fork (available at https://github.com/pd95/ollama) and tested it by running an Apertus GGUF model locally on my Mac.
As I'm not sure I've done it all correctly, I've only created a draft pull request: https://github.com/ollama/ollama/pull/12488.

EDIT: How I tested it so far is described in [this gist](https://gist.github.com/pd95/7841bb5d15220773c4ca8666f024c7c9).
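
For anyone who wants to try the fork before it lands upstream, the rough steps would be (a sketch of a CPU-only source build; GPU builds need the extra steps in Ollama's docs/development.md, and the GGUF model name is only an example):

```console
$ git clone https://github.com/pd95/ollama && cd ollama
$ go build .
$ ./ollama serve &
$ ./ollama run hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M
```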

@loleg commented on GitHub (Oct 16, 2025):

Thanks for this, @pd95.

It's not quite straightforward to compile Ollama with GPU support from source, so I'm looking forward to a release of the Docker build.
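
In the meantime, the published Docker image can use an NVIDIA GPU directly (this assumes the NVIDIA Container Toolkit is installed; the command follows Ollama's Docker docs):

```console
$ docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```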

@rick-github commented on GitHub (Oct 16, 2025):

The model will be supported in 0.12.6. You can check it out using the release candidate.

$ docker pull ollama/ollama:0.12.6-rc0
$ docker compose up -d ollama
$ docker compose exec -it ollama ollama run hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M
>>> /set system respond in english.
Set system message.
>>> hello
Hello! How can I assist you today? If you have any questions or need information on a specific topic, feel free to ask.

>>> 

The template for this model is simple; it doesn't support thinking control or tool use.
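
To inspect the template yourself (a sketch using the standard `ollama show` flags; output omitted here):

```console
$ ollama show --template hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M
```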

@somera commented on GitHub (Oct 17, 2025):

@rick-github I updated Ollama to 0.12.6 and downloaded the same Apertus model. And it works.

[screenshot](https://github.com/user-attachments/assets/8afcf65e-9122-45a4-ba38-8f705c275fe4)

But why is the CPU usage high and GPU usage low?
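
A quick way to check how the load was split (illustrative output; the PROCESSOR column shows the CPU/GPU ratio):

```console
$ ollama ps
NAME                                                   ID            SIZE      PROCESSOR    UNTIL
hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M   87e7f0eb5a    5.2 GB    100% GPU     4 minutes from now
```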

@eXt73 commented on GitHub (Oct 17, 2025):

Under 0.12.6 the model is completely hallucinating... a total disaster.

[screenshot](https://github.com/user-attachments/assets/ff01d2f8-c5de-443e-b329-226971c4f220)

@eXt73 commented on GitHub (Oct 17, 2025):

Same in English; I use the new Ollama engine.

[screenshot](https://github.com/user-attachments/assets/31db1bfe-a548-40e3-8771-d4a134ded1b7)

@rick-github commented on GitHub (Oct 17, 2025):

A screenshot of your entire desktop is unnecessary. Just paste the text, and a [server log](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).
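
On a Linux install running under systemd, the server log can be pulled with something like this (per the troubleshooting doc linked above; the tail length is arbitrary):

```console
$ journalctl -u ollama --no-pager | tail -200
```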

@rick-github commented on GitHub (Oct 17, 2025):

$ ollama -v
ollama version is 0.12.6
$ ollama run hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M
>>> /set system respond in english.
Set system message.
>>> hello
Hello! How can I assist you today?

@eXt73 commented on GitHub (Oct 17, 2025):

time=2025-10-17T13:28:43.281+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-17T13:28:43.282+02:00 level=INFO source=images.go:522 msg="total blobs: 26"
time=2025-10-17T13:28:43.282+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-17T13:28:43.282+02:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-17T13:28:43.283+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-17T13:28:43.824+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-344f6cf8-eede-03b0-070c-285b5936ff5f library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=02:00.0 type=discrete total="23.9 GiB" available="23.4 GiB"
[GIN] 2025/10/17 - 13:28:46 | 200 |     146.154µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/17 - 13:29:02 | 200 |      17.747µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/17 - 13:29:02 | 200 |   38.022585ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,32]      = [40.750000, 31.625000, 22.875000, 16....
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,32]      = [166.000000, 174.000000, 128.000000, ...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,32]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,32]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 8B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 32
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 4096
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 21504
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 32
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 15
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-8B-Instruct-2509-GGUF/imatrix...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-8B-Instru...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 192
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  130 tensors
llama_model_loader: - type q4_K:  161 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.70 GiB (5.02 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.05 B
print_info: general.name     = Apertus-8B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
llama_model_load: vocab only - skipping tensors
time=2025-10-17T13:29:03.040+02:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-17T13:29:03.040+02:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d --port 37583"
time=2025-10-17T13:29:03.041+02:00 level=INFO source=server.go:505 msg="system memory" total="62.2 GiB" free="53.1 GiB" free_swap="512.0 MiB"
time=2025-10-17T13:29:03.041+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d library=CUDA parallel=1 required="5.2 GiB" gpus=1
time=2025-10-17T13:29:03.042+02:00 level=INFO source=server.go:545 msg=offload library=CUDA layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.2 GiB" memory.required.partial="5.2 GiB" memory.required.kv="128.0 MiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="420.0 MiB" memory.graph.full="85.3 MiB" memory.graph.partial="85.3 MiB"
time=2025-10-17T13:29:03.048+02:00 level=INFO source=runner.go:893 msg="starting go runner"
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-344f6cf8-eede-03b0-070c-285b5936ff5f
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
time=2025-10-17T13:29:03.212+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-10-17T13:29:03.212+02:00 level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:37583"
time=2025-10-17T13:29:03.218+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-17T13:29:03.218+02:00 level=INFO source=runner.go:828 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType:q4_0 NumThreads:24 GPULayers:33[ID:GPU-344f6cf8-eede-03b0-070c-285b5936ff5f Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-10-17T13:29:03.219+02:00 level=INFO source=server.go:1306 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory utilizing NVML memory reporting free: 25094782976 total: 25651314688
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090 Laptop GPU) (0000:02:00.0) - 23932 MiB free
llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,32]      = [40.750000, 31.625000, 22.875000, 16....
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,32]      = [166.000000, 174.000000, 128.000000, ...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,32]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,32]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 8B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 32
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 4096
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 21504
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 32
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 15
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-8B-Instruct-2509-GGUF/imatrix...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-8B-Instru...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 192
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  130 tensors
llama_model_loader: - type q4_K:  161 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.70 GiB (5.02 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 0
print_info: n_ctx_train      = 65536
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 21504
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 12000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 65536
print_info: rope_finetuned   = unknown
print_info: model type       = 8B
print_info: model params     = 8.05 B
print_info: general.name     = Apertus-8B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        CUDA0 model buffer size =  4528.05 MiB
load_tensors:   CPU_Mapped model buffer size =   288.00 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 12000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (65536) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.52 MiB
llama_kv_cache:      CUDA0 KV buffer size =   144.00 MiB
llama_kv_cache: size =  144.00 MiB (  4096 cells,  32 layers,  1/1 seqs), K (q4_0):   72.00 MiB, V (q4_0):   72.00 MiB
llama_context:      CUDA0 compute buffer size =   264.00 MiB
llama_context:  CUDA_Host compute buffer size =    50.01 MiB
llama_context: graph nodes  = 1095
llama_context: graph splits = 66
time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.18 seconds"
time=2025-10-17T13:29:04.222+02:00 level=INFO source=sched.go:482 msg="loaded runners" count=1
time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.18 seconds"
[GIN] 2025/10/17 - 13:29:04 | 200 |  1.658240338s |       127.0.0.1 | POST     "/api/generate"
time=2025-10-17T13:31:47.255+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-17T13:31:47.256+02:00 level=INFO source=images.go:522 msg="total blobs: 26"
time=2025-10-17T13:31:47.256+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-17T13:31:47.256+02:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-17T13:31:47.257+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-17T13:31:47.806+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-344f6cf8-eede-03b0-070c-285b5936ff5f library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=02:00.0 type=discrete total="23.9 GiB" available="23.4 GiB"
time=2025-10-17T13:32:02.131+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-17T13:32:02.132+02:00 level=INFO source=images.go:522 msg="total blobs: 26"
time=2025-10-17T13:32:02.132+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-17T13:32:02.132+02:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-17T13:32:02.133+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-17T13:32:02.762+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-344f6cf8-eede-03b0-070c-285b5936ff5f library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=02:00.0 type=discrete total="23.9 GiB" available="23.4 GiB"
llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,32]      = [40.750000, 31.625000, 22.875000, 16....
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,32]      = [166.000000, 174.000000, 128.000000, ...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,32]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,32]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 8B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 32
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 4096
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 21504
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 32
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 15
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-8B-Instruct-2509-GGUF/imatrix...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-8B-Instru...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 192
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  130 tensors
llama_model_loader: - type q4_K:  161 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.70 GiB (5.02 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.05 B
print_info: general.name     = Apertus-8B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
llama_model_load: vocab only - skipping tensors
time=2025-10-17T13:32:54.168+02:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-17T13:32:54.168+02:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d --port 44607"
time=2025-10-17T13:32:54.169+02:00 level=INFO source=server.go:505 msg="system memory" total="62.2 GiB" free="53.4 GiB" free_swap="512.0 MiB"
time=2025-10-17T13:32:54.169+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d library=CUDA parallel=1 required="5.2 GiB" gpus=1
time=2025-10-17T13:32:54.169+02:00 level=INFO source=server.go:545 msg=offload library=CUDA layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.2 GiB" memory.required.partial="5.2 GiB" memory.required.kv="128.0 MiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="420.0 MiB" memory.graph.full="85.3 MiB" memory.graph.partial="85.3 MiB"
time=2025-10-17T13:32:54.175+02:00 level=INFO source=runner.go:893 msg="starting go runner"
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-344f6cf8-eede-03b0-070c-285b5936ff5f
load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so
time=2025-10-17T13:32:54.323+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-10-17T13:32:54.323+02:00 level=INFO source=runner.go:929 msg="Server listening on 127.0.0.1:44607"
time=2025-10-17T13:32:54.325+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-17T13:32:54.325+02:00 level=INFO source=runner.go:828 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType:q4_0 NumThreads:24 GPULayers:33[ID:GPU-344f6cf8-eede-03b0-070c-285b5936ff5f Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-10-17T13:32:54.325+02:00 level=INFO source=server.go:1306 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory utilizing NVML memory reporting free: 25094782976 total: 25651314688
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090 Laptop GPU) (0000:02:00.0) - 23932 MiB free
llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,32]      = [40.750000, 31.625000, 22.875000, 16....
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,32]      = [166.000000, 174.000000, 128.000000, ...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,32]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,32]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-8B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 8B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 32
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 4096
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 21504
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 32
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 15
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-8B-Instruct-2509-GGUF/imatrix...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-8B-Instru...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 192
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  130 tensors
llama_model_loader: - type q4_K:  161 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.70 GiB (5.02 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 0
print_info: n_ctx_train      = 65536
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 21504
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 12000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 65536
print_info: rope_finetuned   = unknown
print_info: model type       = 8B
print_info: model params     = 8.05 B
print_info: general.name     = Apertus-8B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        CUDA0 model buffer size =  4528.05 MiB
load_tensors:   CPU_Mapped model buffer size =   288.00 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 12000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (65536) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.52 MiB
llama_kv_cache:      CUDA0 KV buffer size =   144.00 MiB
llama_kv_cache: size =  144.00 MiB (  4096 cells,  32 layers,  1/1 seqs), K (q4_0):   72.00 MiB, V (q4_0):   72.00 MiB
llama_context:      CUDA0 compute buffer size =   264.00 MiB
llama_context:  CUDA_Host compute buffer size =    50.01 MiB
llama_context: graph nodes  = 1095
llama_context: graph splits = 66
time=2025-10-17T13:32:55.328+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.16 seconds"
time=2025-10-17T13:32:55.328+02:00 level=INFO source=sched.go:482 msg="loaded runners" count=1
time=2025-10-17T13:32:55.328+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding"
time=2025-10-17T13:32:55.329+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.16 seconds"
[GIN] 2025/10/17 - 13:33:00 | 200 |   6.73328923s |       127.0.0.1 | POST     "/api/chat"
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 68 llama_model_loader: - kv 30: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 31: tokenizer.ggml.padding_token_id u32 = 3 llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 33: tokenizer.ggml.add_sep_token bool = false llama_model_loader: - kv 34: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 35: tokenizer.chat_template str = {# Unsloth template fixes #}\n{%- macr... llama_model_loader: - kv 36: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 37: general.quantization_version u32 = 2 llama_model_loader: - kv 38: general.file_type u32 = 15 llama_model_loader: - kv 39: quantize.imatrix.file str = Apertus-8B-Instruct-2509-GGUF/imatrix... llama_model_loader: - kv 40: quantize.imatrix.dataset str = unsloth_calibration_Apertus-8B-Instru... llama_model_loader: - kv 41: quantize.imatrix.entries_count u32 = 192 llama_model_loader: - kv 42: quantize.imatrix.chunks_count u32 = 143 llama_model_loader: - type f32: 130 tensors llama_model_loader: - type q4_K: 161 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.70 GiB (5.02 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: printing all EOG tokens: load: - 31 ('<reponame>') load: - 68 ('<|assistant_end|>') load: special tokens cache size = 1000 load: token to piece cache size = 0.8499 MB print_info: arch = apertus print_info: vocab_only = 0 print_info: n_ctx_train = 65536 print_info: n_embd = 4096 print_info: n_layer = 32 print_info: n_head = 32 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: is_swa_any = 0 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 4 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 21504 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 12000000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 65536 print_info: rope_finetuned = unknown print_info: model type = 8B print_info: model params = 8.05 B print_info: general.name = Apertus-8B-Instruct-2509 print_info: vocab type = BPE print_info: n_vocab = 131072 print_info: n_merges = 269443 print_info: BOS token = 1 '<s>' print_info: EOS token = 68 '<|assistant_end|>' print_info: UNK token = 0 '<unk>' print_info: PAD token = 3 '<pad>' print_info: LF token = 1010 'Ċ' print_info: FIM REP token = 31 '<reponame>' print_info: EOG token = 31 '<reponame>' print_info: EOG token = 68 '<|assistant_end|>' print_info: max token length = 150 load_tensors: loading model tensors, this can take a while... 
(mmap = true) load_tensors: offloading 32 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 33/33 layers to GPU load_tensors: CUDA0 model buffer size = 4528.05 MiB load_tensors: CPU_Mapped model buffer size = 288.00 MiB llama_init_from_model: model default pooling_type is [0], but [-1] was specified llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = enabled llama_context: kv_unified = false llama_context: freq_base = 12000000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (65536) -- the full capacity of the model will not be utilized llama_context: CUDA_Host output buffer size = 0.52 MiB llama_kv_cache: CUDA0 KV buffer size = 144.00 MiB llama_kv_cache: size = 144.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (q4_0): 72.00 MiB, V (q4_0): 72.00 MiB llama_context: CUDA0 compute buffer size = 264.00 MiB llama_context: CUDA_Host compute buffer size = 50.01 MiB llama_context: graph nodes = 1095 llama_context: graph splits = 66 time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.18 seconds" time=2025-10-17T13:29:04.222+02:00 level=INFO source=sched.go:482 msg="loaded runners" count=1 time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding" time=2025-10-17T13:29:04.222+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.18 seconds" [GIN] 2025/10/17 - 13:29:04 | 200 | 1.658240338s | 127.0.0.1 | POST "/api/generate" time=2025-10-17T13:31:47.255+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-10-17T13:31:47.256+02:00 level=INFO source=images.go:522 msg="total blobs: 26" time=2025-10-17T13:31:47.256+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0" time=2025-10-17T13:31:47.256+02:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)" time=2025-10-17T13:31:47.257+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..." 
time=2025-10-17T13:31:47.806+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-344f6cf8-eede-03b0-070c-285b5936ff5f library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=02:00.0 type=discrete total="23.9 GiB" available="23.4 GiB" time=2025-10-17T13:32:02.131+02:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE:q4_0 OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-10-17T13:32:02.132+02:00 level=INFO source=images.go:522 msg="total blobs: 26" time=2025-10-17T13:32:02.132+02:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0" time=2025-10-17T13:32:02.132+02:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)" time=2025-10-17T13:32:02.133+02:00 level=INFO source=runner.go:80 msg="discovering available GPUs..." time=2025-10-17T13:32:02.762+02:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-344f6cf8-eede-03b0-070c-285b5936ff5f library=CUDA compute=12.0 name=CUDA0 description="NVIDIA GeForce RTX 5090 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=02:00.0 type=discrete total="23.9 GiB" available="23.4 GiB" llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = apertus llama_model_loader: - kv 1: xielu.alpha_n arr[f32,32] = [40.750000, 31.625000, 22.875000, 16.... llama_model_loader: - kv 2: xielu.alpha_p arr[f32,32] = [166.000000, 174.000000, 128.000000, ... llama_model_loader: - kv 3: xielu.beta arr[f32,32] = [0.500000, 0.500000, 0.500000, 0.5000... llama_model_loader: - kv 4: xielu.eps arr[f32,32] = [-0.000001, -0.000001, -0.000001, -0.... 
llama_model_loader: - kv 5: general.type str = model llama_model_loader: - kv 6: general.name str = Apertus-8B-Instruct-2509 llama_model_loader: - kv 7: general.version str = 2509 llama_model_loader: - kv 8: general.finetune str = Instruct llama_model_loader: - kv 9: general.basename str = Apertus-8B-Instruct-2509 llama_model_loader: - kv 10: general.quantized_by str = Unsloth llama_model_loader: - kv 11: general.size_label str = 8B llama_model_loader: - kv 12: general.repo_url str = https://huggingface.co/unsloth llama_model_loader: - kv 13: apertus.block_count u32 = 32 llama_model_loader: - kv 14: apertus.context_length u32 = 65536 llama_model_loader: - kv 15: apertus.embedding_length u32 = 4096 llama_model_loader: - kv 16: apertus.feed_forward_length u32 = 21504 llama_model_loader: - kv 17: apertus.attention.head_count u32 = 32 llama_model_loader: - kv 18: apertus.attention.head_count_kv u32 = 8 llama_model_loader: - kv 19: apertus.rope.freq_base f32 = 12000000.000000 llama_model_loader: - kv 20: apertus.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 21: apertus.vocab_size u32 = 131072 llama_model_loader: - kv 22: apertus.rope.dimension_count u32 = 128 llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 24: tokenizer.ggml.pre str = tekken llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "<pad>", "[/... llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,269443] = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä... llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 68 llama_model_loader: - kv 30: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 31: tokenizer.ggml.padding_token_id u32 = 3 llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 33: tokenizer.ggml.add_sep_token bool = false llama_model_loader: - kv 34: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 35: tokenizer.chat_template str = {# Unsloth template fixes #}\n{%- macr... llama_model_loader: - kv 36: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 37: general.quantization_version u32 = 2 llama_model_loader: - kv 38: general.file_type u32 = 15 llama_model_loader: - kv 39: quantize.imatrix.file str = Apertus-8B-Instruct-2509-GGUF/imatrix... llama_model_loader: - kv 40: quantize.imatrix.dataset str = unsloth_calibration_Apertus-8B-Instru... 
llama_model_loader: - kv 41: quantize.imatrix.entries_count u32 = 192 llama_model_loader: - kv 42: quantize.imatrix.chunks_count u32 = 143 llama_model_loader: - type f32: 130 tensors llama_model_loader: - type q4_K: 161 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.70 GiB (5.02 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: printing all EOG tokens: load: - 31 ('<reponame>') load: - 68 ('<|assistant_end|>') load: special tokens cache size = 1000 load: token to piece cache size = 0.8499 MB print_info: arch = apertus print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 8.05 B print_info: general.name = Apertus-8B-Instruct-2509 print_info: vocab type = BPE print_info: n_vocab = 131072 print_info: n_merges = 269443 print_info: BOS token = 1 '<s>' print_info: EOS token = 68 '<|assistant_end|>' print_info: UNK token = 0 '<unk>' print_info: PAD token = 3 '<pad>' print_info: LF token = 1010 'Ċ' print_info: FIM REP token = 31 '<reponame>' print_info: EOG token = 31 '<reponame>' print_info: EOG token = 68 '<|assistant_end|>' print_info: max token length = 150 llama_model_load: vocab only - skipping tensors time=2025-10-17T13:32:54.168+02:00 level=INFO source=server.go:216 msg="enabling flash attention" time=2025-10-17T13:32:54.168+02:00 level=INFO source=server.go:400 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d --port 44607" time=2025-10-17T13:32:54.169+02:00 level=INFO source=server.go:505 msg="system memory" total="62.2 GiB" free="53.4 GiB" free_swap="512.0 MiB" time=2025-10-17T13:32:54.169+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d library=CUDA parallel=1 required="5.2 GiB" gpus=1 time=2025-10-17T13:32:54.169+02:00 level=INFO source=server.go:545 msg=offload library=CUDA layers.requested=-1 layers.model=33 layers.offload=33 layers.split=[33] memory.available="[23.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.2 GiB" memory.required.partial="5.2 GiB" memory.required.kv="128.0 MiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="420.0 MiB" memory.graph.full="85.3 MiB" memory.graph.partial="85.3 MiB" time=2025-10-17T13:32:54.175+02:00 level=INFO source=runner.go:893 msg="starting go runner" load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-alderlake.so ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 5090 Laptop GPU, compute capability 12.0, VMM: yes, ID: GPU-344f6cf8-eede-03b0-070c-285b5936ff5f load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v13/libggml-cuda.so time=2025-10-17T13:32:54.323+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc) time=2025-10-17T13:32:54.323+02:00 level=INFO 
source=runner.go:929 msg="Server listening on 127.0.0.1:44607" time=2025-10-17T13:32:54.325+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding" time=2025-10-17T13:32:54.325+02:00 level=INFO source=runner.go:828 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType:q4_0 NumThreads:24 GPULayers:33[ID:GPU-344f6cf8-eede-03b0-070c-285b5936ff5f Layers:33(0..32)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}" time=2025-10-17T13:32:54.325+02:00 level=INFO source=server.go:1306 msg="waiting for server to become available" status="llm server loading model" ggml_backend_cuda_device_get_memory utilizing NVML memory reporting free: 25094782976 total: 25651314688 llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090 Laptop GPU) (0000:02:00.0) - 23932 MiB free llama_model_loader: loaded meta data with 43 key-value pairs and 324 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-87e7f0eb5a1d33e1a8ea4e7fdd2363764e0465061dd92962e0073aec31a7944d (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = apertus llama_model_loader: - kv 1: xielu.alpha_n arr[f32,32] = [40.750000, 31.625000, 22.875000, 16.... llama_model_loader: - kv 2: xielu.alpha_p arr[f32,32] = [166.000000, 174.000000, 128.000000, ... llama_model_loader: - kv 3: xielu.beta arr[f32,32] = [0.500000, 0.500000, 0.500000, 0.5000... llama_model_loader: - kv 4: xielu.eps arr[f32,32] = [-0.000001, -0.000001, -0.000001, -0.... llama_model_loader: - kv 5: general.type str = model llama_model_loader: - kv 6: general.name str = Apertus-8B-Instruct-2509 llama_model_loader: - kv 7: general.version str = 2509 llama_model_loader: - kv 8: general.finetune str = Instruct llama_model_loader: - kv 9: general.basename str = Apertus-8B-Instruct-2509 llama_model_loader: - kv 10: general.quantized_by str = Unsloth llama_model_loader: - kv 11: general.size_label str = 8B llama_model_loader: - kv 12: general.repo_url str = https://huggingface.co/unsloth llama_model_loader: - kv 13: apertus.block_count u32 = 32 llama_model_loader: - kv 14: apertus.context_length u32 = 65536 llama_model_loader: - kv 15: apertus.embedding_length u32 = 4096 llama_model_loader: - kv 16: apertus.feed_forward_length u32 = 21504 llama_model_loader: - kv 17: apertus.attention.head_count u32 = 32 llama_model_loader: - kv 18: apertus.attention.head_count_kv u32 = 8 llama_model_loader: - kv 19: apertus.rope.freq_base f32 = 12000000.000000 llama_model_loader: - kv 20: apertus.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 21: apertus.vocab_size u32 = 131072 llama_model_loader: - kv 22: apertus.rope.dimension_count u32 = 128 llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 24: tokenizer.ggml.pre str = tekken llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "<pad>", "[/... llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,269443] = ["Ä  Ä ", "Ä  t", "e r", "i n", "Ä  Ä... 
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 68 llama_model_loader: - kv 30: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 31: tokenizer.ggml.padding_token_id u32 = 3 llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 33: tokenizer.ggml.add_sep_token bool = false llama_model_loader: - kv 34: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 35: tokenizer.chat_template str = {# Unsloth template fixes #}\n{%- macr... llama_model_loader: - kv 36: tokenizer.ggml.add_space_prefix bool = false llama_model_loader: - kv 37: general.quantization_version u32 = 2 llama_model_loader: - kv 38: general.file_type u32 = 15 llama_model_loader: - kv 39: quantize.imatrix.file str = Apertus-8B-Instruct-2509-GGUF/imatrix... llama_model_loader: - kv 40: quantize.imatrix.dataset str = unsloth_calibration_Apertus-8B-Instru... llama_model_loader: - kv 41: quantize.imatrix.entries_count u32 = 192 llama_model_loader: - kv 42: quantize.imatrix.chunks_count u32 = 143 llama_model_loader: - type f32: 130 tensors llama_model_loader: - type q4_K: 161 tensors llama_model_loader: - type q6_K: 33 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 4.70 GiB (5.02 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: printing all EOG tokens: load: - 31 ('<reponame>') load: - 68 ('<|assistant_end|>') load: special tokens cache size = 1000 load: token to piece cache size = 0.8499 MB print_info: arch = apertus print_info: vocab_only = 0 print_info: n_ctx_train = 65536 print_info: n_embd = 4096 print_info: n_layer = 32 print_info: n_head = 32 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: is_swa_any = 0 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 4 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 21504 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 12000000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 65536 print_info: rope_finetuned = unknown print_info: model type = 8B print_info: model params = 8.05 B print_info: general.name = Apertus-8B-Instruct-2509 print_info: vocab type = BPE print_info: n_vocab = 131072 print_info: n_merges = 269443 print_info: BOS token = 1 '<s>' print_info: EOS token = 68 '<|assistant_end|>' print_info: UNK token = 0 '<unk>' print_info: PAD token = 3 '<pad>' print_info: LF token = 1010 'Ċ' print_info: FIM REP token = 31 '<reponame>' print_info: EOG token = 31 '<reponame>' print_info: EOG token = 68 '<|assistant_end|>' print_info: max token length = 150 load_tensors: loading model tensors, this can take a while... 
(mmap = true) load_tensors: offloading 32 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 33/33 layers to GPU load_tensors: CUDA0 model buffer size = 4528.05 MiB load_tensors: CPU_Mapped model buffer size = 288.00 MiB llama_init_from_model: model default pooling_type is [0], but [-1] was specified llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = enabled llama_context: kv_unified = false llama_context: freq_base = 12000000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (65536) -- the full capacity of the model will not be utilized llama_context: CUDA_Host output buffer size = 0.52 MiB llama_kv_cache: CUDA0 KV buffer size = 144.00 MiB llama_kv_cache: size = 144.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (q4_0): 72.00 MiB, V (q4_0): 72.00 MiB llama_context: CUDA0 compute buffer size = 264.00 MiB llama_context: CUDA_Host compute buffer size = 50.01 MiB llama_context: graph nodes = 1095 llama_context: graph splits = 66 time=2025-10-17T13:32:55.328+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.16 seconds" time=2025-10-17T13:32:55.328+02:00 level=INFO source=sched.go:482 msg="loaded runners" count=1 time=2025-10-17T13:32:55.328+02:00 level=INFO source=server.go:1272 msg="waiting for llama runner to start responding" time=2025-10-17T13:32:55.329+02:00 level=INFO source=server.go:1310 msg="llama runner started in 1.16 seconds" [GIN] 2025/10/17 - 13:33:00 | 200 | 6.73328923s | 127.0.0.1 | POST "/api/chat" ```
Author
Owner

@rick-github commented on GitHub (Oct 17, 2025):

Please wrap the log in a markdown code block (``` before and after).


@rick-github commented on GitHub (Oct 17, 2025):

Put newlines between the markdown markers and the log.

```
log lines
```

@eXt73 commented on GitHub (Oct 17, 2025):

Done... and interestingly, Ollama's logs were no longer showing up via journalctl -u ollama; I had to redirect them in the systemd unit with:

StandardOutput=append:/var/log/ollama.log
StandardError=append:/var/log/ollama.log
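
For reference, a minimal sketch of where those directives go, assuming a systemd-managed install (the drop-in is typically created with `systemctl edit ollama`; the path below is the conventional one):

```
# /etc/systemd/system/ollama.service.d/override.conf
# Append Ollama's stdout/stderr to a log file (needs systemd >= 240).
[Service]
StandardOutput=append:/var/log/ollama.log
StandardError=append:/var/log/ollama.log
```

followed by `systemctl daemon-reload && systemctl restart ollama`.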


@rick-github commented on GitHub (Oct 17, 2025):

The problem appears to be `OLLAMA_KV_CACHE_TYPE=q4_0`. Turning flash attention off, or using a cache quantization of f16 or q8_0, works fine.
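
In systemd-unit terms, either of these sketches should avoid the failure (note that KV-cache quantization only takes effect when flash attention is enabled):

```
# Keep flash attention, but use a higher-precision KV cache:
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"

# ...or turn flash attention off entirely (KV cache then stays f16):
Environment="OLLAMA_FLASH_ATTENTION=0"
```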


@eXt73 commented on GitHub (Oct 17, 2025):

However, when using a parameterization other than:

Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q4_0"
Environment="OLLAMA_NEW_ESTIMATES=1"
Environment="OLLAMA_NEW_ENGINE=1"

VRAM usage "shoots into the universe", which is unacceptable; Mistral 3.2, Qwen 3, etc. have no problem with this. Is this due to an imperfect implementation of the model in llama.cpp/Ollama, or to the behaviour of the model itself?

P.S. As someone already wrote: despite the declaration of GPU use, it also uses the CPU, and the GPU only reaches about 50% utilization.
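
As an aside, a quick way to check whether any layers spilled to the CPU is `ollama ps`, which reports the CPU/GPU split per loaded model (output shape approximate):

```
$ ollama ps
NAME                                            ID            SIZE    PROCESSOR    UNTIL
hf.co/unsloth/Apertus-8B-Instruct-2509-GGUF:…   87e7f0eb5a1d  5.2 GB  100% GPU     4 minutes from now
```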


@somera commented on GitHub (Oct 17, 2025):

$ ollama -v
ollama version is 0.12.6
$ ollama run hf.co/redponike/Apertus-8B-Instruct-2509-GGUF:Q4_K_M

>>> /set system respond in english.
Set system message.
>>> hello
Hello! How can I assist you today?

I ran the same on an RTX A6000, and the CPU usage for just the `hello` prompt is:

[Image: CPU usage screenshot]

@loleg commented on GitHub (Oct 20, 2025):

Ollama version 0.12.6 is now out of RC and available as `latest`. I can see that ApertusForCausalLM is now supported in the main release. Getting Apertus into the [Model library](https://ollama.com/search?q=apertus) is a separate topic, and would probably first require GGUF support from swiss-ai.

I've tested with GGUF builds from [bartowski](https://huggingface.co/bartowski/swiss-ai_Apertus-8B-Instruct-2509-GGUF), [redponike](https://huggingface.co/redponike/Apertus-8B-Instruct-2509-GGUF) & [unsloth](https://huggingface.co/unsloth/Apertus-8B-Instruct-2509-GGUF); all are working fine.

Here is a screenshot of the Web interface, as an additional example:

[Image: Web interface screenshot]

As there has been discussion of performance: my RTX 2060S rarely goes past 50% utilization, and the modest i5 CPU only heats up when activating Web search or uploading large documents. This is as high as it goes on the bartowski Q4_K_M:

[Image: GPU/CPU utilization screenshot]

That's good, thanks! 🇨🇭


@pdevine commented on GitHub (Oct 20, 2025):

This is running on the legacy llama.cpp engine and not the Ollama engine, so it won't get the benefit of a lot of the new scheduling/memory-estimation work. New engine support is in PR #12607.
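
For those who want to test ahead of that PR, the server-config lines in the logs above show the relevant knob; a sketch of opting in (it has no effect for architectures the new engine doesn't support yet, which is why the logs above still show the legacy path):

```
Environment="OLLAMA_NEW_ENGINE=1"
```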


@rick-github commented on GitHub (Oct 23, 2025):

Modelfile for thinking and tool support. Tools are [not fully supported](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509/discussions/18#68d183dfcd73b0ba3811102c) by the model yet, so results vary. The tool support required a change in Ollama's tool-call parsing, so it only works on 0.12.7+. The model also doesn't do a lot of thinking: in the time I've been testing, it has only generated thought traces in a handful of cases.

```modelfile
FROM hf.co/unsloth/Apertus-8B-Instruct-2509-GGUF:Q4_K_M
TEMPLATE """<|system_start|>
{{- if .System }}
{{- .System }}
{{- else -}}
You are Apertus, a helpful assistant created by the SwissAI initiative.
Knowledge cutoff: 2024-04
Current date: {{ currentDate }}
{{- end -}}
<|system_end|><|developer_start|>Deliberation: {{ if and $.IsThinkSet $.Think }}enabled{{ else }}disabled{{ end }}
Tool Capabilities:
{{- if not $.Tools }} disabled
{{- else }}
{{- range $i, $tool := $.Tools }}
{{- $last := eq (len (slice $.Tools $i)) 1 }}
// {{ .Function.Description }}
type {{ .Function.Name }} =
{{- if and .Function.Parameters .Function.Parameters.Properties }} (_: {
{{- $comma := false }}
{{- range $name, $prop := .Function.Parameters.Properties }}
{{- if $comma }},{{ end }}
{{- if $prop.Description }}
// {{ $prop.Description }}
{{- end }}
{{ $name }}: {{ $prop | toTypeScriptType }}{{ $comma = true }}
{{- end }}
}) => any;
{{- else }} () => any;
{{- end -}}
{{- end -}}
{{- end -}}
<|developer_end|>

{{- $in_assistant := false }}
{{- $in_tool := false }}
{{- range $i, $message := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}

{{- if eq .Role "user" -}}
{{- if $in_tool }}{{ $in_tool = false }}]{{ end -}}
{{- if $in_assistant }}{{ $in_assistant = false }}<|assistant_end|>{{ end -}}
<|user_start|>
{{- .Content -}}
<|user_end|>
{{- end }}

{{- if eq .Role "assistant" -}}
{{- if not $in_assistant }}{{ $in_assistant = true }}<|assistant_start|>{{ end -}}
{{- if and $.IsThinkSet .Thinking -}}
<|inner_prefix|>
{{- .Thinking -}}
<|inner_suffix|>
{{- end }}
{{- .Content -}}
{{- if .ToolCalls -}}
<|tools_prefix|>[
{{- range $j, $_ := .ToolCalls }}{"{{ .Function.Name }}": "{{ .Function.Arguments }}"}{{ if ne (len (slice $message.ToolCalls $j)) 1 }}, {{end}}{{ end -}}
]<|tools_suffix|>
{{- end }}
{{- end }}

{{- if eq .Role "tool" -}}
{{- if $in_assistant }}
{{- if not $in_tool }}[{{ $in_tool = true }}{{ else }},{{ end -}}
{{- .Content -}}
{{- end }}
{{- end }}

{{- if $last }}
{{- if $in_tool }}{{ $in_tool = false }}]{{ end -}}
{{- if $in_assistant }}{{ $in_assistant = false }}<|assistant_end|>{{ end -}}
{{- if ne .Role "assistant" -}}
<|assistant_start|>
{{- end }}
{{- end }}

{{- end -}}
"""
PARAMETER stop <s>
PARAMETER stop <|system_start|>
PARAMETER stop <|system_end|>
PARAMETER stop <|developer_start|>
PARAMETER stop <|developer_end|>
PARAMETER stop <|assistant_end|>
PARAMETER stop <|user_start|>
PARAMETER stop <|user_end|>
PARAMETER stop <|assistant_start|>
```
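
For anyone trying it out, a minimal usage sketch (the file and model names here are arbitrary placeholders; the `--think` flag exists in recent Ollama releases):

```
# Save the Modelfile above as ./Modelfile.apertus, then:
ollama create apertus-tools -f Modelfile.apertus
ollama run apertus-tools --think
```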

@seanlinmt commented on GitHub (Oct 26, 2025):

With 0.12.6, I still get
Error: unsupported architecture "ApertusForCausalLM"

when attempting to quantize directly from a downloaded HF model:

ollama create --quantize q4_K_M Apertus-70B-Instruct-2509


@rick-github commented on GitHub (Oct 26, 2025):

Apertus is not an architecture currently supported by the safetensor import method. Use llama.cpp to quantize.
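
For reference, a rough sketch of that route, assuming a local llama.cpp checkout (script and binary names as found in recent llama.cpp releases; the local paths are placeholders):

```
# Convert the downloaded safetensors checkpoint to GGUF, then quantize it.
python convert_hf_to_gguf.py /path/to/Apertus-70B-Instruct-2509 --outfile apertus-70b-f16.gguf
./llama-quantize apertus-70b-f16.gguf apertus-70b-Q4_K_M.gguf Q4_K_M
```

The result can then be imported with a one-line Modelfile (`FROM ./apertus-70b-Q4_K_M.gguf`) and `ollama create`.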


@chrisoutwright commented on GitHub (Nov 8, 2025):

I'm experiencing difficulties running the unsloth/Apertus-70B-Instruct-2509-GGUF model (IQ4_NL variant) with OLLAMA_KV_CACHE_TYPE=q8_0 and a 1k context size. In contrast, I can run the DeepSeek-R1-Distill-Llama-70B model (IQ4_NL) without issues, even with much larger context sizes (>14k).


@rick-github commented on GitHub (Nov 8, 2025):

Define difficulties.


@chrisoutwright commented on GitHub (Nov 8, 2025):

> Define difficulties.

I cannot run it VRAM-only even at 1k context, with 2x 4090-class cards, for a 70B model at IQ4_NL... that is unusual.

Logs (trying 5k context with that model):

time=2025-11-08T23:47:02.655+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 63795"
time=2025-11-08T23:47:03.176+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 63801"
time=2025-11-08T23:47:03.746+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 63808"
time=2025-11-08T23:47:03.994+01:00 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2025-11-08T23:47:03.994+01:00 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=8 efficiency=0 threads=8
llama_model_loader: loaded meta data with 43 key-value pairs and 804 tensors from D:\Ollama\models\blobs\sha256-22000cacd20f1b7759e479ab2348b0ad17ea7f076e28680e8560bcdaf6a099d7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,80]      = [7.593750, 6.500000, 4.656250, 4.1250...
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,80]      = [2.796875, 11.125000, 7.000000, 5.968...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,80]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,80]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 70B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 80
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 8192
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 43008
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 64
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 25
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-70B-Instruct-2509-GGUF/imatri...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-70B-Instr...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 480
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  322 tensors
llama_model_loader: - type q4_K:    1 tensors
llama_model_loader: - type q5_K:   80 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_nl:  400 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_NL - 4.5 bpw
print_info: file size   = 37.33 GiB (4.54 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 70.60 B
print_info: general.name     = Apertus-70B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
llama_model_load: vocab only - skipping tensors
time=2025-11-08T23:47:04.369+01:00 level=INFO source=server.go:215 msg="enabling flash attention"
time=2025-11-08T23:47:04.370+01:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\Chris\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --model D:\\Ollama\\models\\blobs\\sha256-22000cacd20f1b7759e479ab2348b0ad17ea7f076e28680e8560bcdaf6a099d7 --port 63812"
time=2025-11-08T23:47:04.373+01:00 level=INFO source=server.go:470 msg="system memory" total="63.9 GiB" free="55.2 GiB" free_swap="71.6 GiB"
time=2025-11-08T23:47:04.374+01:00 level=INFO source=memory.go:53 msg="new model will fit in available VRAM, loading" model=D:\Ollama\models\blobs\sha256-22000cacd20f1b7759e479ab2348b0ad17ea7f076e28680e8560bcdaf6a099d7 library=CUDA parallel=1 required="41.4 GiB" gpus=2
time=2025-11-08T23:47:04.375+01:00 level=INFO source=server.go:522 msg=offload library=CUDA layers.requested=81 layers.model=81 layers.offload=81 layers.split="[41 40]" memory.available="[23.6 GiB 23.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="41.4 GiB" memory.required.partial="41.4 GiB" memory.required.kv="781.2 MiB" memory.required.allocations="[21.1 GiB 20.3 GiB]" memory.weights.total="36.8 GiB" memory.weights.repeating="35.9 GiB" memory.weights.nonrepeating="840.0 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.0 GiB"
time=2025-11-08T23:47:04.400+01:00 level=INFO source=runner.go:910 msg="starting go runner"
load_backend: loaded CPU backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, ID: GPU-971b407f-ae20-75ed-99c8-42c696057b0e
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes, ID: GPU-3752f260-9f8c-48e9-780e-12430a037c53
load_backend: loaded CUDA backend from C:\Users\Chris\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-11-08T23:47:04.486+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 CUDA.1.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.1.USE_GRAPHS=1 CUDA.1.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-11-08T23:47:04.486+01:00 level=INFO source=runner.go:946 msg="Server listening on 127.0.0.1:63812"
time=2025-11-08T23:47:04.491+01:00 level=INFO source=runner.go:845 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:5000 KvCacheType:q8_0 NumThreads:8 GPULayers:81[ID:GPU-971b407f-ae20-75ed-99c8-42c696057b0e Layers:41(0..40) ID:GPU-3752f260-9f8c-48e9-780e-12430a037c53 Layers:40(41..80)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-11-08T23:47:04.491+01:00 level=INFO source=server.go:1251 msg="waiting for llama runner to start responding"
time=2025-11-08T23:47:04.491+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_cuda_device_get_memory device GPU-971b407f-ae20-75ed-99c8-42c696057b0e utilizing NVML memory reporting free: 25314721792 total: 25757220864
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) (0000:02:00.0) - 24142 MiB free
ggml_backend_cuda_device_get_memory device GPU-3752f260-9f8c-48e9-780e-12430a037c53 utilizing NVML memory reporting free: 24650018816 total: 25769803776
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) (0000:01:00.0) - 23508 MiB free
llama_model_loader: loaded meta data with 43 key-value pairs and 804 tensors from D:\Ollama\models\blobs\sha256-22000cacd20f1b7759e479ab2348b0ad17ea7f076e28680e8560bcdaf6a099d7 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = apertus
llama_model_loader: - kv   1:                              xielu.alpha_n arr[f32,80]      = [7.593750, 6.500000, 4.656250, 4.1250...
llama_model_loader: - kv   2:                              xielu.alpha_p arr[f32,80]      = [2.796875, 11.125000, 7.000000, 5.968...
llama_model_loader: - kv   3:                                 xielu.beta arr[f32,80]      = [0.500000, 0.500000, 0.500000, 0.5000...
llama_model_loader: - kv   4:                                  xielu.eps arr[f32,80]      = [-0.000001, -0.000001, -0.000001, -0....
llama_model_loader: - kv   5:                               general.type str              = model
llama_model_loader: - kv   6:                               general.name str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv   7:                            general.version str              = 2509
llama_model_loader: - kv   8:                           general.finetune str              = Instruct
llama_model_loader: - kv   9:                           general.basename str              = Apertus-70B-Instruct-2509
llama_model_loader: - kv  10:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv  11:                         general.size_label str              = 70B
llama_model_loader: - kv  12:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv  13:                        apertus.block_count u32              = 80
llama_model_loader: - kv  14:                     apertus.context_length u32              = 65536
llama_model_loader: - kv  15:                   apertus.embedding_length u32              = 8192
llama_model_loader: - kv  16:                apertus.feed_forward_length u32              = 43008
llama_model_loader: - kv  17:               apertus.attention.head_count u32              = 64
llama_model_loader: - kv  18:            apertus.attention.head_count_kv u32              = 8
llama_model_loader: - kv  19:                     apertus.rope.freq_base f32              = 12000000.000000
llama_model_loader: - kv  20:   apertus.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  21:                         apertus.vocab_size u32              = 131072
llama_model_loader: - kv  22:               apertus.rope.dimension_count u32              = 128
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = tekken
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,131072]  = ["<unk>", "<s>", "</s>", "<pad>", "[/...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,131072]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,269443]  = ["Ġ Ġ", "Ġ t", "e r", "i n", "Ġ �...
llama_model_loader: - kv  28:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 68
llama_model_loader: - kv  30:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  31:            tokenizer.ggml.padding_token_id u32              = 3
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  33:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  34:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {# Unsloth template fixes #}\n{%- macr...
llama_model_loader: - kv  36:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  37:               general.quantization_version u32              = 2
llama_model_loader: - kv  38:                          general.file_type u32              = 25
llama_model_loader: - kv  39:                      quantize.imatrix.file str              = Apertus-70B-Instruct-2509-GGUF/imatri...
llama_model_loader: - kv  40:                   quantize.imatrix.dataset str              = unsloth_calibration_Apertus-70B-Instr...
llama_model_loader: - kv  41:             quantize.imatrix.entries_count u32              = 480
llama_model_loader: - kv  42:              quantize.imatrix.chunks_count u32              = 143
llama_model_loader: - type  f32:  322 tensors
llama_model_loader: - type q4_K:    1 tensors
llama_model_loader: - type q5_K:   80 tensors
llama_model_loader: - type q6_K:    1 tensors
llama_model_loader: - type iq4_nl:  400 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = IQ4_NL - 4.5 bpw
print_info: file size   = 37.33 GiB (4.54 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 31 ('<reponame>')
load:   - 68 ('<|assistant_end|>')
load: special tokens cache size = 1000
load: token to piece cache size = 0.8499 MB
print_info: arch             = apertus
print_info: vocab_only       = 0
print_info: n_ctx_train      = 65536
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 43008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 12000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 65536
print_info: rope_finetuned   = unknown
print_info: model type       = ?B
print_info: model params     = 70.60 B
print_info: general.name     = Apertus-70B-Instruct-2509
print_info: vocab type       = BPE
print_info: n_vocab          = 131072
print_info: n_merges         = 269443
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 68 '<|assistant_end|>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 3 '<pad>'
print_info: LF token         = 1010 'Ċ'
print_info: FIM REP token    = 31 '<reponame>'
print_info: EOG token        = 31 '<reponame>'
print_info: EOG token        = 68 '<|assistant_end|>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors:        CUDA0 model buffer size = 18862.60 MiB
load_tensors:        CUDA1 model buffer size = 18782.51 MiB
load_tensors:          CPU model buffer size =   576.00 MiB
llama_init_from_model: model default pooling_type is [0], but [-1] was specified
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 5000
llama_context: n_ctx_per_seq = 5000
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 12000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (5000) < n_ctx_train (65536) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.53 MiB
llama_kv_cache:      CUDA0 KV buffer size =   435.63 MiB
llama_kv_cache:      CUDA1 KV buffer size =   414.38 MiB
llama_kv_cache: size =  850.00 MiB (  5120 cells,  80 layers,  1/1 seqs), K (q8_0):  425.00 MiB, V (q8_0):  425.00 MiB
llama_context: pipeline parallelism enabled (n_copies=4)
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 14017.04 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 14697930752
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 13524.05 MiB on device 1: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 14180990976
ggml_cuda_host_malloc: failed to allocate 27004.05 MiB of pinned memory: out of memory
graph_reserve: failed to allocate compute buffers
llama_init_from_model: failed to initialize the context: failed to allocate compute pp buffers
panic: unable to create llama context

goroutine 52 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc0002b4a00, {0x51, 0x0, 0x0, {0xc0000fcc28, 0x2, 0x2}, 0xc000482190, 0x0}, {0xc0000a8000, ...}, ...)
        github.com/ollama/ollama/runner/llamarunner/runner.go:799 +0x353
created by github.com/ollama/ollama/runner/llamarunner.(*Server).load in goroutine 10
        github.com/ollama/ollama/runner/llamarunner/runner.go:879 +0x7ce
time=2025-11-08T23:47:15.678+01:00 level=INFO source=server.go:1285 msg="waiting for server to become available" status="llm server error"
time=2025-11-08T23:47:15.730+01:00 level=ERROR source=server.go:273 msg="llama runner terminated" error="exit status 2"
time=2025-11-08T23:47:15.928+01:00 level=INFO source=sched.go:453 msg="Load failed" model=D:\Ollama\models\blobs\sha256-22000cacd20f1b7759e479ab2348b0ad17ea7f076e28680e8560bcdaf6a099d7 error="llama runner process has terminated: cudaMalloc failed: out of memory"
[GIN] 2025/11/08 - 23:47:15 | 500 |   13.4065636s |    192.168.1.88 | POST     "/api/chat"
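A note on the failure mode in the log above: the weights themselves fit (18862 + 18782 MiB across the two 24 GB cards, plus ~850 MiB of q8_0 KV cache), but the pipeline-parallel compute buffers (n_copies=4) then request another ~14 GiB per device, far more than the ~4-5 GiB of headroom left after the weights are loaded. A rough workaround sketch follows, not a verified fix: the Hugging Face model reference and the parameter values are illustrative assumptions, adjust them to whatever was actually pulled.

```
# Hypothetical Modelfile: trade context length and offloaded layers for
# compute-buffer headroom. The hf.co repo/tag below are assumed.
cat > Modelfile <<'EOF'
FROM hf.co/unsloth/Apertus-70B-Instruct-2509-GGUF:IQ4_NL
PARAMETER num_ctx 4096
PARAMETER num_gpu 70
EOF
ollama create apertus-70b-lowvram -f Modelfile
ollama run apertus-70b-lowvram
```

Setting num_gpu below the full 81 keeps some layers on the CPU, which frees VRAM for the compute buffers at the cost of speed; whether these particular values leave enough headroom on this exact dual-GPU setup is untested.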
Author
Owner

@chrisoutwright commented on GitHub (Nov 9, 2025):

I could not get it to run GPU-only in my case with Ollama, so I opened https://github.com/ollama/ollama/issues/13025

Author
Owner

@AlecMRogers commented on GitHub (Dec 25, 2025):

I downloaded the safetensors from Hugging Face, converted them to GGUF using the llama.cpp Python scripts, and then created a quantized model using Ollama. I get the following failure when I try to run the model:

loading model hyperparameters: key not found in model: xielu.alpha_n

Any ideas where I went wrong? Or am I simply too early to be trying Apertus within Ollama?

I'm using macOS to run Ollama 0.13.5; the llama.cpp files were straight from GitHub.

Thanks,
-Alec

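The missing key is one of the XIELU activation arrays that a working Apertus GGUF carries in its metadata (compare kv 1-4 in the log further up: xielu.alpha_n, xielu.alpha_p, xielu.beta, xielu.eps). A quick way to check whether a conversion preserved them, sketched here with an assumed filename and using the gguf-py tooling that ships with llama.cpp:

```
# Assumed filename; gguf-dump comes from llama.cpp's gguf-py package.
pip install gguf
gguf-dump --no-tensors Apertus-70B-Instruct-2509-F16.gguf | grep xielu
# A complete file should list four arr[f32,80] entries:
#   xielu.alpha_n, xielu.alpha_p, xielu.beta, xielu.eps
```

If those entries are absent, the likely fix is re-running convert_hf_to_gguf.py from a llama.cpp checkout recent enough to include Apertus support, since older conversion scripts will not emit these keys.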
Author
Owner

@loleg commented on GitHub (Jan 15, 2026):

Hey @AlecMRogers, thanks for your question. It would also be great to see the command you're using and the Python and llama.cpp versions installed: could you attach a pip freeze to a Gist so we can troubleshoot?

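For anyone else filing a similar report, the information being requested boils down to roughly the following; the commands are a sketch and the llama.cpp checkout path is assumed:

```
# Collect the details requested above and paste them into a Gist.
pip freeze > environment.txt              # installed Python packages
python --version                          # Python version
git -C llama.cpp rev-parse --short HEAD   # llama.cpp commit used for conversion
# ...plus the exact convert/quantize commands that were run.
```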