[GH-ISSUE #12128] self-built Ollama not using GPU #54572

Closed
opened 2026-04-29 06:23:30 -05:00 by GiteaMirror · 13 comments

Originally created by @vegerot on GitHub (Aug 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12128

What is the issue?

When running a self-built Ollama, my GPU isn't being used.

Steps to reproduce:

  1. run curl https://ollama.com/install.sh | sh
  2. run ollama serve
  3. run ollama run <your-model-here>
  4. ASSERT you see these logs in stdout
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4
load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
# ...
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 30478 MiB free
  5. clone ollama
  6. run cmake -B build && cmake --build build --config Release && go run . serve
  7. run go run . run <your-model-here>
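
For steps 6 and 7 it can help to raise the server's log level so backend discovery is reported in more detail. A minimal sketch, assuming the OLLAMA_DEBUG variable that appears in the "server config" dump in the logs below:

# Sketch: run the self-built server with debug logging enabled.
# OLLAMA_DEBUG is the env var visible in the "server config" log line below.
OLLAMA_DEBUG=1 go run . serve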

Expected: should see the same logs in the output.

Actual: Only see

load_backend: loaded CPU backend from /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/libggml-cpu-haswell.so

and none of the other logs appear, and the GPU isn't used.
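
A quick way to tell whether the CUDA backend was produced at all is to look for libggml-cuda.so next to the CPU backends. A minimal check, assuming the default output directory of the cmake invocation above and the stock install location:

# Sketch: compare the self-built backend directory with the installed one.
ls build/lib/ollama | grep -i cuda || echo "no CUDA backend in the self-built tree"
ls /usr/local/lib/ollama | grep -i cuda   # installed tree: libggml-cuda.so expected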

Relevant log output

GOOD:

time=2025-08-30T16:12:17.246-07:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="29.8 GiB"
# ...
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4
load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
# ...
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 30483 MiB free

BAD:

time=2025-08-30T16:11:33.101-07:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="29.8 GiB"
# ...
load_backend: loaded CPU backend from /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/libggml-cpu-haswell.so

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

v0.11.8-2-g517807c

GiteaMirror added the bug label 2026-04-29 06:23:30 -05:00

@rick-github commented on GitHub (Aug 30, 2025):

Please share the full log from the server and the output of:

ls /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/

@vegerot commented on GitHub (Aug 30, 2025):

$ ls build/lib/ollama
libggml-base.so           libggml-cpu-icelake.so      libggml-cpu-sse42.so
libggml-cpu-alderlake.so  libggml-cpu-sandybridge.so  libggml-cpu-x64.so
libggml-cpu-haswell.so    libggml-cpu-skylakex.so
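
Note that the listing above contains only CPU backends: libggml-cuda.so was never built, which matches the CPU-only runner output below. A hedged re-configure sketch, assuming the CUDA toolkit is installed under /usr/local/cuda (adjust the path for your system):

# Sketch: reconfigure from a clean build dir so CMake re-detects CUDA.
# CMAKE_CUDA_COMPILER is a standard CMake hint; the toolkit path is an assumption.
rm -rf build
cmake -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
cmake --build build --config Release
ls build/lib/ollama    # libggml-cuda.so should now appear alongside the CPU backends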

GOOD:

$ ollama serve
time=2025-08-30T16:36:41.558-07:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/max/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-30T16:36:41.560-07:00 level=INFO source=images.go:477 msg="total blobs: 51"
time=2025-08-30T16:36:41.560-07:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-08-30T16:36:41.561-07:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.8)"
time=2025-08-30T16:36:41.561-07:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-30T16:36:41.780-07:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="29.6 GiB"
[GIN] 2025/08/30 - 16:36:53 | 200 |       42.09µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/08/30 - 16:36:53 | 200 |   46.357807ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 28 key-value pairs and 311 tensors from /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 0.6B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 0.6B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 1024
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 3072
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 16
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   28 tensors
llama_model_loader: - type q4_K:  155 tensors
llama_model_loader: - type q6_K:   15 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 492.75 MiB (5.50 BPW)
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 751.63 M
print_info: general.name     = Qwen3 0.6B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-30T16:36:53.712-07:00 level=INFO source=server.go:388 msg="starting runner" cmd="/usr/local/bin/ollama runner --model /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa --port 40679"
time=2025-08-30T16:36:53.723-07:00 level=INFO source=runner.go:864 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes, ID: GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4
load_backend: loaded CUDA backend from /usr/local/lib/ollama/libggml-cuda.so
load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
time=2025-08-30T16:36:53.788-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
time=2025-08-30T16:36:53.789-07:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:40679"
time=2025-08-30T16:36:53.858-07:00 level=INFO source=server.go:493 msg="system memory" total="47.0 GiB" free="37.1 GiB" free_swap="0 B"
time=2025-08-30T16:36:53.859-07:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa library=cuda parallel=1 required="1.5 GiB" gpus=1
time=2025-08-30T16:36:53.859-07:00 level=INFO source=server.go:533 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[29.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.5 GiB" memory.required.partial="1.5 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[1.5 GiB]" memory.weights.total="409.3 MiB" memory.weights.repeating="287.6 MiB" memory.weights.nonrepeating="121.7 MiB" memory.graph.full="149.3 MiB" memory.graph.partial="149.3 MiB"
time=2025-08-30T16:36:53.859-07:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:12 GPULayers:29[ID:GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-08-30T16:36:53.860-07:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-08-30T16:36:53.860-07:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 30329 MiB free
llama_model_loader: loaded meta data with 28 key-value pairs and 311 tensors from /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 0.6B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 0.6B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 1024
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 3072
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 16
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   28 tensors
llama_model_loader: - type q4_K:  155 tensors
llama_model_loader: - type q6_K:   15 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 492.75 MiB (5.50 BPW)
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 1024
print_info: n_layer          = 28
print_info: n_head           = 16
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 2
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 3072
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: model type       = 0.6B
print_info: model params     = 751.63 M
print_info: general.name     = Qwen3 0.6B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        CUDA0 model buffer size =   409.29 MiB
load_tensors:   CPU_Mapped model buffer size =    83.46 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.58 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =   448.00 MiB
llama_kv_cache_unified: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:      CUDA0 compute buffer size =   298.75 MiB
llama_context:  CUDA_Host compute buffer size =    14.01 MiB
llama_context: graph nodes  = 1098
llama_context: graph splits = 2
time=2025-08-30T16:36:54.863-07:00 level=INFO source=server.go:1274 msg="llama runner started in 1.15 seconds"
time=2025-08-30T16:36:54.863-07:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-30T16:36:54.863-07:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-08-30T16:36:54.863-07:00 level=INFO source=server.go:1274 msg="llama runner started in 1.15 seconds"
[GIN] 2025/08/30 - 16:36:54 | 200 |  1.674794201s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/08/30 - 16:36:55 | 200 |  272.672071ms |       127.0.0.1 | POST     "/api/chat"

BAD:

❯ go run . serve
time=2025-08-30T16:37:40.891-07:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/max/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-30T16:37:40.893-07:00 level=INFO source=images.go:477 msg="total blobs: 51"
time=2025-08-30T16:37:40.893-07:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-08-30T16:37:40.894-07:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-08-30T16:37:40.894-07:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-30T16:37:41.109-07:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="29.6 GiB"
[GIN] 2025/08/30 - 16:37:46 | 200 |       35.86µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/08/30 - 16:37:46 | 200 |   37.395976ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 28 key-value pairs and 311 tensors from /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 0.6B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 0.6B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 1024
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 3072
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 16
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   28 tensors
llama_model_loader: - type q4_K:  155 tensors
llama_model_loader: - type q6_K:   15 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 492.75 MiB (5.50 BPW)
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 751.63 M
print_info: general.name     = Qwen3 0.6B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-30T16:37:46.788-07:00 level=INFO source=server.go:398 msg="starting runner" cmd="/home/max/.cache/go-build/9c/9c997039e1a403dfd5d341e0fb130f9e633678bee2addc40206aaa189315976b-d/ollama runner --model /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa --port 36223"
time=2025-08-30T16:37:46.796-07:00 level=INFO source=runner.go:864 msg="starting go runner"
load_backend: loaded CPU backend from /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/libggml-cpu-haswell.so
time=2025-08-30T16:37:46.811-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-08-30T16:37:46.811-07:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:36223"
time=2025-08-30T16:37:46.923-07:00 level=INFO source=server.go:503 msg="system memory" total="47.0 GiB" free="37.8 GiB" free_swap="0 B"
time=2025-08-30T16:37:46.923-07:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa library=cuda parallel=1 required="1.5 GiB" gpus=1
time=2025-08-30T16:37:46.923-07:00 level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[29.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.5 GiB" memory.required.partial="1.5 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[1.5 GiB]" memory.weights.total="409.3 MiB" memory.weights.repeating="287.6 MiB" memory.weights.nonrepeating="121.7 MiB" memory.graph.full="149.3 MiB" memory.graph.partial="149.3 MiB"
time=2025-08-30T16:37:46.924-07:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:12 GPULayers:29[ID:GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}"
time=2025-08-30T16:37:46.924-07:00 level=INFO source=server.go:1246 msg="waiting for llama runner to start responding"
time=2025-08-30T16:37:46.924-07:00 level=INFO source=server.go:1280 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 28 key-value pairs and 311 tensors from /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 0.6B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 0.6B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 28
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 1024
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 3072
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 16
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  113 tensors
llama_model_loader: - type  f16:   28 tensors
llama_model_loader: - type q4_K:  155 tensors
llama_model_loader: - type q6_K:   15 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 492.75 MiB (5.50 BPW)
load: printing all EOG tokens:
load:   - 151643 ('<|endoftext|>')
load:   - 151645 ('<|im_end|>')
load:   - 151662 ('<|fim_pad|>')
load:   - 151663 ('<|repo_name|>')
load:   - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 1024
print_info: n_layer          = 28
print_info: n_head           = 16
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 2
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 3072
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: model type       = 0.6B
print_info: model params     = 751.63 M
print_info: general.name     = Qwen3 0.6B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors:   CPU_Mapped model buffer size =   492.75 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.58 MiB
llama_kv_cache_unified:        CPU KV buffer size =   448.00 MiB
llama_kv_cache_unified: size =  448.00 MiB (  4096 cells,  28 layers,  1/1 seqs), K (f16):  224.00 MiB, V (f16):  224.00 MiB
llama_context:        CPU compute buffer size =   300.75 MiB
llama_context: graph nodes  = 1098
llama_context: graph splits = 1
time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1284 msg="llama runner started in 0.39 seconds"
time=2025-08-30T16:37:47.175-07:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1246 msg="waiting for llama runner to start responding"
time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1284 msg="llama runner started in 0.39 seconds"
[GIN] 2025/08/30 - 16:37:47 | 200 |  920.772313ms |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/08/30 - 16:37:52 | 200 |  1.762292069s |       127.0.0.1 | POST     "/api/chat"
llama_model_loader: - kv 0: general.architecture str = qwen3 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen3 0.6B llama_model_loader: - kv 3: general.basename str = Qwen3 llama_model_loader: - kv 4: general.size_label str = 0.6B llama_model_loader: - kv 5: general.license str = apache-2.0 llama_model_loader: - kv 6: qwen3.block_count u32 = 28 llama_model_loader: - kv 7: qwen3.context_length u32 = 40960 llama_model_loader: - kv 8: qwen3.embedding_length u32 = 1024 llama_model_loader: - kv 9: qwen3.feed_forward_length u32 = 3072 llama_model_loader: - kv 10: qwen3.attention.head_count u32 = 16 llama_model_loader: - kv 11: qwen3.attention.head_count_kv u32 = 8 llama_model_loader: - kv 12: qwen3.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 13: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 14: qwen3.attention.key_length u32 = 128 llama_model_loader: - kv 15: qwen3.attention.value_length u32 = 128 llama_model_loader: - kv 16: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 17: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 18: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 20: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 25: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
llama_model_loader: - kv 26: general.quantization_version u32 = 2 llama_model_loader: - kv 27: general.file_type u32 = 15 llama_model_loader: - type f32: 113 tensors llama_model_loader: - type f16: 28 tensors llama_model_loader: - type q4_K: 155 tensors llama_model_loader: - type q6_K: 15 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 492.75 MiB (5.50 BPW) load: printing all EOG tokens: load: - 151643 ('<|endoftext|>') load: - 151645 ('<|im_end|>') load: - 151662 ('<|fim_pad|>') load: - 151663 ('<|repo_name|>') load: - 151664 ('<|file_sep|>') load: special tokens cache size = 26 load: token to piece cache size = 0.9311 MB print_info: arch = qwen3 print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 751.63 M print_info: general.name = Qwen3 0.6B print_info: vocab type = BPE print_info: n_vocab = 151936 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-08-30T16:37:46.788-07:00 level=INFO source=server.go:398 msg="starting runner" cmd="/home/max/.cache/go-build/9c/9c997039e1a403dfd5d341e0fb130f9e633678bee2addc40206aaa189315976b-d/ollama runner --model /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa --port 36223" time=2025-08-30T16:37:46.796-07:00 level=INFO source=runner.go:864 msg="starting go runner" load_backend: loaded CPU backend from /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/libggml-cpu-haswell.so time=2025-08-30T16:37:46.811-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc) time=2025-08-30T16:37:46.811-07:00 level=INFO source=runner.go:900 msg="Server listening on 127.0.0.1:36223" time=2025-08-30T16:37:46.923-07:00 level=INFO source=server.go:503 msg="system memory" total="47.0 GiB" free="37.8 GiB" free_swap="0 B" time=2025-08-30T16:37:46.923-07:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa library=cuda parallel=1 required="1.5 GiB" gpus=1 time=2025-08-30T16:37:46.923-07:00 level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[29.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.5 GiB" memory.required.partial="1.5 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[1.5 GiB]" memory.weights.total="409.3 MiB" memory.weights.repeating="287.6 MiB" memory.weights.nonrepeating="121.7 MiB" 
memory.graph.full="149.3 MiB" memory.graph.partial="149.3 MiB" time=2025-08-30T16:37:46.924-07:00 level=INFO source=runner.go:799 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:12 GPULayers:29[ID:GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:true}" time=2025-08-30T16:37:46.924-07:00 level=INFO source=server.go:1246 msg="waiting for llama runner to start responding" time=2025-08-30T16:37:46.924-07:00 level=INFO source=server.go:1280 msg="waiting for server to become available" status="llm server loading model" llama_model_loader: loaded meta data with 28 key-value pairs and 311 tensors from /home/max/.ollama/models/blobs/sha256-7f4030143c1c477224c5434f8272c662a8b042079a0a584f0a27a1684fe2e1fa (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = qwen3 llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Qwen3 0.6B llama_model_loader: - kv 3: general.basename str = Qwen3 llama_model_loader: - kv 4: general.size_label str = 0.6B llama_model_loader: - kv 5: general.license str = apache-2.0 llama_model_loader: - kv 6: qwen3.block_count u32 = 28 llama_model_loader: - kv 7: qwen3.context_length u32 = 40960 llama_model_loader: - kv 8: qwen3.embedding_length u32 = 1024 llama_model_loader: - kv 9: qwen3.feed_forward_length u32 = 3072 llama_model_loader: - kv 10: qwen3.attention.head_count u32 = 16 llama_model_loader: - kv 11: qwen3.attention.head_count_kv u32 = 8 llama_model_loader: - kv 12: qwen3.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 13: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001 llama_model_loader: - kv 14: qwen3.attention.key_length u32 = 128 llama_model_loader: - kv 15: qwen3.attention.value_length u32 = 128 llama_model_loader: - kv 16: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 17: tokenizer.ggml.pre str = qwen2 llama_model_loader: - kv 18: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 20: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 151645 llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 151643 llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 151643 llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = false llama_model_loader: - kv 25: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>... 
llama_model_loader: - kv 26: general.quantization_version u32 = 2 llama_model_loader: - kv 27: general.file_type u32 = 15 llama_model_loader: - type f32: 113 tensors llama_model_loader: - type f16: 28 tensors llama_model_loader: - type q4_K: 155 tensors llama_model_loader: - type q6_K: 15 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q4_K - Medium print_info: file size = 492.75 MiB (5.50 BPW) load: printing all EOG tokens: load: - 151643 ('<|endoftext|>') load: - 151645 ('<|im_end|>') load: - 151662 ('<|fim_pad|>') load: - 151663 ('<|repo_name|>') load: - 151664 ('<|file_sep|>') load: special tokens cache size = 26 load: token to piece cache size = 0.9311 MB print_info: arch = qwen3 print_info: vocab_only = 0 print_info: n_ctx_train = 40960 print_info: n_embd = 1024 print_info: n_layer = 28 print_info: n_head = 16 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: is_swa_any = 0 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 2 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-06 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 3072 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = -1 print_info: rope type = 2 print_info: rope scaling = linear print_info: freq_base_train = 1000000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 40960 print_info: rope_finetuned = unknown print_info: model type = 0.6B print_info: model params = 751.63 M print_info: general.name = Qwen3 0.6B print_info: vocab type = BPE print_info: n_vocab = 151936 print_info: n_merges = 151387 print_info: BOS token = 151643 '<|endoftext|>' print_info: EOS token = 151645 '<|im_end|>' print_info: EOT token = 151645 '<|im_end|>' print_info: PAD token = 151643 '<|endoftext|>' print_info: LF token = 198 'Ċ' print_info: FIM PRE token = 151659 '<|fim_prefix|>' print_info: FIM SUF token = 151661 '<|fim_suffix|>' print_info: FIM MID token = 151660 '<|fim_middle|>' print_info: FIM PAD token = 151662 '<|fim_pad|>' print_info: FIM REP token = 151663 '<|repo_name|>' print_info: FIM SEP token = 151664 '<|file_sep|>' print_info: EOG token = 151643 '<|endoftext|>' print_info: EOG token = 151645 '<|im_end|>' print_info: EOG token = 151662 '<|fim_pad|>' print_info: EOG token = 151663 '<|repo_name|>' print_info: EOG token = 151664 '<|file_sep|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... 
(mmap = true) load_tensors: CPU_Mapped model buffer size = 492.75 MiB llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: kv_unified = false llama_context: freq_base = 1000000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized llama_context: CPU output buffer size = 0.58 MiB llama_kv_cache_unified: CPU KV buffer size = 448.00 MiB llama_kv_cache_unified: size = 448.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 224.00 MiB, V (f16): 224.00 MiB llama_context: CPU compute buffer size = 300.75 MiB llama_context: graph nodes = 1098 llama_context: graph splits = 1 time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1284 msg="llama runner started in 0.39 seconds" time=2025-08-30T16:37:47.175-07:00 level=INFO source=sched.go:473 msg="loaded runners" count=1 time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1246 msg="waiting for llama runner to start responding" time=2025-08-30T16:37:47.175-07:00 level=INFO source=server.go:1284 msg="llama runner started in 0.39 seconds" [GIN] 2025/08/30 - 16:37:47 | 200 | 920.772313ms | 127.0.0.1 | POST "/api/generate" [GIN] 2025/08/30 - 16:37:52 | 200 | 1.762292069s | 127.0.0.1 | POST "/api/chat" ```
Author
Owner

@rick-github commented on GitHub (Aug 31, 2025):

```
$ ls build/lib/ollama
libggml-base.so           libggml-cpu-icelake.so      libggml-cpu-sse42.so
libggml-cpu-alderlake.so  libggml-cpu-sandybridge.so  libggml-cpu-x64.so
libggml-cpu-haswell.so    libggml-cpu-skylakex.so
```

No CUDA libraries built. Have you [installed](https://github.com/ollama/ollama/blob/main/docs/development.md#linux) the CUDA SDK?
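For anyone hitting the same wall, a quick way to confirm both halves of this (whether the toolkit is visible to the build, and whether a CUDA backend was actually produced) is something along these lines; a minimal sketch assuming a default Linux layout:

```sh
# Is nvcc reachable? CMake's CUDA detection needs a working CUDA compiler.
which nvcc && nvcc --version

# Did the build emit a GPU backend? A CUDA-enabled build places
# libggml-cuda.so alongside the CPU variants.
ls build/lib/ollama | grep -i cuda || echo "no CUDA backend built"
```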


@vegerot commented on GitHub (Aug 31, 2025):

@rick-github you were right that at the time I didn't have the CUDA SDK installed. Now I do and it still doesn't work :(

```sh
$ apt list --installed | grep cuda

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

cuda-cccl-13-0/unknown,now 13.0.50-1 amd64 [installed,automatic]
cuda-command-line-tools-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-compiler-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-crt-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-cudart-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-cudart-dev-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-culibos-dev-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-cuobjdump-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-cupti-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-cupti-dev-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-cuxxfilt-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-documentation-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-driver-dev-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-gdb-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-keyring/unknown,now 1.1-1 all [installed]
cuda-libraries-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-libraries-dev-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-nsight-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-nsight-compute-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-nsight-systems-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-nvcc-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-nvdisasm-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-nvml-dev-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-nvprune-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-nvrtc-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-nvrtc-dev-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-nvtx-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-opencl-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-profiler-api-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-sandbox-dev-13-0/unknown,now 13.0.39-1 amd64 [installed,automatic]
cuda-sanitizer-13-0/unknown,now 13.0.48-1 amd64 [installed,automatic]
cuda-toolkit-13-0-config-common/unknown,now 13.0.48-1 all [installed,automatic]
cuda-toolkit-13-0/unknown,now 13.0.0-1 amd64 [installed]
cuda-toolkit-13-config-common/unknown,now 13.0.48-1 all [installed,automatic]
cuda-toolkit-config-common/unknown,now 13.0.48-1 all [installed,automatic]
cuda-tools-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
cuda-visual-tools-13-0/unknown,now 13.0.0-1 amd64 [installed,automatic]
```
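A caveat worth flagging here: the package list looks complete, but the NVIDIA apt packages install `nvcc` under `/usr/local/cuda-13.0/bin` without adding it to `PATH`, and CMake's CUDA detection has to find a working `nvcc`. A quick check, with the version suffix assumed to match the packages above:

```sh
# nvcc is typically not on PATH after a plain apt install
which nvcc || ls /usr/local/cuda-13.0/bin/nvcc

# export it before (re)configuring so CMake can locate the CUDA compiler
export PATH=/usr/local/cuda-13.0/bin${PATH:+:${PATH}}
nvcc --version
```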

```sh
$ cmake -B build && cmake --build build --config Release
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Configuring done
-- Generating done
-- Build files have been written to: /home/max/workspace/github.com/ollama/ollama/build
[  7%] Built target ggml-base
[  8%] Built target ggml-cpu-x64-feats
[ 21%] Built target ggml-cpu-x64
[ 21%] Built target ggml-cpu-sse42-feats
[ 34%] Built target ggml-cpu-sse42
[ 34%] Built target ggml-cpu-sandybridge-feats
[ 47%] Built target ggml-cpu-sandybridge
[ 48%] Built target ggml-cpu-haswell-feats
[ 60%] Built target ggml-cpu-haswell
[ 60%] Built target ggml-cpu-skylakex-feats
[ 73%] Built target ggml-cpu-skylakex
[ 74%] Built target ggml-cpu-icelake-feats
[ 86%] Built target ggml-cpu-icelake
[ 87%] Built target ggml-cpu-alderlake-feats
[100%] Built target ggml-cpu-alderlake

$ ls build/lib/ollama
libggml-base.so           libggml-cpu-haswell.so  libggml-cpu-sandybridge.so  libggml-cpu-sse42.so
libggml-cpu-alderlake.so  libggml-cpu-icelake.so  libggml-cpu-skylakex.so     libggml-cpu-x64.so
```

@rick-github commented on GitHub (Aug 31, 2025):

```
$ cmake -B build --fresh && cmake --build build
```
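The reason `--fresh` (CMake ≥ 3.24) matters: the first configure ran before the SDK was installed and cached that result, so later builds reuse the stale detection instead of re-probing; notice the configure output above never prints a `-- Looking for a CUDA compiler` line at all. A sketch of the blunter manual equivalent, in case `--fresh` isn't available:

```sh
# inspect what the stale cache recorded about CUDA (path per the build above)
grep -i cuda build/CMakeCache.txt

# rough equivalent of --fresh on older CMake: drop the cache and reconfigure
rm -rf build
cmake -B build && cmake --build build --config Release
```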

@vegerot commented on GitHub (Sep 1, 2025):

@rick-github :(

time (cmake -B build --fresh && cmake --build build && go run . serve)
-- The C compiler identification is GNU 12.2.0
-- The CXX compiler identification is GNU 12.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Configuring done
-- Generating done
-- Build files have been written to: /home/max/workspace/github.com/ollama/ollama/build
[  1%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
[  2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
[  3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
[  4%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
[  5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
[  5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
[  6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
[  7%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so
[  7%] Built target ggml-base
[  8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[  8%] Built target ggml-cpu-x64-feats
[  9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o
[ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o
[ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 88 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 120 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 12%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o
[ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o
[ 13%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o
[ 14%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o
[ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o
[ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o
[ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o
[ 17%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o
[ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o
[ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 19%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o
[ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 21%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so
[ 21%] Built target ggml-cpu-x64
[ 21%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 21%] Built target ggml-cpu-sse42-feats
[ 22%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o
[ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o
[ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 88 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 120 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o
[ 25%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o
[ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o
[ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o
[ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o
[ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o
[ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o
[ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 32%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o
[ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 34%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so
[ 34%] Built target ggml-cpu-sse42
[ 34%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 34%] Built target ggml-cpu-sandybridge-feats
[ 35%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o
[ 36%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o
[ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 88 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 120 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o
[ 38%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o
[ 39%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o
[ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o
[ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o
[ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o
[ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o
[ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o
[ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o
[ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 45%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o
[ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 47%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so
[ 47%] Built target ggml-cpu-sandybridge
[ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 48%] Built target ggml-cpu-haswell-feats
[ 49%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o
[ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o
[ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o
[ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o
[ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o
[ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o
[ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o
[ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o
[ 56%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o
[ 57%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o
[ 57%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o
[ 58%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o
[ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 60%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so
[ 60%] Built target ggml-cpu-haswell
[ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 60%] Built target ggml-cpu-skylakex-feats
[ 61%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o
[ 62%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o
[ 63%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 64 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 63%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o
[ 64%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o
[ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o
[ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o
[ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o
[ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o
[ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o
[ 69%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o
[ 70%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 71%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o
[ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 73%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so
[ 73%] Built target ggml-cpu-skylakex
[ 74%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 74%] Built target ggml-cpu-icelake-feats
[ 75%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o
[ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o
[ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 64 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o
[ 78%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o
[ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o
[ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o
[ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o
[ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o
[ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o
[ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 85%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o
[ 86%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 86%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so
[ 86%] Built target ggml-cpu-icelake
[ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 87%] Built target ggml-cpu-alderlake-feats
[ 88%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 72 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘block_q4_0x4 make_block_q4_0x4(block_q4_0*, unsigned int)’,
    inlined from ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’ at /home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:39:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:959:19: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
  959 |             memcpy(&out.qs[dst_offset], &elems, sizeof(uint64_t));
      |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp: In function ‘int repack_q4_0_to_q4_0_4_bl(ggml_tensor*, int, const void*, size_t)’:
/home/max/workspace/github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:1150:20: note: at offset 104 into destination object ‘<anonymous>’ of size 72
 1150 |             *dst++ = make_block_q4_0x4(dst_tmp, interleave_block);
      |             ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o
[ 91%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o
[ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o
[ 95%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o
[ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o
[ 97%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 98%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o
[100%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o
[100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so
[100%] Built target ggml-cpu-alderlake
time=2025-08-31T22:24:11.818-07:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/max/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-08-31T22:24:11.820-07:00 level=INFO source=images.go:477 msg="total blobs: 58"
time=2025-08-31T22:24:11.821-07:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-08-31T22:24:11.821-07:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2025-08-31T22:24:11.821-07:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-08-31T22:24:12.082-07:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 library=cuda variant=v12 compute=12.0 driver=12.8 name="NVIDIA GeForce RTX 5090" total="31.4 GiB" available="29.5 GiB"

( cmake -B build --fresh && cmake --build build && go run . serve; )  128.47s user 6.69s system 100% cpu 2:14.64 total

❯ ls build/lib/ollama
libggml-base.so           libggml-cpu-haswell.so  libggml-cpu-sandybridge.so  libggml-cpu-sse42.so
libggml-cpu-alderlake.so  libggml-cpu-icelake.so  libggml-cpu-skylakex.so     libggml-cpu-x64.so
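
Note that there is no libggml-cuda.so in that listing, which lines up with the CPU-only load_backend line. A quick check (just a sketch):

ls build/lib/ollama | grep -i cuda || echo "no CUDA backend was built"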

@vegerot commented on GitHub (Sep 1, 2025):

@rick-github where is the code that determines whether or not to build libggml-cuda? Is there a more direct way of probing why it isn't being built?


@rick-github commented on GitHub (Sep 1, 2025):

-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND

What's the output of:

command -v nvcc
nvcc --version
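
(Those "Looking for a CUDA compiler" lines come from CMake's standard language probe, check_language(CUDA), in the top-level CMakeLists.txt. If you want to find the exact spot, something like this should turn it up from the repo root:)

# hypothetical search; check_language(CUDA) is CMake's built-in probe
grep -rn "check_language" CMakeLists.txt ml/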

@vegerot commented on GitHub (Sep 1, 2025):

@rick-github 🤦🏾‍♀️ looks like the CUDA toolkit .deb installs nvcc to /usr/local/cuda/bin but doesn't tell me to update my PATH :D
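
The fix is just to put the toolkit's bin directory on PATH before re-running cmake (a sketch, assuming the default .deb prefix):

# assuming the default CUDA .deb install prefix
export PATH=/usr/local/cuda/bin:$PATH
command -v nvcc   # should now print /usr/local/cuda/bin/nvcc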

Now I get

$ time (cmake -B build --fresh && cmake --build build && go run . serve)
-- The C compiler identification is GNU 12.2.0
-- The CXX compiler identification is GNU 12.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda/bin/nvcc
-- Found CUDAToolkit: /usr/local/cuda/include (found version "13.0.48")
-- CUDA Toolkit found
-- Using CUDA architectures: native
-- The CUDA compiler identification is NVIDIA 13.0.48
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Configuring done
CMake Error in ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt:
  CUDA_ARCHITECTURES is set to "native", but no GPU was detected.


CMake Error in ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt:
  CUDA_ARCHITECTURES is set to "native", but no GPU was detected.


-- Generating done
CMake Generate step failed.  Build files cannot be regenerated correctly.
( cmake -B build --fresh && cmake --build build && go run . serve; )  4.03s user 0.95s system 99% cpu 4.997 total
```

(to remind you, I do have a GPU and when I download `ollama` from your website it does use my 5090)
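CMake resolves the "native" architecture by probing the local GPU through the CUDA toolchain, so this error means the probe failed even though a GPU is present. One way to sidestep the probe without patching files is to pin the architecture at configure time; a sketch, assuming an RTX 5090 (compute capability 12.0, hence CMake architecture "120") and that the ggml sub-build honors a pre-set value. As the next comments show, the real culprit here was the CUDA 13 toolchain, so pinning alone may not be enough:

```sh
# Sketch: pin the CUDA architecture instead of relying on "native" detection.
# "120" corresponds to compute capability 12.0 (RTX 5090); adjust for your GPU.
cmake -B build --fresh -DCMAKE_CUDA_ARCHITECTURES=120 && cmake --build build
```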


@rick-github commented on GitHub (Sep 1, 2025):

```
-- The CUDA compiler identification is NVIDIA 13.0.48
```

13.* might be a bit too new - there are reports of issues with trying to use CUDA 13. Try a slightly older release: https://developer.nvidia.com/cuda-12-9-0-download-archive
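If 12.9 is installed alongside 13.x, the build can be pointed at the older toolkit without uninstalling anything; a sketch, assuming NVIDIA's versioned install prefixes:

```sh
# Sketch: prefer an older toolkit when several are installed side by side.
# NVIDIA's packages use versioned prefixes such as /usr/local/cuda-12.9.
export PATH=/usr/local/cuda-12.9/bin:$PATH
export CUDACXX=/usr/local/cuda-12.9/bin/nvcc  # CMake reads CUDACXX to pick the CUDA compiler
cmake -B build --fresh && cmake --build build
```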


@vegerot commented on GitHub (Sep 1, 2025):

@rick-github I was able to get it to work on the latest CUDA 13 by applying this patch:

```diff
diff --git a/CMakeLists.txt b/CMakeLists.txt
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -74,7 +74,7 @@
 check_language(CUDA)
 if(CMAKE_CUDA_COMPILER)
     if(CMAKE_VERSION VERSION_GREATER_EQUAL "3.24" AND NOT CMAKE_CUDA_ARCHITECTURES)
-        set(CMAKE_CUDA_ARCHITECTURES "native")
+        set(CMAKE_CUDA_ARCHITECTURES "120")
     endif()

     find_package(CUDAToolkit)
diff --git a/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt b/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt
--- a/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt
+++ b/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt
@@ -23,7 +23,7 @@
         # The default behavior for a non-native is to build virtual architectures as needed to cover all features needed
         #     for best performance and to also build real architectures for the most commonly used GPUs.
         if (GGML_NATIVE AND CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.6" AND CMAKE_VERSION VERSION_GREATER_EQUAL "3.24")
-            set(CMAKE_CUDA_ARCHITECTURES "native")
+            set(CMAKE_CUDA_ARCHITECTURES "120")
         elseif(GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
             if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.8")
                set(CMAKE_CUDA_ARCHITECTURES "60-virtual;61-virtual;70-virtual;75-virtual;80-virtual;86-real;89-real")
```

Now it works perfectly :D
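For readers with a different GPU, the replacement value is the compute capability with the dot dropped (12.0 -> "120", 8.9 -> "89"). One hedged way to look it up, assuming a driver recent enough to support the `compute_cap` query field:

```sh
# Sketch: map the local GPU to a CMake CUDA architecture number.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```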


@vegerot commented on GitHub (Sep 1, 2025):

@rick-github actually manually setting `CMAKE_CUDA_ARCHITECTURES` didn't work :(

I thought it worked because it built successfully, but it's still not using my GPU at runtime

```
time=2025-09-01T12:13:09.742-07:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/max/.ollama/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 library=cuda parallel=1 required="19.3 GiB" gpus=1
time=2025-09-01T12:13:09.744-07:00 level=INFO source=server.go:543 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split=[63] memory.available="[29.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="19.3 GiB" memory.required.partial="19.3 GiB" memory.required.kv="944.0 MiB" memory.required.allocations="[19.3 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="522.5 MiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-09-01T12:13:09.744-07:00 level=INFO source=runner.go:1101 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:12 GPULayers:63[ID:GPU-fe3687c8-f8a3-41fd-b53c-7ce1f2754cf4 Layers:63(0..62)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-01T12:13:09.794-07:00 level=INFO source=ggml.go:131 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
load_backend: loaded CPU backend from /home/max/workspace/github.com/ollama/ollama/build/lib/ollama/libggml-cpu-haswell.so
time=2025-09-01T12:13:09.798-07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-09-01T12:13:10.151-07:00 level=INFO source=ggml.go:487 msg="offloading 0 repeating layers to GPU"
time=2025-09-01T12:13:10.151-07:00 level=INFO source=ggml.go:491 msg="offloading output layer to CPU"
time=2025-09-01T12:13:10.151-07:00 level=INFO source=ggml.go:498 msg="offloaded 0/63 layers to GPU"
time=2025-09-01T12:13:10.151-07:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="17.3 GiB"
time=2025-09-01T12:13:10.151-07:00 level=INFO source=backend.go:326 msg="kv cache" device=CPU size="944.0 MiB"
```
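The tell in this log is that only the CPU backend loads. A quick hedged check, using the in-tree build layout shown earlier in the thread, is whether `libggml-cuda.so` was produced at all and whether its shared-library dependencies resolve:

```sh
# Sketch: confirm the CUDA backend was built and its dynamic deps resolve.
ls build/lib/ollama/ | grep -i cuda || echo "no CUDA backend was built"
ldd build/lib/ollama/libggml-cuda.so 2>/dev/null | grep -i "not found"
```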

@vegerot commented on GitHub (Sep 1, 2025):

Okay, as you suggested I downgraded to CUDA-12.9 and now it actually works! Thanks @rick-github for all your help <3
