[GH-ISSUE #10874] Llama 3 runs on GPU (fast). Llama 2 runs on CPU (slow) #53658

Closed
opened 2026-04-29 04:24:54 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @forthrin on GitHub (May 27, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10874

What is the issue?

  1. How can Llama2 be made to run fast?
  2. Why does it run slow (on CPU) in the first place?
< llama_kv_cache_unified: layer   0: dev = CPU
> llama_kv_cache_unified: layer   0: dev = Metal

Relevant log output

13c13
< level=DEBUG source=sched.go:228 msg="loading first model" model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
---
> level=DEBUG source=sched.go:228 msg="loading first model" model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
30c30
< level=INFO source=server.go:135 msg="system memory" total="8.0 GiB" free="5.6 GiB" free_swap="0 B"
---
> level=INFO source=server.go:135 msg="system memory" total="8.0 GiB" free="5.1 GiB" free_swap="0 B"
35c35
< level=INFO source=server.go:168 msg=offload library=metal layers.requested=-1 layers.model=33 layers.offload=25 layers.split="" memory.available="[5.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="5.2 GiB" memory.required.kv="2.0 GiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="3.5 GiB" memory.weights.repeating="3.4 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="296.0 MiB"
---
> level=INFO source=server.go:168 msg=offload library=metal layers.requested=-1 layers.model=33 layers.offload=32 layers.split="" memory.available="[5.3 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.5 GiB" memory.required.partial="5.1 GiB" memory.required.kv="512.0 MiB" memory.required.allocations="[5.1 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="296.0 MiB"
38c38
< llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from ~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
---
> llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from ~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
41,48c41,48
< llama_model_loader: - kv 1: general.name str = LLaMA v2
< llama_model_loader: - kv 2: llama.context_length u32 = 4096
< llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
< llama_model_loader: - kv 4: llama.block_count u32 = 32
< llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
< llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
< llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
< llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
---
> llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
> llama_model_loader: - kv 2: llama.block_count u32 = 32
> llama_model_loader: - kv 3: llama.context_length u32 = 8192
> llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
> llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
> llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
> llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
> llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
51,62c51,61
< llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
< llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
< llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
< llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
< llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
< llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
< llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
< llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
< llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
< llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
< llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
< llama_model_loader: - kv 22: general.quantization_version u32 = 2
---
> llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
> llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
> llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
> llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
> llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
> llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
> llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
> llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
> llama_model_loader: - kv 21: general.quantization_version u32 = 2
68,72c67,70
< print_info: file size = 3.56 GiB (4.54 BPW) 
< init_tokenizer: initializing tokenizer for type 1
< load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
< load: special tokens cache size = 3
< load: token to piece cache size = 0.1684 MB
---
> print_info: file size = 4.33 GiB (4.64 BPW) 
> init_tokenizer: initializing tokenizer for type 2
> load: special tokens cache size = 256
> load: token to piece cache size = 0.8000 MB
76,86c74,84
< print_info: model params = 6.74 B
< print_info: general.name = LLaMA v2
< print_info: vocab type = SPM
< print_info: n_vocab = 32000
< print_info: n_merges = 0
< print_info: BOS token = 1 '<s>'
< print_info: EOS token = 2 '</s>'
< print_info: UNK token = 0 '<unk>'
< print_info: LF token = 13 '<0x0A>'
< print_info: EOG token = 2 '</s>'
< print_info: max token length = 48
---
> print_info: model params = 8.03 B
> print_info: general.name = Meta-Llama-3-8B-Instruct
> print_info: vocab type = BPE
> print_info: n_vocab = 128256
> print_info: n_merges = 280147
> print_info: BOS token = 128000 '<|begin_of_text|>'
> print_info: EOS token = 128009 '<|eot_id|>'
> print_info: EOT token = 128009 '<|eot_id|>'
> print_info: LF token = 198 'Ċ'
> print_info: EOG token = 128009 '<|eot_id|>'
> print_info: max token length = 256
88c86
< level=INFO source=server.go:431 msg="starting llama server" cmd="/opt/homebrew/Cellar/ollama/0.7.1/bin/ollama runner --model ~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 --ctx-size 4096 --batch-size 512 --n-gpu-layers 25 --threads 4 --no-mmap --parallel 1 --port 49586"
---
> level=INFO source=server.go:431 msg="starting llama server" cmd="/opt/homebrew/Cellar/ollama/0.7.1/bin/ollama runner --model ~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 4096 --batch-size 512 --n-gpu-layers 32 --threads 4 --no-mmap --parallel 1 --port 49582"
96c94
< level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:49586"
---
> level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:49582"
98c96
< llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from ~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
---
> llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from ~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
101,108c99,106
< llama_model_loader: - kv 1: general.name str = LLaMA v2
< llama_model_loader: - kv 2: llama.context_length u32 = 4096
< llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
< llama_model_loader: - kv 4: llama.block_count u32 = 32
< llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
< llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
< llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
< llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
---
> llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
> llama_model_loader: - kv 2: llama.block_count u32 = 32
> llama_model_loader: - kv 3: llama.context_length u32 = 8192
> llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
> llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
> llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
> llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
> llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
111,122c109,119
< llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
< llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
< llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
< llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
< llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
< llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
< llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
< llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
< llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
< llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
< llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
< llama_model_loader: - kv 22: general.quantization_version u32 = 2
---
> llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
> llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
> llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
> llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
> llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
> llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
> llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
> llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
> llama_model_loader: - kv 21: general.quantization_version u32 = 2
128,132c125,128
< print_info: file size = 3.56 GiB (4.54 BPW) 
< init_tokenizer: initializing tokenizer for type 1
< load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
< load: special tokens cache size = 3
< load: token to piece cache size = 0.1684 MB
---
> print_info: file size = 4.33 GiB (4.64 BPW) 
> init_tokenizer: initializing tokenizer for type 2
> load: special tokens cache size = 256
> load: token to piece cache size = 0.8000 MB
135c131
< print_info: n_ctx_train = 4096
---
> print_info: n_ctx_train = 8192
139c135
< print_info: n_head_kv = 32
---
> print_info: n_head_kv = 8
145,147c141,143
< print_info: n_gqa = 1
< print_info: n_embd_k_gqa = 4096
< print_info: n_embd_v_gqa = 4096
---
> print_info: n_gqa = 4
> print_info: n_embd_k_gqa = 1024
> print_info: n_embd_v_gqa = 1024
154c150
< print_info: n_ff = 11008
---
> print_info: n_ff = 14336
161c157
< print_info: freq_base_train = 10000.0
---
> print_info: freq_base_train = 500000.0
163c159
< print_info: n_ctx_orig_yarn = 4096
---
> print_info: n_ctx_orig_yarn = 8192
170,181c166,177
< print_info: model type = 7B
< print_info: model params = 6.74 B
< print_info: general.name = LLaMA v2
< print_info: vocab type = SPM
< print_info: n_vocab = 32000
< print_info: n_merges = 0
< print_info: BOS token = 1 '<s>'
< print_info: EOS token = 2 '</s>'
< print_info: UNK token = 0 '<unk>'
< print_info: LF token = 13 '<0x0A>'
< print_info: EOG token = 2 '</s>'
< print_info: max token length = 48
---
> print_info: model type = 8B
> print_info: model params = 8.03 B
> print_info: general.name = Meta-Llama-3-8B-Instruct
> print_info: vocab type = BPE
> print_info: n_vocab = 128256
> print_info: n_merges = 280147
> print_info: BOS token = 128000 '<|begin_of_text|>'
> print_info: EOS token = 128009 '<|eot_id|>'
> print_info: EOT token = 128009 '<|eot_id|>'
> print_info: LF token = 198 'Ċ'
> print_info: EOG token = 128009 '<|eot_id|>'
> print_info: max token length = 256
183,189c179,185
< load_tensors: layer 0 assigned to device CPU, is_swa = 0
< load_tensors: layer 1 assigned to device CPU, is_swa = 0
< load_tensors: layer 2 assigned to device CPU, is_swa = 0
< load_tensors: layer 3 assigned to device CPU, is_swa = 0
< load_tensors: layer 4 assigned to device CPU, is_swa = 0
< load_tensors: layer 5 assigned to device CPU, is_swa = 0
< load_tensors: layer 6 assigned to device CPU, is_swa = 0
---
> load_tensors: layer 0 assigned to device Metal, is_swa = 0
> load_tensors: layer 1 assigned to device Metal, is_swa = 0
> load_tensors: layer 2 assigned to device Metal, is_swa = 0
> load_tensors: layer 3 assigned to device Metal, is_swa = 0
> load_tensors: layer 4 assigned to device Metal, is_swa = 0
> load_tensors: layer 5 assigned to device Metal, is_swa = 0
> load_tensors: layer 6 assigned to device Metal, is_swa = 0
216,219c212,215
< load_tensors: offloading 25 repeating layers to GPU
< load_tensors: offloaded 25/33 layers to GPU
< load_tensors: CPU model buffer size = 933.02 MiB
< load_tensors: Metal model buffer size = 2714.84 MiB
---
> load_tensors: offloading 32 repeating layers to GPU
> load_tensors: offloaded 32/33 layers to GPU
> load_tensors: CPU model buffer size = 692.80 MiB
> load_tensors: Metal model buffer size = 3745.00 MiB
222c218
< level=DEBUG source=server.go:636 msg="model load progress 0.10"
---
> level=DEBUG source=server.go:636 msg="model load progress 0.06"
224,228c220,230
< level=DEBUG source=server.go:636 msg="model load progress 0.26"
< level=DEBUG source=server.go:636 msg="model load progress 0.40"
< level=DEBUG source=server.go:636 msg="model load progress 0.56"
< level=DEBUG source=server.go:636 msg="model load progress 0.70"
< level=DEBUG source=server.go:636 msg="model load progress 0.85"
---
> level=DEBUG source=server.go:636 msg="model load progress 0.17"
> level=DEBUG source=server.go:636 msg="model load progress 0.25"
> level=DEBUG source=server.go:636 msg="model load progress 0.33"
> level=DEBUG source=server.go:636 msg="model load progress 0.41"
> level=DEBUG source=server.go:636 msg="model load progress 0.48"
> level=DEBUG source=server.go:636 msg="model load progress 0.57"
> level=DEBUG source=server.go:636 msg="model load progress 0.65"
> level=DEBUG source=server.go:636 msg="model load progress 0.75"
> level=DEBUG source=server.go:636 msg="model load progress 0.82"
> level=DEBUG source=server.go:636 msg="model load progress 0.89"
> level=DEBUG source=server.go:636 msg="model load progress 0.98"
237c239
< llama_context: freq_base = 10000.0
---
> llama_context: freq_base = 500000.0
238a241
> llama_context: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
241d243
< level=DEBUG source=server.go:636 msg="model load progress 1.00"
615c617
< llama_context: CPU output buffer size = 0.14 MiB
---
> llama_context: CPU output buffer size = 0.50 MiB
618,624c620,626
< llama_kv_cache_unified: layer 0: dev = CPU
< llama_kv_cache_unified: layer 1: dev = CPU
< llama_kv_cache_unified: layer 2: dev = CPU
< llama_kv_cache_unified: layer 3: dev = CPU
< llama_kv_cache_unified: layer 4: dev = CPU
< llama_kv_cache_unified: layer 5: dev = CPU
< llama_kv_cache_unified: layer 6: dev = CPU
---
> llama_kv_cache_unified: layer 0: dev = Metal
> llama_kv_cache_unified: layer 1: dev = Metal
> llama_kv_cache_unified: layer 2: dev = Metal
> llama_kv_cache_unified: layer 3: dev = Metal
> llama_kv_cache_unified: layer 4: dev = Metal
> llama_kv_cache_unified: layer 5: dev = Metal
> llama_kv_cache_unified: layer 6: dev = Metal
650,653c652,654
< llama_kv_cache_unified: CPU KV buffer size = 448.00 MiB
< level=DEBUG source=server.go:639 msg="model load completed, waiting for server to become available" status="llm server loading model"
< llama_kv_cache_unified: Metal KV buffer size = 1600.00 MiB
< llama_kv_cache_unified: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
---
> level=DEBUG source=server.go:636 msg="model load progress 1.00"
> llama_kv_cache_unified: Metal KV buffer size = 512.00 MiB
> llama_kv_cache_unified: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
662c663
< llama_context: CPU compute buffer size = 296.01 MiB
---
> llama_context: CPU compute buffer size = 250.50 MiB
664,666c665,667
< llama_context: graph splits = 115 (with bs=512), 3 (with bs=1)
< level=INFO source=server.go:630 msg="llama runner started in 2.52 seconds"
< level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096
---
> llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
> level=INFO source=server.go:630 msg="llama runner started in 3.77 seconds"
> level=DEBUG source=sched.go:495 msg="finished setting up" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096
668,669c669,670
< level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096 duration=5m0s
< level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096 refCount=0
---
> level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096 duration=5m0s
> level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096 refCount=0
671,677c672,677
< level=DEBUG source=sched.go:615 msg="evaluating already loaded" model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
< level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=48 format=""
< level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=25 used=0 remaining=25
< level=DEBUG source=sched.go:434 msg="context for request finished" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096
< level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096 duration=5m0s
< level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama2:latest runner.inference=metal runner.devices=1 runner.size="6.5 GiB" runner.vram="5.2 GiB" runner.parallel=1 runner.pid=95933 runner.model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 runner.num_ctx=4096 refCount=0
< level=DEBUG source=sched.go:322 msg="shutting down scheduler completed loop"
---
> level=DEBUG source=sched.go:615 msg="evaluating already loaded" model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
> level=DEBUG source=server.go:729 msg="completion request" images=0 prompt=114 format=""
> level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=14 used=0 remaining=14
> level=DEBUG source=sched.go:434 msg="context for request finished" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096
> level=DEBUG source=sched.go:343 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096 duration=5m0s
> level=DEBUG source=sched.go:361 msg="after processing request finished event" runner.name=registry.ollama.ai/library/llama3:latest runner.inference=metal runner.devices=1 runner.size="5.5 GiB" runner.vram="5.1 GiB" runner.parallel=1 runner.pid=95925 runner.model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa runner.num_ctx=4096 refCount=0
679,682c679,683
< level=DEBUG source=sched.go:872 msg="shutting down runner" model=~/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246
< level=DEBUG source=server.go:1023 msg="stopping llama server" pid=95933
< level=DEBUG source=server.go:1029 msg="waiting for llama server to exit" pid=95933
< level=DEBUG source=server.go:1033 msg="llama server stopped" pid=95933
---
> level=DEBUG source=sched.go:872 msg="shutting down runner" model=~/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
> level=DEBUG source=sched.go:322 msg="shutting down scheduler completed loop"
> level=DEBUG source=server.go:1023 msg="stopping llama server" pid=95925
> level=DEBUG source=server.go:1029 msg="waiting for llama server to exit" pid=95925
> level=DEBUG source=server.go:1033 msg="llama server stopped" pid=95925

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.7.1 (Homebrew)

GiteaMirror added the bug label 2026-04-29 04:24:54 -05:00
Author
Owner

@rick-github commented on GitHub (May 27, 2025):

< memory.required.kv="2.0 GiB" 
---
> memory.required.kv="512.0 MiB" 

llama2 requires more KV cache: Llama 2 7B uses full multi-head attention (n_head_kv = 32, n_gqa = 1), while Llama 3 8B uses grouped-query attention (n_head_kv = 8, n_gqa = 4), so at the same context length Llama 2's KV cache is four times larger (2.0 GiB vs 512 MiB in the logs above). That pushes the required allocation past available VRAM, which causes some of the layers to be offloaded to the CPU. Reducing the size of the context window (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size) will make the VRAM footprint smaller and allow more layers on the GPU.
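
The 4x difference can be sanity-checked from the print_info values in the logs: an f16 KV cache stores one K and one V vector per layer per cached token, so it needs roughly n_layers * (n_embd_k_gqa + n_embd_v_gqa) * n_ctx * 2 bytes. A quick shell check (a back-of-the-envelope sketch of the usual llama.cpp sizing, not Ollama's exact accounting):

# f16 KV cache: n_layers * (n_embd_k_gqa + n_embd_v_gqa) * n_ctx * 2 bytes
echo $(( 32 * (4096 + 4096) * 4096 * 2 / 1024 / 1024 ))   # Llama 2: 2048 MiB
echo $(( 32 * (1024 + 1024) * 4096 * 2 / 1024 / 1024 ))   # Llama 3:  512 MiB

Both numbers match the "KV self size" lines in the log output above.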

OLLAMA_CONTEXT_LENGTH=4096 (default):

NAME             ID              SIZE      PROCESSOR    UNTIL   
llama2:latest    78e26419b446    6.9 GB    100% GPU     Forever    
llama3:latest    365c0bd3c000    5.8 GB    100% GPU     Forever    

OLLAMA_CONTEXT_LENGTH=2048:

NAME             ID              SIZE      PROCESSOR    UNTIL   
llama2:latest    78e26419b446    5.6 GB    100% GPU     Forever    
llama3:latest    365c0bd3c000    5.5 GB    100% GPU     Forever    

You can also experiment with flash attention (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-enable-flash-attention) and K/V cache quantization (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-set-the-quantization-type-for-the-kv-cache).
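
Both are set via server environment variables; a sketch (flash attention must be enabled for K/V cache quantization to take effect, and q8_0 roughly halves the f16 cache size at a small quality cost):

OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve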

Author
Owner

@forthrin commented on GitHub (May 27, 2025):

OLLAMA_CONTEXT_LENGTH=2048 made Llama 2 run fast. The console should have tipped the user off, for example:

Insufficient memory for GPU acceleration. Try: OLLAMA_CONTEXT_LENGTH=2048 ollama serve

Or add something like an OLLAMA_PREFER_SPEED_OVER_QUALITY option that takes care of this automatically.
