[GH-ISSUE #11627] Bad VRAM management in v.0.10.1 #7681

Closed
opened 2026-04-12 19:47:08 -05:00 by GiteaMirror · 10 comments

Originally created by @adcape on GitHub (Aug 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11627

What is the issue?

Environment: 2 x RTX 5060 Ti GPUs with 32 GB of VRAM in total, Fedora 42.
On v0.9.5 it was possible to run the Qwen3-32b:Q4_K_M model with all 65 layers in VRAM and a num_ctx of 20000+.
On v0.10.1, with the very same model and setup, num_ctx has to be cut to 5000 to avoid out-of-VRAM errors, which effectively renders the model useless.
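
For anyone reproducing this: the settings above can be passed per request through the API options. Below is a minimal sketch of such a request, assuming the standard Ollama endpoint on localhost:11434; the prompt is just a placeholder, and this is not necessarily the exact client that was used here.

```go
// Sketch of a request with the settings described above (num_ctx 20000,
// num_gpu 65), assuming the stock Ollama REST endpoint on localhost:11434.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "qwen3:32b",
		"prompt": "Hello", // placeholder prompt
		"stream": false,
		"options": map[string]any{
			"num_ctx": 20000, // fine on v0.9.5, out of VRAM on v0.10.1 per this report
			"num_gpu": 65,    // request all layers on GPU
		},
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```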

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 19:47:08 -05:00

@eavanesian1 commented on GitHub (Aug 1, 2025):

Similar issues are happening on macOS. I used to be able to run many models, but now they either fail to load or fail after a few tokens. Here is a sample of models for testing that used to run on macOS (Mac Studio with M2 Ultra, 64 GB RAM):

NAME                            ID              SIZE      
deepseek-r1:70b                 d37b54d01a76    42 GB     
deepseek-r1:8b                  6995872bfe4c    5.2 GB    
gemma3:27b                      a418f5838eaf    17 GB     
llama3.1:70b                    711a9e8463af    42 GB     
llama3.1:8b-instruct-q8_0       b158ded76fa0    8.5 GB    
llama3.3:latest                 a6eb4748fd29    42 GB     
mistral-nemo:latest             e7e06d107c6c    7.1 GB    
mistral:latest                  6577803aa9a0    4.4 GB    
nomic-embed-text:latest         0a109f422b47    274 MB    
phi4-reasoning:14b-plus-fp16    561e2e2df29e    29 GB     
qwen2.5vl:72b                   05ea68274581    48 GB     
qwen3:32b                       030ee887880f    20 GB     
qwq:latest                      009cb3f08d74    19 GB     

All of these models used to run without issues on v0.9.6, but any model larger than about 10 GB no longer seems to work. I have manually reverted to v0.9.6 and can confirm that all of these models still work with that version.

The issue occurs with the ollama run CLI, the new Ollama UI app, or any other UI that talks to the Ollama server, on both v0.10.0 and v0.10.1.


@chrisoutwright commented on GitHub (Aug 1, 2025):

Same issue (dual GPU).

I can do the following with dual 24 GB GPUs:

print_info: general.name     = Qwen2.5 Coder 32B Instruct AWQ

load_tensors: offloading 64 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 65/65 layers to GPU
load_tensors:        CUDA0 model buffer size =  9211.25 MiB
load_tensors:        CUDA1 model buffer size =  9297.10 MiB
load_tensors:          CPU model buffer size =   417.66 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 50000
llama_context: n_ctx_per_seq = 50000

With v0.10.1 or v0.10.0, I get out-of-memory issues.
What is the reason?
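
For rough orientation, a back-of-the-envelope f16 KV-cache estimate for the working configuration above is sketched below. It assumes the same 64-layer geometry with n_embd_k_gqa = n_embd_v_gqa = 1024 that the Qwen3 32B log later in this thread reports; it is not Ollama's actual memory estimator.

```go
// Rough f16 KV-cache size for n_ctx = 50000 on a 64-layer model, assuming
// n_embd_k_gqa = n_embd_v_gqa = 1024 (values taken from the Qwen3 32B log
// later in this thread). A sketch only, not Ollama's real accounting.
package main

import "fmt"

func main() {
	const (
		nCtx   = 50000       // n_ctx from the log above
		nLayer = 64          // repeating layers offloaded to GPU
		kvDim  = 1024 + 1024 // K + V entries per token per layer (assumed)
		f16    = 2           // bytes per element
	)
	total := float64(nCtx) * nLayer * kvDim * f16
	fmt.Printf("KV cache ≈ %.1f GiB total, ≈ %.1f GiB per GPU if split evenly\n",
		total/(1<<30), total/2/(1<<30))
	// ≈ 12.2 GiB total, ≈ 6.1 GiB per card, which fits next to the ~9.2 GiB of
	// weights per GPU on 24 GB cards and is consistent with the run above.
}
```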


@rick-github commented on GitHub (Aug 2, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@adcape commented on GitHub (Aug 2, 2025):

@rick-github
Here is the log from attempting to run qwen3:32b Q4_K_M in Ollama v0.10.1 with num_gpu 65 and num_ctx 20000. No other parameters are set manually. It fails in a similar way with num_ctx of 6000 and up:

llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5060 Ti) - 15713 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 5060 Ti) - 15713 MiB free
llama_model_loader: loaded meta data with 27 key-value pairs and 707 tensors from /home/none/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 32B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 32B
llama_model_loader: - kv   5:                          qwen3.block_count u32              = 64
llama_model_loader: - kv   6:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   7:                     qwen3.embedding_length u32              = 5120
llama_model_loader: - kv   8:                  qwen3.feed_forward_length u32              = 25600
llama_model_loader: - kv   9:                 qwen3.attention.head_count u32              = 64
llama_model_loader: - kv  10:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  12:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  13:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  14:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  257 tensors
llama_model_loader: - type  f16:   64 tensors
llama_model_loader: - type q4_K:  353 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 18.81 GiB (4.93 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 5120
print_info: n_layer          = 64
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 25600
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 32B
print_info: model params     = 32.76 B
print_info: general.name     = Qwen3 32B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 64 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 65/65 layers to GPU
load_tensors:        CUDA0 model buffer size =  9385.70 MiB
load_tensors:        CUDA1 model buffer size =  9456.71 MiB
load_tensors:          CPU model buffer size =   417.30 MiB
time=2025-08-02T05:25:20.727+03:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
llama_context: constructing llama_context
llama_context: n_seq_max     = 4
llama_context: n_ctx         = 80000
llama_context: n_ctx_per_seq = 20000
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (20000) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     2.40 MiB
llama_kv_cache_unified: kv_size = 80000, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 10312.50 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate CUDA0 buffer of size 10813440000
llama_init_from_model: failed to initialize the context: failed to allocate buffer for kv cache
panic: unable to create llama context

goroutine 50 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc000594000, {0x41, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc000503770, 0x0}, {0x7ffd7faf823d, ...}, ...)
        github.com/ollama/ollama/runner/llamarunner/runner.go:757 +0x389
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
        github.com/ollama/ollama/runner/llamarunner/runner.go:848 +0xb57
time=2025-08-02T05:25:21.932+03:00 level=ERROR source=server.go:464 msg="llama runner terminated" error="exit status 2"
time=2025-08-02T05:25:21.981+03:00 level=ERROR source=sched.go:487 msg="error loading llama server" error="llama runner process has terminated: cudaMalloc failed: out of memory\nalloc_tensor_range: failed to allocate CUDA0 buffer of size 10813440000"
[GIN] 2025/08/02 - 05:25:21 | 500 |  7.560696322s |       127.0.0.1 | POST     "/api/chat"
time=2025-08-02T05:25:27.198+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.217225383 runner.size="37.9 GiB" runner.vram="0 B" runner.parallel=4 runner.pid=558825 runner.model=/home/none/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
time=2025-08-02T05:25:27.449+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.467899129 runner.size="37.9 GiB" runner.vram="0 B" runner.parallel=4 runner.pid=558825 runner.model=/home/none/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312
time=2025-08-02T05:25:27.698+03:00 level=WARN source=sched.go:685 msg="gpu VRAM usage didn't recover within timeout" seconds=5.717017708 runner.size="37.9 GiB" runner.vram="0 B" runner.parallel=4 runner.pid=558825 runner.model=/home/none/.ollama/models/blobs/sha256-3291abe70f16ee9682de7bfae08db5373ea9d6497e614aaad63340ad421d6312

The GPUs' VRAM was almost entirely free beforehand (~463 MB used on each according to btop, which is typical).

Both llama.cpp and vLLM are capable of running a larger variant of this model (unsloth/qwen3-32B-GGUF:Q6_K) with context lengths of 16384 and 13500 respectively on the same system. I tried several times with the same results.
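
Without claiming this is the root cause, the numbers in the log are at least internally consistent: the KV cache is built for kv_size = 80000 (num_ctx 20000 × 4 parallel sequences, per n_seq_max above), and the buffer that cudaMalloc rejected matches an f16 K+V cache for 33 of the 64 layers on CUDA0. A small sketch of that arithmetic follows; the 33-layer split is an assumption that happens to reproduce the reported size.

```go
// Cross-check of the failed allocation above: an f16 K+V cache of
// kv_size = 80000 tokens with n_embd_k_gqa = n_embd_v_gqa = 1024 per layer,
// with an assumed 33 of the 64 layers placed on CUDA0, reproduces the
// 10813440000-byte buffer that cudaMalloc rejected. A sketch only.
package main

import "fmt"

func main() {
	const (
		kvSize     = 80000       // llama_kv_cache_unified: kv_size (20000 × 4 parallel)
		kvDim      = 1024 + 1024 // K + V entries per token per layer (from the log)
		f16        = 2           // bytes per element
		layersGPU0 = 33          // assumed share of the 64 layers on device 0
	)
	perLayer := int64(kvSize) * kvDim * f16 // KV bytes per layer
	gpu0 := perLayer * layersGPU0           // KV bytes placed on CUDA0

	fmt.Println(gpu0)                               // 10813440000 bytes
	fmt.Printf("%.2f MiB\n", float64(gpu0)/(1<<20)) // 10312.50 MiB on a 16 GiB card
}
```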


@eavanesian1 commented on GitHub (Aug 2, 2025):

Here's one from earlier today:

time=2025-08-01T15:10:34.503-07:00 level=INFO source=routes.go:1235 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/ea/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-08-01T15:10:34.506-07:00 level=INFO source=images.go:476 msg="total blobs: 57"
time=2025-08-01T15:10:34.506-07:00 level=INFO source=images.go:483 msg="total unused blobs removed: 0"
time=2025-08-01T15:10:34.507-07:00 level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.6)"
time=2025-08-01T15:10:34.582-07:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="51.8 GiB" available="51.8 GiB"
time=2025-08-01T15:10:51.925-07:00 level=WARN source=types.go:573 msg="invalid option provided" option=use_mlock
time=2025-08-01T15:10:51.958-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/ea/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 gpu=0 parallel=2 available=55662788608 required="43.7 GiB"
time=2025-08-01T15:10:51.959-07:00 level=INFO source=server.go:135 msg="system memory" total="64.0 GiB" free="54.2 GiB" free_swap="0 B"
time=2025-08-01T15:10:51.959-07:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[51.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="43.7 GiB" memory.required.partial="43.7 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[43.7 GiB]" memory.weights.total="39.0 GiB" memory.weights.repeating="38.2 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
llama_model_load_from_file_impl: using device Metal (Apple M2 Ultra) - 53084 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/ea/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   4:                         general.size_label str              = 70B
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  15:                          general.file_type u32              = 15
llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_K:  441 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 39.59 GiB (4.82 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 70.55 B
print_info: general.name     = DeepSeek R1 Distill Llama 70B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-01T15:10:52.127-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/Applications/Ollama.0.9.6.app/Contents/Resources/ollama runner --model /Users/ea/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 16 --parallel 2 --port 53920"
time=2025-08-01T15:10:52.130-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-08-01T15:10:52.130-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-01T15:10:52.131-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-01T15:10:52.140-07:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-08-01T15:10:52.141-07:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-08-01T15:10:52.142-07:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:53920"
llama_model_load_from_file_impl: using device Metal (Apple M2 Ultra) - 53084 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/ea/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv   4:                         general.size_label str              = 70B
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  14:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  15:                          general.file_type u32              = 15
llama_model_loader: - kv  16:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  17:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  21:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  22:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 128001
llama_model_loader: - kv  26:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  27:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_K:  441 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   81 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 39.59 GiB (4.82 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 8192
print_info: n_layer          = 80
print_info: n_head           = 64
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 28672
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 70B
print_info: model params     = 70.55 B
print_info: general.name     = DeepSeek R1 Distill Llama 70B
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token        = 128001 '<|end▁of▁sentence|>'
print_info: EOT token        = 128001 '<|end▁of▁sentence|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: PAD token        = 128001 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end▁of▁sentence|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-08-01T15:10:52.383-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"

load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   563.62 MiB
load_tensors: Metal_Mapped model buffer size = 40543.12 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M2 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 55662.79 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f16                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96       (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
llama_context:        CPU  output buffer size =     1.04 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1, padding = 32
llama_kv_cache_unified:      Metal KV buffer size =  2560.00 MiB
llama_kv_cache_unified: KV self size  = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_context:      Metal compute buffer size =  1104.00 MiB
llama_context:        CPU compute buffer size =    32.01 MiB
llama_context: graph nodes  = 2726
llama_context: graph splits = 2
time=2025-08-01T15:10:57.661-07:00 level=INFO source=server.go:637 msg="llama runner started in 5.53 seconds"
[GIN] 2025/08/01 - 15:11:24 | 200 | 32.944458792s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/08/01 - 15:32:21 | 200 |     209.459µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/08/01 - 15:32:21 | 200 |   52.144708ms |       127.0.0.1 | POST     "/api/show"
time=2025-08-01T15:32:21.196-07:00 level=INFO source=sched.go:788 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/ea/.ollama/models/blobs/sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d gpu=0 parallel=2 available=55662788608 required="7.1 GiB"
time=2025-08-01T15:32:21.196-07:00 level=INFO source=server.go:135 msg="system memory" total="64.0 GiB" free="31.8 GiB" free_swap="0 B"
time=2025-08-01T15:32:21.196-07:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=37 layers.offload=37 layers.split="" memory.available="[51.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="7.1 GiB" memory.required.partial="7.1 GiB" memory.required.kv="1.1 GiB" memory.required.allocations="[7.1 GiB]" memory.weights.total="4.5 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M2 Ultra) - 53084 MiB free
llama_model_loader: loaded meta data with 32 key-value pairs and 399 tensors from /Users/ea/.ollama/models/blobs/sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 0528 Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-0528-Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = mit
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 131072
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                    qwen3.rope.scaling.type str              = yarn
llama_model_loader: - kv  17:                  qwen3.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  18: qwen3.rope.scaling.original_context_length u32              = 32768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 151645
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 28
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.19 B
print_info: general.name     = DeepSeek R1 0528 Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151645 '<|end▁of▁sentence|>'
print_info: EOT token        = 151645 '<|end▁of▁sentence|>'
print_info: PAD token        = 151645 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151645 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-08-01T15:32:21.325-07:00 level=INFO source=server.go:438 msg="starting llama server" cmd="/Applications/Ollama.0.9.6.app/Contents/Resources/ollama runner --model /Users/ea/.ollama/models/blobs/sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d --ctx-size 8192 --batch-size 512 --n-gpu-layers 37 --threads 16 --parallel 2 --port 54533"
time=2025-08-01T15:32:21.329-07:00 level=INFO source=sched.go:483 msg="loaded runners" count=1
time=2025-08-01T15:32:21.329-07:00 level=INFO source=server.go:598 msg="waiting for llama runner to start responding"
time=2025-08-01T15:32:21.329-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server not responding"
time=2025-08-01T15:32:21.339-07:00 level=INFO source=runner.go:815 msg="starting go runner"
time=2025-08-01T15:32:21.342-07:00 level=INFO source=ggml.go:104 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-08-01T15:32:21.343-07:00 level=INFO source=runner.go:874 msg="Server listening on 127.0.0.1:54533"
llama_model_load_from_file_impl: using device Metal (Apple M2 Ultra) - 53084 MiB free
llama_model_loader: loaded meta data with 32 key-value pairs and 399 tensors from /Users/ea/.ollama/models/blobs/sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 0528 Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-0528-Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = mit
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 131072
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                    qwen3.rope.scaling.type str              = yarn
llama_model_loader: - kv  17:                  qwen3.rope.scaling.factor f32              = 4.000000
llama_model_loader: - kv  18: qwen3.rope.scaling.original_context_length u32              = 32768
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 151645
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  28:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - kv  31:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 28
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_layer          = 36
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 12288
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = yarn
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 0.25
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 8B
print_info: model params     = 8.19 B
print_info: general.name     = DeepSeek R1 0528 Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|begin▁of▁sentence|>'
print_info: EOS token        = 151645 '<|end▁of▁sentence|>'
print_info: EOT token        = 151645 '<|end▁of▁sentence|>'
print_info: PAD token        = 151645 '<|end▁of▁sentence|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151645 '<|end▁of▁sentence|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-08-01T15:32:21.581-07:00 level=INFO source=server.go:632 msg="waiting for server to become available" status="llm server loading model"
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors:   CPU_Mapped model buffer size =   333.84 MiB
load_tensors: Metal_Mapped model buffer size =  4977.63 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 2
llama_context: n_ctx         = 8192
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 1024
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 0.25
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple M2 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple M2 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = false
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = false
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 55662.79 MB
ggml_metal_init: skipping kernel_get_rows_bf16                     (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row              (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4                (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16                  (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32                (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32                   (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f16                (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96           (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256          (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512   (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96       (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256      (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
llama_context:        CPU  output buffer size =     1.19 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified:      Metal KV buffer size =  1152.00 MiB
llama_kv_cache_unified: KV self size  = 1152.00 MiB, K (f16):  576.00 MiB, V (f16):  576.00 MiB
llama_context:      Metal compute buffer size =   560.00 MiB
llama_context:        CPU compute buffer size =    24.01 MiB
llama_context: graph nodes  = 1374
llama_context: graph splits = 2
time=2025-08-01T15:32:23.844-07:00 level=INFO source=server.go:637 msg="llama runner started in 2.51 seconds"
[GIN] 2025/08/01 - 15:32:23 | 200 |  2.701937917s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/08/01 - 15:32:43 | 200 |  3.266171916s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/08/01 - 15:33:01 | 200 |  5.532237875s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/08/01 - 15:36:28 | 200 | 13.490628083s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/08/01 - 15:37:43 | 200 |  26.34053225s |       127.0.0.1 | POST     "/api/chat"
Author
Owner

@rick-github commented on GitHub (Aug 2, 2025):

@adcape

> No other parameters are set manually.

OLLAMA_NUM_PARALLEL seems to be set to 4, which quadruples the size of the context buffer. llama.cpp and vLLM may be using the equivalent of OLLAMA_NUM_PARALLEL=1, allowing a larger context.
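
For a sense of scale, here is a rough back-of-the-envelope sketch (Python, not Ollama's actual accounting) of how the KV cache grows with num_ctx times num_parallel. The 8B figures are read straight off the log above; the Qwen3-32B line assumes 64 layers and the same 1024-wide GQA KV dimension, which is an assumption rather than something shown in this log:

```python
# Rough KV-cache estimate: the cache holds num_ctx * num_parallel slots, with f16 K and V
# tensors per layer. This is the generic n_layer * kv_size * n_embd_kv_gqa approximation,
# not Ollama's exact bookkeeping.

def kv_cache_mib(n_layer, n_embd_kv_gqa, num_ctx, num_parallel, bytes_per_elem=2):
    kv_size = num_ctx * num_parallel              # total cache slots across parallel sequences
    per_tensor = n_layer * kv_size * n_embd_kv_gqa * bytes_per_elem
    return 2 * per_tensor / (1024 ** 2)           # K + V combined, in MiB

# DeepSeek-R1-0528-Qwen3-8B, values from the log: n_layer=36, n_embd_k_gqa=1024, 2 x 4096 ctx.
print(kv_cache_mib(36, 1024, 4096, 2))   # 1152.0 -> matches "KV self size = 1152.00 MiB"

# Same per-request context, but OLLAMA_NUM_PARALLEL=4 quadruples the buffer.
print(kv_cache_mib(36, 1024, 4096, 4))   # 2304.0

# Hypothetical Qwen3-32B case from the original report (assumed: 64 layers, 1024-wide KV).
print(kv_cache_mib(64, 1024, 20000, 4) / 1024)   # ~19.5 GiB of KV cache alone
```

Under those assumptions, with parallelism at 4 a num_ctx of 20000 is allocated like an 80000-token cache, which would be consistent with the same model and context fitting on v0.9.5 but not after the variable was set.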

@eavanesian1

There are no failures in this log.

Author
Owner

@adcape commented on GitHub (Aug 2, 2025):

@rick-github A valid point, my bad. I set it in the environment when I read that OLLAMA_NUM_PARALLEL had been decreased to 1 by default, and then forgot about it.

But please explain: the old default was OLLAMA_NUM_PARALLEL=2, if I'm not mistaken? I was running two sessions against the same model via open-webui, with two parallel requests, and it simply ran slower. I'm not sure whether it was the same model, though, or whether my requests were complex enough to use more than half of the available context.

Now I tried setting OLLAMA_NUM_PARALLEL=2 manually, and even v0.9.5 crashed with num_ctx=13000, but it ran fine with the variable unset.
When the variable is not set explicitly in earlier versions, is the context buffer allocation dynamic? Or does it simply ignore the number of sessions until the context buffer is full?

Anyway, thank you very much for the support and for finding my error.
Feel free to close this issue, or I'll close it myself later today.

Author
Owner

@rick-github commented on GitHub (Aug 2, 2025):

> When the variable is not set explicitly in earlier versions, is the context buffer allocation dynamic?

It was dynamic. If the variable was unset, the server would assess available VRAM and set parallelism to 1 or a higher value, depending on available resources. That higher value used to be 4 and was lowered to 2 a few versions ago; as of 0.10.0, the default is simply 1.
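
A minimal sketch of that kind of selection, assuming a simple "largest parallelism that still fits" rule; the fit estimator and the numbers are illustrative stand-ins, not the scheduler's real code:

```python
# Illustrative only: choose the largest parallelism whose estimated footprint fits in VRAM.
# estimate_required_vram stands in for the scheduler's real memory estimate (weights, KV
# cache, graph buffers); the candidate values here are arbitrary and just for illustration.

def pick_num_parallel(available_vram, estimate_required_vram, candidates=(4, 2, 1)):
    for parallel in candidates:
        if estimate_required_vram(parallel) <= available_vram:
            return parallel
    return 1  # fall back to a single sequence; partial offload handles whatever still doesn't fit

# Hypothetical numbers: ~18 GiB of weights plus ~5 GiB of KV cache per parallel slot.
weights = 18 * 1024**3
kv_per_slot = 5 * 1024**3
print(pick_num_parallel(32 * 1024**3, lambda p: weights + p * kv_per_slot))   # -> 2
```

With OLLAMA_NUM_PARALLEL set explicitly, this selection is skipped entirely, which is why a hard-coded 4 can push an otherwise-fitting model over the VRAM limit.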

Author
Owner

@adcape commented on GitHub (Aug 2, 2025):

@rick-github Thank you very much for the explanations and support!
Closing this issue now, since in my case at least the cause of the problem is clear. Other participants probably have different root causes, which would be better discussed separately.

Author
Owner

@eavanesian1 commented on GitHub (Aug 4, 2025):

Not sure what changed, but after all the installing and uninstalling of individual versions to troubleshoot, my v0.10.0 and v0.10.1 now work with the larger models like before! I'll take the win and keep this marked as closed.

Reference: github-starred/ollama#7681