[GH-ISSUE #11000] 0.90 issue with Modelfile #7253

Closed
opened 2026-04-12 19:17:57 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @goactiongo on GitHub (Jun 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11000

What is the issue?

I created a new model from an edited Modelfile using the following commands:

ollama show qwen3:32b --modelfile > Modelfile
echo PARAMETER num_predict -1 >> Modelfile
echo PARAMETER num_ctx 125000 >> Modelfile
ollama create qwen3:32b_125k -f Modelfile 

The new model doesn't seem to work well; the relevant log is attached below. Please take a look.
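One thing worth checking with this append-with-`echo` workflow: if the dumped Modelfile already contains a `PARAMETER num_ctx` or `PARAMETER num_predict` line, appending adds a duplicate, and it can be unclear which value takes effect. A minimal sketch (hypothetical helper, not part of Ollama) that flags duplicate keys before running `ollama create`:

```python
import os
from collections import Counter

def parameter_counts(path):
    """Count PARAMETER keys in a Modelfile. Duplicates are easy to
    introduce when appending lines with `echo >>`."""
    keys = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "PARAMETER":
                keys.append(parts[1])  # e.g. "num_ctx", "num_predict"
    return Counter(keys)

if os.path.exists("Modelfile"):
    dupes = {k: n for k, n in parameter_counts("Modelfile").items() if n > 1}
    if dupes:
        print("duplicate PARAMETER keys:", dupes)
```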

(screenshot attached)

qwen3_32b.txt
qwen3_32b_numctx125k.txt

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 19:17:57 -05:00
Author
Owner

@rick-github commented on GitHub (Jun 6, 2025):

What does "didn't work well" mean? Didn't answer prompts? Answered incorrectly? Too slow?

Author
Owner

@goactiongo commented on GitHub (Jun 6, 2025):

Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: loading model tensors, this can take a while... (mmap = false)
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 0 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 1 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 2 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 3 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 4 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 5 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 6 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 7 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 8 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 9 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 10 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 11 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 12 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 13 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 14 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 15 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 16 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 17 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 18 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 19 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 20 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 21 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 22 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 23 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 24 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 25 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 26 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 27 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 28 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 29 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 30 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 31 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 32 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 33 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 34 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 35 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 36 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 37 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 38 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 39 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 40 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 41 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 42 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 43 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 44 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 45 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 46 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 47 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 48 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 49 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 50 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 51 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 52 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 53 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 54 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 55 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 56 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 57 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 58 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 59 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 60 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 61 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 62 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 63 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: layer 64 assigned to device CPU, is_swa = 0
Jun 06 19:12:13 ai001 ollama[590926]: load_tensors: CPU model buffer size = 19259.71 MiB
Jun 06 19:12:13 ai001 ollama[590926]: load_all_data: no device found for buffer type CPU for async uploads
Jun 06 19:12:13 ai001 ollama[590926]: time=2025-06-06T19:12:13.955Z level=DEBUG source=server.go:636 msg="model load progress 0.02"
Jun 06 19:12:14 ai001 ollama[590926]: time=2025-06-06T19:12:14.205Z level=DEBUG source=server.go:636 msg="model load progress 0.06"
Jun 06 19:12:14 ai001 ollama[590926]: time=2025-06-06T19:12:14.456Z level=DEBUG source=server.go:636 msg="model load progress 0.08"
Jun 06 19:12:14 ai001 ollama[590926]: time=2025-06-06T19:12:14.707Z level=DEBUG source=server.go:636 msg="model load progress 0.11"
Jun 06 19:12:14 ai001 ollama[590926]: time=2025-06-06T19:12:14.958Z level=DEBUG source=server.go:636 msg="model load progress 0.13"
Jun 06 19:12:15 ai001 ollama[590926]: time=2025-06-06T19:12:15.208Z level=DEBUG source=server.go:636 msg="model load progress 0.16"
Jun 06 19:12:15 ai001 ollama[590926]: time=2025-06-06T19:12:15.459Z level=DEBUG source=server.go:636 msg="model load progress 0.18"
Jun 06 19:12:15 ai001 ollama[590926]: time=2025-06-06T19:12:15.710Z level=DEBUG source=server.go:636 msg="model load progress 0.21"
Jun 06 19:12:15 ai001 ollama[590926]: time=2025-06-06T19:12:15.961Z level=DEBUG source=server.go:636 msg="model load progress 0.23"
Jun 06 19:12:16 ai001 ollama[590926]: time=2025-06-06T19:12:16.211Z level=DEBUG source=server.go:636 msg="model load progress 0.26"
Jun 06 19:12:16 ai001 ollama[590926]: time=2025-06-06T19:12:16.462Z level=DEBUG source=server.go:636 msg="model load progress 0.28"
Jun 06 19:12:16 ai001 ollama[590926]: time=2025-06-06T19:12:16.713Z level=DEBUG source=server.go:636 msg="model load progress 0.31"
Jun 06 19:12:16 ai001 ollama[590926]: time=2025-06-06T19:12:16.963Z level=DEBUG source=server.go:636 msg="model load progress 0.33"
Jun 06 19:12:17 ai001 ollama[590926]: time=2025-06-06T19:12:17.214Z level=DEBUG source=server.go:636 msg="model load progress 0.36"
Jun 06 19:12:17 ai001 ollama[590926]: time=2025-06-06T19:12:17.465Z level=DEBUG source=server.go:636 msg="model load progress 0.38"
Jun 06 19:12:17 ai001 ollama[590926]: time=2025-06-06T19:12:17.715Z level=DEBUG source=server.go:636 msg="model load progress 0.41"
Jun 06 19:12:17 ai001 ollama[590926]: time=2025-06-06T19:12:17.966Z level=DEBUG source=server.go:636 msg="model load progress 0.43"
Jun 06 19:12:18 ai001 ollama[590926]: time=2025-06-06T19:12:18.217Z level=DEBUG source=server.go:636 msg="model load progress 0.46"
Jun 06 19:12:18 ai001 ollama[590926]: time=2025-06-06T19:12:18.468Z level=DEBUG source=server.go:636 msg="model load progress 0.48"
Jun 06 19:12:18 ai001 ollama[590926]: time=2025-06-06T19:12:18.718Z level=DEBUG source=server.go:636 msg="model load progress 0.50"
Jun 06 19:12:18 ai001 ollama[590926]: time=2025-06-06T19:12:18.969Z level=DEBUG source=server.go:636 msg="model load progress 0.53"
Jun 06 19:12:19 ai001 ollama[590926]: time=2025-06-06T19:12:19.220Z level=DEBUG source=server.go:636 msg="model load progress 0.55"
Jun 06 19:12:19 ai001 ollama[590926]: time=2025-06-06T19:12:19.470Z level=DEBUG source=server.go:636 msg="model load progress 0.58"
Jun 06 19:12:19 ai001 ollama[590926]: time=2025-06-06T19:12:19.721Z level=DEBUG source=server.go:636 msg="model load progress 0.60"
Jun 06 19:12:19 ai001 ollama[590926]: time=2025-06-06T19:12:19.972Z level=DEBUG source=server.go:636 msg="model load progress 0.63"
Jun 06 19:12:20 ai001 ollama[590926]: time=2025-06-06T19:12:20.223Z level=DEBUG source=server.go:636 msg="model load progress 0.65"
Jun 06 19:12:20 ai001 ollama[590926]: time=2025-06-06T19:12:20.473Z level=DEBUG source=server.go:636 msg="model load progress 0.68"
Jun 06 19:12:20 ai001 ollama[590926]: time=2025-06-06T19:12:20.724Z level=DEBUG source=server.go:636 msg="model load progress 0.70"
Jun 06 19:12:20 ai001 ollama[590926]: time=2025-06-06T19:12:20.975Z level=DEBUG source=server.go:636 msg="model load progress 0.73"
Jun 06 19:12:21 ai001 ollama[590926]: time=2025-06-06T19:12:21.225Z level=DEBUG source=server.go:636 msg="model load progress 0.75"
Jun 06 19:12:21 ai001 ollama[590926]: time=2025-06-06T19:12:21.476Z level=DEBUG source=server.go:636 msg="model load progress 0.78"
Jun 06 19:12:21 ai001 ollama[590926]: time=2025-06-06T19:12:21.727Z level=DEBUG source=server.go:636 msg="model load progress 0.80"
Jun 06 19:12:21 ai001 ollama[590926]: time=2025-06-06T19:12:21.977Z level=DEBUG source=server.go:636 msg="model load progress 0.83"
Jun 06 19:12:22 ai001 ollama[590926]: time=2025-06-06T19:12:22.228Z level=DEBUG source=server.go:636 msg="model load progress 0.86"
Jun 06 19:12:22 ai001 ollama[590926]: time=2025-06-06T19:12:22.479Z level=DEBUG source=server.go:636 msg="model load progress 0.88"
Jun 06 19:12:22 ai001 ollama[590926]: time=2025-06-06T19:12:22.729Z level=DEBUG source=server.go:636 msg="model load progress 0.90"
Jun 06 19:12:22 ai001 ollama[590926]: time=2025-06-06T19:12:22.980Z level=DEBUG source=server.go:636 msg="model load progress 0.93"
Jun 06 19:12:23 ai001 ollama[590926]: time=2025-06-06T19:12:23.230Z level=DEBUG source=server.go:636 msg="model load progress 0.95"
Jun 06 19:12:23 ai001 ollama[590926]: time=2025-06-06T19:12:23.481Z level=DEBUG source=server.go:636 msg="model load progress 0.98"
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: constructing llama_context
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_seq_max = 2
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_ctx = 250000
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_ctx_per_seq = 125000
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_batch = 1024
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_ubatch = 512
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: causal_attn = 1
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: flash_attn = 0
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: freq_base = 1000000.0
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: freq_scale = 1
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: n_ctx_per_seq (125000) > n_ctx_train (40960) -- possible training context overflow
Jun 06 19:12:23 ai001 ollama[590926]: set_abort_callback: call
Jun 06 19:12:23 ai001 ollama[590926]: llama_context: CPU output buffer size = 1.20 MiB
Jun 06 19:12:23 ai001 ollama[590926]: create_memory: n_ctx = 250016 (padded)
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: kv_size = 250016, type_k = 'f16', type_v = 'f16', n_layer = 64, can_shift = 1, padding = 32
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 0: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 1: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 2: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 3: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 4: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 5: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 6: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 7: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 8: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 9: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 10: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 11: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 12: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 13: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 14: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 15: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 16: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 17: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 18: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 19: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 20: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 21: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 22: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 23: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 24: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 25: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 26: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 27: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 28: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 29: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 30: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 31: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 32: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 33: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 34: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 35: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 36: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 37: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 38: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 39: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 40: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 41: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 42: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 43: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 44: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 45: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 46: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 47: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 48: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 49: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 50: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 51: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 52: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 53: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 54: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 55: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 56: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 57: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 58: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 59: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 60: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 61: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 62: dev = CPU
Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 63: dev = CPU
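The numbers in the log above fit together: with the two parallel sequences reported as `n_seq_max = 2`, the requested `num_ctx` is doubled to get the total allocated context, and 125000 per sequence is well above the model's training context of 40960, which is what triggers the "possible training context overflow" warning. A minimal sketch of that arithmetic:

```python
# Context arithmetic as reported by the log above.
num_ctx = 125_000      # PARAMETER num_ctx from the Modelfile
n_seq_max = 2          # parallel sequences ("llama_context: n_seq_max = 2")
n_ctx_train = 40_960   # training context ("n_ctx_train (40960)")

n_ctx = num_ctx * n_seq_max               # total context the server allocates
padding = 32                              # KV-cache padding from the log
kv_size = -(-n_ctx // padding) * padding  # n_ctx rounded up to the padding

print(n_ctx)    # 250000, matching "llama_context: n_ctx = 250000"
print(kv_size)  # 250016, matching "create_memory: n_ctx = 250016 (padded)"
print(num_ctx > n_ctx_train)  # True: the training-context warning fires
```

This doesn't by itself prove the model is broken, but running far past `n_ctx_train` commonly degrades output quality, and a 250016-entry f16 KV cache across 64 layers on CPU is also very large, so both quality and speed are plausible suspects here.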

11: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 12: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 13: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 14: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 15: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 16: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 17: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 18: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 19: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 20: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 21: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 22: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 23: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 24: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 25: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 26: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 27: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 28: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 29: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 30: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 31: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 32: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 33: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 34: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 35: dev = CPU Jun 06 19:12:23 
ai001 ollama[590926]: llama_kv_cache_unified: layer 36: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 37: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 38: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 39: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 40: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 41: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 42: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 43: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 44: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 45: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 46: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 47: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 48: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 49: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 50: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 51: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 52: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 53: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 54: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 55: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 56: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 57: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 58: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 59: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: 
llama_kv_cache_unified: layer 60: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 61: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 62: dev = CPU Jun 06 19:12:23 ai001 ollama[590926]: llama_kv_cache_unified: layer 63: dev = CPU

@rick-github commented on GitHub (Jun 6, 2025):

So what's the question? Why is the model running on the CPU? The context is too large to fit on a GPU.
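Rough numbers behind that: a back-of-the-envelope estimate of the f16 KV-cache size for the padded context shown in the log. The 8 KV heads × 128 head dim figures are assumed Qwen3-32B values, not taken from the log; `kv_size` and `n_layer` are from the `llama_kv_cache_unified` line.

```shell
# Sketch: estimate f16 KV-cache size for the padded context in the log.
# ASSUMPTION: Qwen3-32B attention shape of 8 KV heads x 128 head dim.
kv_size=250016                 # create_memory: n_ctx = 250016 (padded)
n_layer=64                     # llama_kv_cache_unified: n_layer = 64
n_embd_kv=$((8 * 128))         # assumed KV width per layer
bytes=$((2 * 2 * n_layer * kv_size * n_embd_kv))   # K+V, 2 bytes per f16 value
echo "$((bytes / 1024 / 1024 / 1024)) GiB"         # prints "61 GiB"
```

Roughly 61 GiB for the KV cache alone (before weights), which doesn't fit on a typical GPU, so the whole model falls back to CPU.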

<!-- gh-comment-id:2950723924 -->

@goactiongo commented on GitHub (Jun 6, 2025):

Got it. Thanks.

<!-- gh-comment-id:2950726477 -->
Reference: github-starred/ollama#7253