[GH-ISSUE #10626] Error with llama 3.3 q8 models and variations #6991

Closed
opened 2026-04-12 18:53:07 -05:00 by GiteaMirror · 3 comments

Originally created by @fedesantamarina on GitHub (May 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10626

What is the issue?

I downloaded llama 3.3 70B models (llama, deepseek, etc.).
When I try to use them I get this error:
Error: POST predict: Post "http://127.0.0.1:50744/completion": EOF
ollama run deepseek-r1:70b-llama-distill-q8_0

The q6 llama 3.3 variants work; the q8 variants fail.
M3, 96 GB
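A rough sizing sketch, using figures reported in the server log later in this issue (treat it as an estimate, not a diagnosis): at Q8_0 the 70B weights plus KV cache and compute buffers land almost exactly at the ~72 GiB Metal working-set limit macOS reports for this machine, leaving essentially no headroom, whereas q6 variants are several GiB smaller.

```shell
# Back-of-the-envelope memory check for the Q8_0 70B model on a 96 GB M3.
# All numbers below are taken from the server log in this issue and are only estimates.
python3 - <<'EOF'
params = 70.55e9             # model parameters (print_info: model params)
bpw = 8.50                   # bits per weight for Q8_0 (print_info: file size ... BPW)
weights_gib = params * bpw / 8 / 2**30
kv_gib = 1280 / 1024         # Metal KV buffer, 1280 MiB at ctx 4096
compute_gib = 584 / 1024     # Metal compute buffer, 584 MiB
total = weights_gib + kv_gib + compute_gib
print(f"weights ~ {weights_gib:.1f} GiB, total ~ {total:.1f} GiB")
print("Metal recommendedMaxWorkingSetSize ~ 72.0 GiB")   # 77309411328 bytes in the log
EOF
```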

Relevant log output

Error: POST predict: Post "http://127.0.0.1:50744/completion": EOF

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

ollama version is 0.6.8

GiteaMirror added the bug label 2026-04-12 18:53:07 -05:00

@rick-github commented on GitHub (May 9, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.

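For reference, the troubleshooting guide linked above describes where the server log lives on macOS; a minimal way to capture it (path per that guide) is:

```shell
# View the Ollama server log on macOS (location documented in the troubleshooting guide)
cat ~/.ollama/logs/server.log
# or follow it live while reproducing the failure:
tail -f ~/.ollama/logs/server.log
```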

@fedesantamarina commented on GitHub (May 9, 2025):

time=2025-05-08T20:02:22.523-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:02:22.533-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:02:22 | 200 | 20.756875ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:02:22.552-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:02:22.568-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:02:22 | 200 | 33.622ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:02:22.586-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:02:22.597-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:02:22 | 200 | 23.962541ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:02:22.610-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:02:22.620-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:02:22 | 200 | 18.945041ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.499-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.516-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 53.037709ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.541-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.558-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 38.263583ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.571-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.578-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 16.637958ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.589-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.596-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 14.940584ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.610-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.620-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 20.977208ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.640-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.655-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 33.585541ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.672-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.683-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 23.277959ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T20:32:23.696-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T20:32:23.705-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 20:32:23 | 200 | 18.569417ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.593-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.611-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 53.534ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.637-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.654-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 39.657084ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.667-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.675-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 16.892ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.686-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.693-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 15.053417ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.707-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.717-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 20.684458ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.737-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.753-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 34.07175ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.771-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.782-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 23.4775ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:02:24.795-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:02:24.804-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:02:24 | 200 | 18.583584ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:25.864-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:25.882-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:25 | 200 | 50.80525ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:25.909-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:25.926-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:25 | 200 | 39.631834ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:25.939-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:25.947-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:25 | 200 | 16.782208ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:25.959-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:25.967-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:25 | 200 | 16.402708ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:25.979-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:25.989-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:25 | 200 | 21.207542ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:26.007-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:26.023-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:26 | 200 | 33.6505ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:26.037-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:26.048-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:26 | 200 | 23.737334ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T21:32:26.061-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T21:32:26.070-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 21:32:26 | 200 | 18.83925ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.847-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:26.865-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:26 | 200 | 50.963584ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.890-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:26.906-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:26 | 200 | 38.188709ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.919-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:26.927-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:26 | 200 | 16.737333ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.938-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:26.945-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:26 | 200 | 15.037375ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.958-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:26.968-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:26 | 200 | 21.319542ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:26.988-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:27.004-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:27 | 200 | 33.983583ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:27.022-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:27.033-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:27 | 200 | 23.821541ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:02:27.046-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:02:27.055-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:02:27 | 200 | 18.996125ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.718-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.736-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 50.538291ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.763-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.780-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 40.121417ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.794-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.802-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 17.03225ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.813-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.820-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 15.321958ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.834-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.844-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 21.04425ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.864-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.879-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 33.307209ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.896-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.908-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 23.7945ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:32:27.921-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:32:27.930-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:32:27 | 200 | 18.800792ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:51:31.677-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:51:31.695-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:51:31.708-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:51:31.709-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:51:31.710-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:51:31.711-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB"
time=2025-05-08T22:51:31.711-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="45.4 GiB" free_swap="0 B"
time=2025-05-08T22:51:31.711-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:51:31.711-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-08T22:51:31.846-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 50706"
time=2025-05-08T22:51:31.848-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-08T22:51:31.848-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-08T22:51:31.849-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-08T22:51:31.865-03:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-05-08T22:51:31.868-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-08T22:51:31.868-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:50706"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-05-08T22:51:32.100-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"

load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors: CPU_Mapped model buffer size = 1064.62 MiB
load_tensors: Metal_Mapped model buffer size = 71494.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Ultra
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = false
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
llama_context: CPU output buffer size = 0.52 MiB
init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
init: Metal KV buffer size = 1280.00 MiB
llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_context: Metal compute buffer size = 584.00 MiB
llama_context: CPU compute buffer size = 24.01 MiB
llama_context: graph nodes = 2726
llama_context: graph splits = 2
time=2025-05-08T22:51:46.425-03:00 level=INFO source=server.go:628 msg="llama runner started in 14.58 seconds"
ggml_metal_graph_compute: command buffer 0 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
graph_compute: ggml_backend_sched_graph_compute_async failed with error -1
llama_decode: failed to decode, ret = -3
panic: failed to decode batch: llama_decode failed with code -3

goroutine 15 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x14000312360, {0x1018bfde0, 0x140001785a0})
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 +0xa5c
time=2025-05-08T22:51:56.012-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2"
[GIN] 2025/05/08 - 22:51:56 | 200 | 24.370679708s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/05/08 - 22:52:09 | 200 | 71.75µs | 127.0.0.1 | HEAD "/"
time=2025-05-08T22:52:09.236-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:52:09.254-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 22:52:09 | 200 | 63.119792ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T22:52:09.271-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:52:09.315-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:52:09.323-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T22:52:09.324-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:52:09.324-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:52:09.325-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB"
time=2025-05-08T22:52:09.325-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="29.0 GiB" free_swap="0 B"
time=2025-05-08T22:52:09.325-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-08T22:52:09.325-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-08T22:52:09.455-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 50744"
time=2025-05-08T22:52:09.458-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-08T22:52:09.458-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-08T22:52:09.458-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-08T22:52:09.466-03:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-05-08T22:52:09.468-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-08T22:52:09.468-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:50744"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-05-08T22:52:09.710-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"

load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors: CPU_Mapped model buffer size = 1064.62 MiB
load_tensors: Metal_Mapped model buffer size = 71494.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Ultra
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = false
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
llama_context: CPU output buffer size = 0.52 MiB
init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
init: Metal KV buffer size = 1280.00 MiB
llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_context: Metal compute buffer size = 584.00 MiB
llama_context: CPU compute buffer size = 24.01 MiB
llama_context: graph nodes = 2726
llama_context: graph splits = 2
time=2025-05-08T22:52:12.224-03:00 level=INFO source=server.go:628 msg="llama runner started in 2.77 seconds"
[GIN] 2025/05/08 - 22:52:12 | 200 | 2.96530075s | 127.0.0.1 | POST "/api/generate"
time=2025-05-08T22:53:21.959-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
ggml_metal_graph_compute: command buffer 0 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
graph_compute: ggml_backend_sched_graph_compute_async failed with error -1
llama_decode: failed to decode, ret = -3
panic: failed to decode batch: llama_decode failed with code -3

goroutine 51 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x140004ce360, {0x10147fde0, 0x1400061cf50})
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 +0xa5c
time=2025-05-08T22:53:22.586-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2"
[GIN] 2025/05/08 - 22:53:22 | 200 | 658.296083ms | 127.0.0.1 | POST "/api/chat"
time=2025-05-08T23:02:28.785-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.802-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 53.565666ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.830-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.846-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 39.568209ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.862-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.869-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 18.360375ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.883-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.889-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 16.8825ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.905-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.915-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 22.970333ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.938-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.954-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 36.462ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:28.975-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:28.986-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:28 | 200 | 26.994667ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:02:29.001-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:02:29.010-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:02:29 | 200 | 21.149ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.291-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.307-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 49.563958ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.331-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.348-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 39.536583ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.360-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.367-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 16.839ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.376-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.384-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 15.18525ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.395-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.405-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 20.930625ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.423-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.439-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 33.590708ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.454-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.465-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 23.624917ms | 127.0.0.1 | POST "/api/show"
time=2025-05-08T23:18:35.478-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-08T23:18:35.487-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/08 - 23:18:35 | 200 | 19.124875ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/05/08 - 23:51:34 | 200 | 62.25µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/05/09 - 00:16:25 | 200 | 62.208µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/05/09 - 00:16:25 | 200 | 10.725375ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/05/09 - 00:16:45 | 200 | 110.583µs | 127.0.0.1 | HEAD "/"
time=2025-05-09T00:16:45.956-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:45.982-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 00:16:45 | 200 | 76.717625ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T00:16:46.010-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:46.027-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:46.043-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:46.044-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=0
time=2025-05-09T00:16:46.046-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-9d507a36062c2845dd3bb3e93364e9abc1607118acd8650727a700f72fb126e5 gpu=0 parallel=2 available=77309411328 required="66.3 GiB"
time=2025-05-09T00:16:46.046-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="75.4 GiB" free_swap="0 B"
time=2025-05-09T00:16:46.046-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=0
time=2025-05-09T00:16:46.047-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="66.3 GiB" memory.required.partial="66.3 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[66.3 GiB]" memory.weights.total="60.6 GiB" memory.weights.repeating="59.8 GiB" memory.weights.nonrepeating="809.3 MiB" memory.graph.full="696.0 MiB" memory.graph.partial="696.0 MiB" projector.weights="1.6 GiB" projector.graph="0 B"
time=2025-05-09T00:16:46.077-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]
[\p{Ll}\p{Lm}\p{Lo}\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]+[\p{Ll}\p{Lm}\p{Lo}\p{M}](?i:'s|'t|'re|'ve|'m|'ll|'d)?|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n/]|\s*[\r\n]+|\s+(?!\S)|\s+"
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=3
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.max_upscaling_size default=448
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.rope.freq_scale default=1
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.no_rope_interval default=4
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.temperature_tuning default=true
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.scale default=0.10000000149011612
time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.floor_scale default=8192
time=2025-05-09T00:16:46.079-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/fede/.ollama/models/blobs/sha256-9d507a36062c2845dd3bb3e93364e9abc1607118acd8650727a700f72fb126e5 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 20 --parallel 2 --port 51324"
time=2025-05-09T00:16:46.080-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-09T00:16:46.080-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-09T00:16:46.081-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-09T00:16:46.088-03:00 level=INFO source=runner.go:851 msg="starting ollama engine"
time=2025-05-09T00:16:46.088-03:00 level=INFO source=runner.go:914 msg="Server listening on 127.0.0.1:51324"
time=2025-05-09T00:16:46.115-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T00:16:46.116-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default=""
time=2025-05-09T00:16:46.116-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default=""
time=2025-05-09T00:16:46.116-03:00 level=INFO source=ggml.go:72 msg="" architecture=llama4 file_type=Q4_K_M name="" description="" num_tensors=1182 num_key_values=45
time=2025-05-09T00:16:46.118-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-09T00:16:46.199-03:00 level=INFO source=ggml.go:298 msg="model weights" buffer=Metal size="62.3 GiB"
time=2025-05-09T00:16:46.199-03:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="554.9 MiB"
time=2025-05-09T00:16:46.332-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Ultra
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = false
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}][\p{Ll}\p{Lm}\p{Lo}\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}]+[\p{Ll}\p{Lm}\p{Lo}\p{M}](?i:'s|'t|'re|'ve|'m|'ll|'d)?|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n/]|\s[\r\n]+|\s+(?!\S)|\s+"
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=3
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.max_upscaling_size default=448
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.rope.freq_scale default=1
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.no_rope_interval default=4
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.temperature_tuning default=true
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.scale default=0.10000000149011612
time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.floor_scale default=8192
time=2025-05-09T00:16:58.495-03:00 level=INFO source=ggml.go:553 msg="compute graph" backend=Metal buffer_type=Metal size="692.0 MiB"
time=2025-05-09T00:16:58.495-03:00 level=INFO source=ggml.go:553 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
time=2025-05-09T00:16:58.613-03:00 level=INFO source=server.go:628 msg="llama runner started in 12.53 seconds"
[GIN] 2025/05/09 - 00:16:58 | 200 | 12.622015292s | 127.0.0.1 | POST "/api/generate"
time=2025-05-09T00:17:30.731-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 00:17:33 | 200 | 2.871635958s | 127.0.0.1 | POST "/api/chat"
2025/05/09 12:04:38 routes.go:1233: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/fede/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-05-09T12:04:38.931-03:00 level=INFO source=images.go:463 msg="total blobs: 49"
time=2025-05-09T12:04:38.933-03:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0"
time=2025-05-09T12:04:38.934-03:00 level=INFO source=routes.go:1300 msg="Listening on 127.0.0.1:11434 (version 0.6.8)"
time=2025-05-09T12:04:39.005-03:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="72.0 GiB" available="72.0 GiB"
[GIN] 2025/05/09 - 12:04:39 | 200 | 174.167µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/05/09 - 12:04:39 | 200 | 1.545708ms | 127.0.0.1 | GET "/api/tags"
time=2025-05-09T12:04:39.023-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.037-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 32.870542ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.051-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.058-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 18.700834ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.071-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.080-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 19.747666ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.096-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.107-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 25.832291ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.130-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.148-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 40.632791ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.167-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.180-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 29.029083ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:04:39.196-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:04:39.206-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:04:39 | 200 | 23.380375ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/05/09 - 12:04:39 | 200 | 18.709µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/05/09 - 12:04:39 | 200 | 973.583µs | 127.0.0.1 | GET "/api/tags"
time=2025-05-09T12:05:43.035-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:05:43.052-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:05:43 | 200 | 52.935083ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/05/09 - 12:07:49 | 200 | 62.042µs | 127.0.0.1 | HEAD "/"
time=2025-05-09T12:07:49.695-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:07:49.713-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
[GIN] 2025/05/09 - 12:07:49 | 200 | 51.754667ms | 127.0.0.1 | POST "/api/show"
time=2025-05-09T12:07:49.732-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:07:49.744-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:07:49.754-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-09T12:07:49.755-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-09T12:07:49.755-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-09T12:07:49.755-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB"
time=2025-05-09T12:07:49.756-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="76.6 GiB" free_swap="0 B"
time=2025-05-09T12:07:49.756-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0
time=2025-05-09T12:07:49.756-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-09T12:07:49.889-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 49582"
time=2025-05-09T12:07:49.890-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1
time=2025-05-09T12:07:49.891-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-05-09T12:07:49.891-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-09T12:07:49.898-03:00 level=INFO source=runner.go:853 msg="starting go runner"
time=2025-05-09T12:07:49.901-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang)
time=2025-05-09T12:07:49.902-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:49582"
llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 7
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q8_0: 562 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 69.82 GiB (8.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 8192
print_info: n_layer = 80
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 28672
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 70B
print_info: model params = 70.55 B
print_info: general.name = DeepSeek R1 Distill Llama 70B
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin▁of▁sentence|>'
print_info: EOS token = 128001 '<|end▁of▁sentence|>'
print_info: EOT token = 128001 '<|end▁of▁sentence|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: PAD token = 128001 '<|end▁of▁sentence|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end▁of▁sentence|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
time=2025-05-09T12:07:50.142-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model"

load_tensors: offloading 80 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 81/81 layers to GPU
load_tensors: CPU_Mapped model buffer size = 1064.62 MiB
load_tensors: Metal_Mapped model buffer size = 71494.30 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Ultra
ggml_metal_init: picking default device: Apple M3 Ultra
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name: Apple M3 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets = false
ggml_metal_init: has bfloat = true
ggml_metal_init: use bfloat = false
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB
ggml_metal_init: skipping kernel_get_rows_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported)
ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported)
ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported)
ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported)
llama_context: CPU output buffer size = 0.52 MiB
init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
init: Metal KV buffer size = 1280.00 MiB
llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB
llama_context: Metal compute buffer size = 584.00 MiB
llama_context: CPU compute buffer size = 24.01 MiB
llama_context: graph nodes = 2726
llama_context: graph splits = 2
time=2025-05-09T12:08:18.542-03:00 level=INFO source=server.go:628 msg="llama runner started in 28.65 seconds"
[GIN] 2025/05/09 - 12:08:18 | 200 | 28.82524325s | 127.0.0.1 | POST "/api/generate"
time=2025-05-09T12:08:22.495-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
ggml_metal_graph_compute: command buffer 0 failed with status 5
error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
graph_compute: ggml_backend_sched_graph_compute_async failed with error -1
llama_decode: failed to decode, ret = -3
panic: failed to decode batch: llama_decode failed with code -3

goroutine 50 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x14000548360, {0x101c63de0, 0x1400041c960})
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 +0xa5c
time=2025-05-09T12:08:29.019-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2"
[GIN] 2025/05/09 - 12:08:29 | 200 | 6.557922833s | 127.0.0.1 | POST "/api/chat"
fede@Federicos-Mac-Studio ~ %

<!-- gh-comment-id:2866926649 --> @fedesantamarina commented on GitHub (May 9, 2025): time=2025-05-08T20:02:22.523-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:02:22.533-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:02:22 | 200 | 20.756875ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:02:22.552-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:02:22.568-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:02:22 | 200 | 33.622ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:02:22.586-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:02:22.597-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:02:22 | 200 | 23.962541ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:02:22.610-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:02:22.620-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:02:22 | 200 | 18.945041ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.499-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.516-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 53.037709ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.541-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.558-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 38.263583ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.571-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.578-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 16.637958ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.589-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.596-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 14.940584ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.610-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.620-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 20.977208ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.640-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.655-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 33.585541ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T20:32:23.672-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.683-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 23.277959ms | 127.0.0.1 | POST "/api/show" 
time=2025-05-08T20:32:23.696-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T20:32:23.705-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 20:32:23 | 200 | 18.569417ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.593-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.611-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 53.534ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.637-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.654-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 39.657084ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.667-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.675-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 16.892ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.686-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.693-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 15.053417ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.707-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.717-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 20.684458ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.737-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.753-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 34.07175ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.771-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.782-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 23.4775ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:02:24.795-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:02:24.804-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:02:24 | 200 | 18.583584ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:25.864-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:25.882-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:25 | 200 | 50.80525ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:25.909-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:25.926-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:25 | 200 | 39.631834ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:25.939-03:00 level=WARN source=ggml.go:152 msg="key not found" 
key=general.alignment default=32 time=2025-05-08T21:32:25.947-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:25 | 200 | 16.782208ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:25.959-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:25.967-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:25 | 200 | 16.402708ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:25.979-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:25.989-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:25 | 200 | 21.207542ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:26.007-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:26.023-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:26 | 200 | 33.6505ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:26.037-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:26.048-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:26 | 200 | 23.737334ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T21:32:26.061-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T21:32:26.070-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 21:32:26 | 200 | 18.83925ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.847-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:26.865-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:26 | 200 | 50.963584ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.890-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:26.906-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:26 | 200 | 38.188709ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.919-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:26.927-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:26 | 200 | 16.737333ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.938-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:26.945-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:26 | 200 | 15.037375ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.958-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:26.968-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:26 | 200 | 21.319542ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:26.988-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:27.004-03:00 level=WARN 
source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:27 | 200 | 33.983583ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:27.022-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:27.033-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:27 | 200 | 23.821541ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:02:27.046-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:02:27.055-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:02:27 | 200 | 18.996125ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.718-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.736-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 50.538291ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.763-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.780-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 40.121417ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.794-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.802-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 17.03225ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.813-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.820-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 15.321958ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.834-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.844-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 21.04425ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.864-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.879-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 33.307209ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.896-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.908-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 23.7945ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:32:27.921-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:32:27.930-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:32:27 | 200 | 18.800792ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:51:31.677-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:51:31.695-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 
time=2025-05-08T22:51:31.708-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:51:31.709-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:51:31.710-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:51:31.711-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB" time=2025-05-08T22:51:31.711-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="45.4 GiB" free_swap="0 B" time=2025-05-08T22:51:31.711-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:51:31.711-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-05-08T22:51:31.846-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 50706" time=2025-05-08T22:51:31.848-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1 time=2025-05-08T22:51:31.848-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding" time=2025-05-08T22:51:31.849-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding" time=2025-05-08T22:51:31.865-03:00 level=INFO source=runner.go:853 msg="starting go runner" time=2025-05-08T22:51:31.868-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) time=2025-05-08T22:51:31.868-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:50706" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 8192 print_info: n_layer = 80 print_info: n_head = 64 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 8 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 28672 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = linear print_info: freq_base_train = 500000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 131072 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 70B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... 
(mmap = true) time=2025-05-08T22:51:32.100-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model" load_tensors: offloading 80 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 81/81 layers to GPU load_tensors: CPU_Mapped model buffer size = 1064.62 MiB load_tensors: Metal_Mapped model buffer size = 71494.30 MiB llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized ggml_metal_init: allocating ggml_metal_init: found device: Apple M3 Ultra ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_load_library: using embedded metal library ggml_metal_init: GPU name: Apple M3 Ultra ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. = true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) llama_context: CPU output buffer size = 0.52 MiB init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 
80, can_shift = 1 init: Metal KV buffer size = 1280.00 MiB llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB llama_context: Metal compute buffer size = 584.00 MiB llama_context: CPU compute buffer size = 24.01 MiB llama_context: graph nodes = 2726 llama_context: graph splits = 2 time=2025-05-08T22:51:46.425-03:00 level=INFO source=server.go:628 msg="llama runner started in 14.58 seconds" ggml_metal_graph_compute: command buffer 0 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) graph_compute: ggml_backend_sched_graph_compute_async failed with error -1 llama_decode: failed to decode, ret = -3 panic: failed to decode batch: llama_decode failed with code -3 goroutine 15 [running]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x14000312360, {0x1018bfde0, 0x140001785a0}) /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 +0xa5c time=2025-05-08T22:51:56.012-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2" [GIN] 2025/05/08 - 22:51:56 | 200 | 24.370679708s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/05/08 - 22:52:09 | 200 | 71.75µs | 127.0.0.1 | HEAD "/" time=2025-05-08T22:52:09.236-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:52:09.254-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 22:52:09 | 200 | 63.119792ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T22:52:09.271-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:52:09.315-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:52:09.323-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T22:52:09.324-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:52:09.324-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:52:09.325-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB" time=2025-05-08T22:52:09.325-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="29.0 GiB" free_swap="0 B" time=2025-05-08T22:52:09.325-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-08T22:52:09.325-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from 
/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-05-08T22:52:09.455-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 50744" time=2025-05-08T22:52:09.458-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1 time=2025-05-08T22:52:09.458-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding" time=2025-05-08T22:52:09.458-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding" time=2025-05-08T22:52:09.466-03:00 level=INFO source=runner.go:853 msg="starting go runner" time=2025-05-08T22:52:09.468-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) time=2025-05-08T22:52:09.468-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:50744" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 8192 print_info: n_layer = 80 print_info: n_head = 64 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 8 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 28672 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = linear print_info: freq_base_train = 500000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 131072 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 70B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... 
(mmap = true) time=2025-05-08T22:52:09.710-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model" load_tensors: offloading 80 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 81/81 layers to GPU load_tensors: CPU_Mapped model buffer size = 1064.62 MiB load_tensors: Metal_Mapped model buffer size = 71494.30 MiB llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized ggml_metal_init: allocating ggml_metal_init: found device: Apple M3 Ultra ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_load_library: using embedded metal library ggml_metal_init: GPU name: Apple M3 Ultra ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. = true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) llama_context: CPU output buffer size = 0.52 MiB init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 
80, can_shift = 1 init: Metal KV buffer size = 1280.00 MiB llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB llama_context: Metal compute buffer size = 584.00 MiB llama_context: CPU compute buffer size = 24.01 MiB llama_context: graph nodes = 2726 llama_context: graph splits = 2 time=2025-05-08T22:52:12.224-03:00 level=INFO source=server.go:628 msg="llama runner started in 2.77 seconds" [GIN] 2025/05/08 - 22:52:12 | 200 | 2.96530075s | 127.0.0.1 | POST "/api/generate" time=2025-05-08T22:53:21.959-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 ggml_metal_graph_compute: command buffer 0 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) graph_compute: ggml_backend_sched_graph_compute_async failed with error -1 llama_decode: failed to decode, ret = -3 panic: failed to decode batch: llama_decode failed with code -3 goroutine 51 [running]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x140004ce360, {0x10147fde0, 0x1400061cf50}) /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 + 0xa5c time=2025-05-08T22:53:22.586-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2" [GIN] 2025/05/08 - 22:53:22 | 200 | 658.296083ms | 127.0.0.1 | POST "/api/chat" time=2025-05-08T23:02:28.785-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.802-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 53.565666ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.830-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.846-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 39.568209ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.862-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.869-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 18.360375ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.883-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.889-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 16.8825ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.905-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.915-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 22.970333ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.938-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.954-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 36.462ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:28.975-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:28.986-03:00 level=WARN 
source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:28 | 200 | 26.994667ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:02:29.001-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:02:29.010-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:02:29 | 200 | 21.149ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.291-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.307-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 49.563958ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.331-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.348-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 39.536583ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.360-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.367-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 16.839ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.376-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.384-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 15.18525ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.395-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.405-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 20.930625ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.423-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.439-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 33.590708ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.454-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.465-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 23.624917ms | 127.0.0.1 | POST "/api/show" time=2025-05-08T23:18:35.478-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-08T23:18:35.487-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/08 - 23:18:35 | 200 | 19.124875ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/05/08 - 23:51:34 | 200 | 62.25µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/05/09 - 00:16:25 | 200 | 62.208µs | 127.0.0.1 | HEAD "/" [GIN] 2025/05/09 - 00:16:25 | 200 | 10.725375ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/05/09 - 00:16:45 | 200 | 110.583µs | 127.0.0.1 | HEAD "/" time=2025-05-09T00:16:45.956-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:45.982-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 00:16:45 | 200 | 76.717625ms 
| 127.0.0.1 | POST "/api/show" time=2025-05-09T00:16:46.010-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:46.027-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:46.043-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:46.044-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=0 time=2025-05-09T00:16:46.046-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-9d507a36062c2845dd3bb3e93364e9abc1607118acd8650727a700f72fb126e5 gpu=0 parallel=2 available=77309411328 required="66.3 GiB" time=2025-05-09T00:16:46.046-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="75.4 GiB" free_swap="0 B" time=2025-05-09T00:16:46.046-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=0 time=2025-05-09T00:16:46.047-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="66.3 GiB" memory.required.partial="66.3 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[66.3 GiB]" memory.weights.total="60.6 GiB" memory.weights.repeating="59.8 GiB" memory.weights.nonrepeating="809.3 MiB" memory.graph.full="696.0 MiB" memory.graph.partial="696.0 MiB" projector.weights="1.6 GiB" projector.graph="0 B" time=2025-05-09T00:16:46.077-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=3 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.max_upscaling_size default=448 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.rope.freq_scale default=1 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.no_rope_interval default=4 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.temperature_tuning default=true time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.scale default=0.10000000149011612 time=2025-05-09T00:16:46.078-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.floor_scale default=8192 time=2025-05-09T00:16:46.079-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --ollama-engine --model /Users/fede/.ollama/models/blobs/sha256-9d507a36062c2845dd3bb3e93364e9abc1607118acd8650727a700f72fb126e5 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 20 --parallel 2 --port 51324" time=2025-05-09T00:16:46.080-03:00 level=INFO 
source=sched.go:452 msg="loaded runners" count=1 time=2025-05-09T00:16:46.080-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding" time=2025-05-09T00:16:46.081-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding" time=2025-05-09T00:16:46.088-03:00 level=INFO source=runner.go:851 msg="starting ollama engine" time=2025-05-09T00:16:46.088-03:00 level=INFO source=runner.go:914 msg="Server listening on 127.0.0.1:51324" time=2025-05-09T00:16:46.115-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T00:16:46.116-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.name default="" time=2025-05-09T00:16:46.116-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.description default="" time=2025-05-09T00:16:46.116-03:00 level=INFO source=ggml.go:72 msg="" architecture=llama4 file_type=Q4_K_M name="" description="" num_tensors=1182 num_key_values=45 time=2025-05-09T00:16:46.118-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) time=2025-05-09T00:16:46.199-03:00 level=INFO source=ggml.go:298 msg="model weights" buffer=Metal size="62.3 GiB" time=2025-05-09T00:16:46.199-03:00 level=INFO source=ggml.go:298 msg="model weights" buffer=CPU size="554.9 MiB" time=2025-05-09T00:16:46.332-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model" ggml_metal_init: allocating ggml_metal_init: found device: Apple M3 Ultra ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_load_library: using embedded metal library ggml_metal_init: GPU name: Apple M3 Ultra ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. 
= true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=tokenizer.ggml.pretokenizer default="[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]*[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?|[^\\r\\n\\p{L}\\p{N}]?[\\p{Lu}\\p{Lt}\\p{Lm}\\p{Lo}\\p{M}]+[\\p{Ll}\\p{Lm}\\p{Lo}\\p{M}]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n/]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+" time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.num_channels default=3 time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.vision.max_upscaling_size default=448 time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.rope.freq_scale default=1 time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.no_rope_interval default=4 time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.temperature_tuning default=true time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.scale default=0.10000000149011612 time=2025-05-09T00:16:58.364-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.floor_scale default=8192 time=2025-05-09T00:16:58.495-03:00 level=INFO source=ggml.go:553 msg="compute graph" backend=Metal 
buffer_type=Metal size="692.0 MiB" time=2025-05-09T00:16:58.495-03:00 level=INFO source=ggml.go:553 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB" time=2025-05-09T00:16:58.613-03:00 level=INFO source=server.go:628 msg="llama runner started in 12.53 seconds" [GIN] 2025/05/09 - 00:16:58 | 200 | 12.622015292s | 127.0.0.1 | POST "/api/generate" time=2025-05-09T00:17:30.731-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 00:17:33 | 200 | 2.871635958s | 127.0.0.1 | POST "/api/chat" 2025/05/09 12:04:38 routes.go:1233: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/fede/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]" time=2025-05-09T12:04:38.931-03:00 level=INFO source=images.go:463 msg="total blobs: 49" time=2025-05-09T12:04:38.933-03:00 level=INFO source=images.go:470 msg="total unused blobs removed: 0" time=2025-05-09T12:04:38.934-03:00 level=INFO source=routes.go:1300 msg="Listening on 127.0.0.1:11434 (version 0.6.8)" time=2025-05-09T12:04:39.005-03:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="72.0 GiB" available="72.0 GiB" [GIN] 2025/05/09 - 12:04:39 | 200 | 174.167µs | 127.0.0.1 | HEAD "/" [GIN] 2025/05/09 - 12:04:39 | 200 | 1.545708ms | 127.0.0.1 | GET "/api/tags" time=2025-05-09T12:04:39.023-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.037-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 32.870542ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.051-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.058-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 18.700834ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.071-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.080-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 19.747666ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.096-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.107-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 25.832291ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.130-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.148-03:00 level=WARN source=ggml.go:152 msg="key not found" 
key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 40.632791ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.167-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.180-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 29.029083ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:04:39.196-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:04:39.206-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:04:39 | 200 | 23.380375ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/05/09 - 12:04:39 | 200 | 18.709µs | 127.0.0.1 | HEAD "/" [GIN] 2025/05/09 - 12:04:39 | 200 | 973.583µs | 127.0.0.1 | GET "/api/tags" time=2025-05-09T12:05:43.035-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:05:43.052-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:05:43 | 200 | 52.935083ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/05/09 - 12:07:49 | 200 | 62.042µs | 127.0.0.1 | HEAD "/" time=2025-05-09T12:07:49.695-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:07:49.713-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 [GIN] 2025/05/09 - 12:07:49 | 200 | 51.754667ms | 127.0.0.1 | POST "/api/show" time=2025-05-09T12:07:49.732-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:07:49.744-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:07:49.754-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 time=2025-05-09T12:07:49.755-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-09T12:07:49.755-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-09T12:07:49.755-03:00 level=INFO source=sched.go:754 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 gpu=0 parallel=1 available=77309411328 required="72.0 GiB" time=2025-05-09T12:07:49.756-03:00 level=INFO source=server.go:106 msg="system memory" total="96.0 GiB" free="76.6 GiB" free_swap="0 B" time=2025-05-09T12:07:49.756-03:00 level=WARN source=ggml.go:152 msg="key not found" key=llama.vision.block_count default=0 time=2025-05-09T12:07:49.756-03:00 level=INFO source=server.go:139 msg=offload library=metal layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[72.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="72.0 GiB" memory.required.partial="72.0 GiB" memory.required.kv="1.2 GiB" memory.required.allocations="[72.0 GiB]" memory.weights.total="68.8 GiB" memory.weights.repeating="67.7 GiB" memory.weights.nonrepeating="1.0 GiB" memory.graph.full="584.0 MiB" memory.graph.partial="584.0 MiB" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version 
GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 1 print_info: model type = ?B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 llama_model_load: vocab only - skipping tensors time=2025-05-09T12:07:49.889-03:00 level=INFO source=server.go:410 msg="starting llama server" cmd="/Applications/Ollama.app/Contents/Resources/ollama runner --model /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 --ctx-size 4096 --batch-size 512 --n-gpu-layers 81 --threads 20 --parallel 1 --port 49582" time=2025-05-09T12:07:49.890-03:00 level=INFO source=sched.go:452 msg="loaded runners" count=1 time=2025-05-09T12:07:49.891-03:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding" time=2025-05-09T12:07:49.891-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding" time=2025-05-09T12:07:49.898-03:00 level=INFO source=runner.go:853 msg="starting go runner" time=2025-05-09T12:07:49.901-03:00 level=INFO source=ggml.go:103 msg=system Metal.0.EMBED_LIBRARY=1 CPU.0.ARM_FMA=1 CPU.0.FP16_VA=1 CPU.0.DOTPROD=1 CPU.0.LLAMAFILE=1 CPU.0.ACCELERATE=1 compiler=cgo(clang) time=2025-05-09T12:07:49.902-03:00 level=INFO source=runner.go:913 msg="Server listening on 127.0.0.1:49582" llama_model_load_from_file_impl: using device Metal (Apple M3 Ultra) - 73727 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /Users/fede/.ollama/models/blobs/sha256-feef62aa06ab4162ebd3b9af4ff8383a37bf9544a7d30a3fe4623c8398bd1a28 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 7 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q8_0: 562 tensors print_info: file format = GGUF V3 (latest) print_info: file type = Q8_0 print_info: file size = 69.82 GiB (8.50 BPW) load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect load: special tokens cache size = 256 load: token to piece cache size = 0.7999 MB print_info: arch = llama print_info: vocab_only = 0 print_info: n_ctx_train = 131072 print_info: n_embd = 8192 print_info: n_layer = 80 print_info: n_head = 64 print_info: n_head_kv = 8 print_info: n_rot = 128 print_info: n_swa = 0 print_info: n_swa_pattern = 1 print_info: n_embd_head_k = 128 print_info: n_embd_head_v = 128 print_info: n_gqa = 8 print_info: n_embd_k_gqa = 1024 print_info: n_embd_v_gqa = 1024 print_info: f_norm_eps = 0.0e+00 print_info: f_norm_rms_eps = 1.0e-05 print_info: f_clamp_kqv = 0.0e+00 print_info: f_max_alibi_bias = 0.0e+00 print_info: f_logit_scale = 0.0e+00 print_info: f_attn_scale = 0.0e+00 print_info: n_ff = 28672 print_info: n_expert = 0 print_info: n_expert_used = 0 print_info: causal attn = 1 print_info: pooling type = 0 print_info: rope type = 0 print_info: rope scaling = linear print_info: freq_base_train = 500000.0 print_info: freq_scale_train = 1 print_info: n_ctx_orig_yarn = 131072 print_info: rope_finetuned = unknown print_info: ssm_d_conv = 0 print_info: ssm_d_inner = 0 print_info: ssm_d_state = 0 print_info: ssm_dt_rank = 0 print_info: ssm_dt_b_c_rms = 0 print_info: model type = 70B print_info: model params = 70.55 B print_info: general.name = DeepSeek R1 Distill Llama 70B print_info: vocab type = BPE print_info: n_vocab = 128256 print_info: n_merges = 280147 print_info: BOS token = 128000 '<|begin▁of▁sentence|>' print_info: EOS token = 128001 '<|end▁of▁sentence|>' print_info: EOT token = 128001 '<|end▁of▁sentence|>' print_info: EOM token = 128008 '<|eom_id|>' print_info: PAD token = 128001 '<|end▁of▁sentence|>' print_info: LF token = 198 'Ċ' print_info: EOG token = 128001 '<|end▁of▁sentence|>' print_info: EOG token = 128008 '<|eom_id|>' print_info: EOG token = 128009 '<|eot_id|>' print_info: max token length = 256 load_tensors: loading model tensors, this can take a while... 
(mmap = true) time=2025-05-09T12:07:50.142-03:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server loading model" load_tensors: offloading 80 repeating layers to GPU load_tensors: offloading output layer to GPU load_tensors: offloaded 81/81 layers to GPU load_tensors: CPU_Mapped model buffer size = 1064.62 MiB load_tensors: Metal_Mapped model buffer size = 71494.30 MiB llama_context: constructing llama_context llama_context: n_seq_max = 1 llama_context: n_ctx = 4096 llama_context: n_ctx_per_seq = 4096 llama_context: n_batch = 512 llama_context: n_ubatch = 512 llama_context: causal_attn = 1 llama_context: flash_attn = 0 llama_context: freq_base = 500000.0 llama_context: freq_scale = 1 llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized ggml_metal_init: allocating ggml_metal_init: found device: Apple M3 Ultra ggml_metal_init: picking default device: Apple M3 Ultra ggml_metal_load_library: using embedded metal library ggml_metal_init: GPU name: Apple M3 Ultra ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction = true ggml_metal_init: simdgroup matrix mul. = true ggml_metal_init: has residency sets = false ggml_metal_init: has bfloat = true ggml_metal_init: use bfloat = false ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 77309.41 MB ggml_metal_init: skipping kernel_get_rows_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_1row (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_f32_l4 (not supported) ggml_metal_init: skipping kernel_mul_mv_bf16_bf16 (not supported) ggml_metal_init: skipping kernel_mul_mv_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_bf16_f32 (not supported) ggml_metal_init: skipping kernel_mul_mm_id_bf16_f32 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h64 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h80 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h112 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h96 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h192 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk192_hv128 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_h256 (not supported) ggml_metal_init: skipping kernel_flash_attn_ext_vec_bf16_hk576_hv512 (not supported) ggml_metal_init: skipping kernel_cpy_f32_bf16 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_f32 (not supported) ggml_metal_init: skipping kernel_cpy_bf16_bf16 (not supported) llama_context: CPU output buffer size = 0.52 MiB init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 
80, can_shift = 1 init: Metal KV buffer size = 1280.00 MiB llama_context: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB llama_context: Metal compute buffer size = 584.00 MiB llama_context: CPU compute buffer size = 24.01 MiB llama_context: graph nodes = 2726 llama_context: graph splits = 2 time=2025-05-09T12:08:18.542-03:00 level=INFO source=server.go:628 msg="llama runner started in 28.65 seconds" [GIN] 2025/05/09 - 12:08:18 | 200 | 28.82524325s | 127.0.0.1 | POST "/api/generate" time=2025-05-09T12:08:22.495-03:00 level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32 ggml_metal_graph_compute: command buffer 0 failed with status 5 error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory) graph_compute: ggml_backend_sched_graph_compute_async failed with error -1 llama_decode: failed to decode, ret = -3 panic: failed to decode batch: llama_decode failed with code -3 goroutine 50 [running]: github.com/ollama/ollama/runner/llamarunner.(*Server).run(0x14000548360, {0x101c63de0, 0x1400041c960}) /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:346 +0x1d0 created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1 /Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:894 +0xa5c time=2025-05-09T12:08:29.019-03:00 level=ERROR source=server.go:455 msg="llama runner terminated" error="exit status 2" [GIN] 2025/05/09 - 12:08:29 | 200 | 6.557922833s | 127.0.0.1 | POST "/api/chat" fede@Federicos-Mac-Studio ~ %
Author
Owner

@galoisgroupcn commented on GitHub (May 10, 2025):

@fedesantamarina @rick-github
The error you’re encountering with q8-quantized Llama 3.3 70B models on your Mac (Apple Silicon, 96 GB RAM) is most likely a memory limitation rather than a problem with the model files themselves:

  1. q8 models require more memory and may not be fully supported or stable on Apple Silicon, even with high RAM.
  2. Your Ollama version (0.6.8) is out of date. Newer releases may improve support and stability for large/q8 models.
  3. The error (EOF) means the server process crashed, usually from running out of memory or hitting an unsupported configuration; the log above shows exactly that failure (see the grep sketch below).
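
For reference, the client-side EOF corresponds to the runner panic in the server log. A minimal way to pull out the relevant lines, assuming the default macOS log location from Ollama's troubleshooting docs (adjust the path if yours differs):

```shell
# Show the Metal out-of-memory error and the runner crash from the server log
grep -E "Insufficient Memory|llama runner terminated|failed to decode" ~/.ollama/logs/server.log
```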

How to fix:

  1. Upgrade Ollama to the latest version: https://ollama.com/download
  2. Use q6 or q4 quantization for 70B models on Apple Silicon; these are much more reliable (a sketch of the commands follows this list).
  3. If you must use q8, try a smaller model (13B or 34B), or run on a Linux machine with much higher system/VRAM.
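
A minimal sketch of those steps, assuming the q6_K tag below exists for the model you want (verify the exact tag on the Ollama library page) and that the `iogpu.wired_limit_mb` sysctl is available on your macOS release:

```shell
# 1. After reinstalling from https://ollama.com/download, confirm the new version
ollama -v

# 2. Pull and run a lower-quantization build of the 70B model
#    (tag name is an assumption -- verify it against the library page)
ollama pull llama3.3:70b-instruct-q6_K
ollama run llama3.3:70b-instruct-q6_K

# 3. Optionally inspect the unified-memory ceiling the GPU may wire down;
#    the Metal log above reports recommendedMaxWorkingSetSize = 77309.41 MB (~72 GiB),
#    which the q8_0 weights plus KV cache exceed at runtime
sysctl iogpu.wired_limit_mb
```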

Summary:

  1. Upgrade Ollama.
  2. For 70B models on Mac, stick to q6 or lower; q8 is not recommended and may not work.
Reference: github-starred/ollama#6991