[GH-ISSUE #8564] Error: server metal not listed in available servers map #5530

Closed
opened 2026-04-12 16:46:46 -05:00 by GiteaMirror · 5 comments

Originally created by @felix021 on GitHub (Jan 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8564

What is the issue?

I downloaded Ollama today on my MacBook (Apple M3 Pro, macOS Sonoma 14.3, build 23D56) and tried to run deepseek-r1:8b, but Ollama failed with this error:

```
$ ollama run deepseek-r1:8b
Error: [0] server metal not listed in available servers map[]
```

P.S. I can run this model with `llama-cli` on the same device.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-12 16:46:46 -05:00

@rick-github commented on GitHub (Jan 24, 2025):

This error is annotated with `// Shouldn't happen` in the source code. What's in the [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?

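For context, the lookup that fails here has roughly the following shape. This is a hedged sketch reconstructed from the error text, not the verbatim ollama source: a list of preferred runner variants for the detected GPU (e.g. "metal") is checked against a map of runner variants that were actually discovered on disk, and an empty map produces exactly the reported message.

```go
package main

import "fmt"

// pickServer mimics the shape of the failing selection step: servers holds
// the preferred runner variants for the detected GPU, and availableServers
// maps variant names to the directories their binaries were found in.
func pickServer(servers []string, availableServers map[string]string) error {
	for i, s := range servers {
		if availableServers[s] == "" {
			// Shouldn't happen (the annotation rick-github refers to).
			return fmt.Errorf("[%d] server %s not listed in available servers %v",
				i, s, availableServers)
		}
		// ...a real implementation would launch the runner binary here.
	}
	return nil
}

func main() {
	// With no runners discovered, the map is empty, reproducing the report:
	fmt.Println(pickServer([]string{"metal"}, map[string]string{}))
}
```

Note that `map[]` in the error is Go's rendering of an empty map, i.e. runner discovery found nothing at all; the real problem is upstream of anything Metal-specific.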

@felix021 commented on GitHub (Jan 26, 2025):

> This error is annotated with `// Shouldn't happen` in the source code. What's in the [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?

Found in `.ollama/logs`:

```
time=2025-01-26T10:24:35.514+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/user/.ollama/models/blobs/sha256-6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be gpu=0 parallel=4 available=28991029248 required="6.5 GiB"
time=2025-01-26T10:24:35.514+08:00 level=INFO source=server.go:104 msg="system memory" total="36.0 GiB" free="12.5 GiB" free_swap="0 B"
time=2025-01-26T10:24:35.515+08:00 level=INFO source=memory.go:356 msg="offload to metal" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[27.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="560.0 MiB"
time=2025-01-26T10:24:35.515+08:00 level=ERROR source=server.go:279 msg="server list inconsistent" error="[0] server metal not listed in available servers map[]"
time=2025-01-26T10:24:35.515+08:00 level=ERROR source=server.go:429 msg="unable to load any llama server" error="[0] server metal not listed in available servers map[]"
time=2025-01-26T10:24:35.515+08:00 level=INFO source=sched.go:428 msg="NewLlamaServer failed" model=/Users/user/.ollama/models/blobs/sha256-6340dc3229b0d08ea9cc49b75d4098702983e17b4c096d57afbbf2ffc813f2be error="[0] server metal not listed in available servers map[]"
```


@glud123 commented on GitHub (Feb 2, 2025):

You can try the following steps:

  1. Exit Ollama.
  2. Run the Ollama service as an administrator with `sudo ollama serve`.
  3. After successfully starting the Ollama service, run the required model: `ollama run deepseek-r1:8b`.
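Why `sudo` can help here: the logs above show an empty servers map, which is consistent with runner discovery failing, for example because the directory the runner binaries are unpacked into is not readable by the current user. A minimal probe sketch, assuming a hypothetical location (the real path ollama uses is not assumed here):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical directory for illustration only, not ollama's real layout.
	dir := filepath.Join(os.TempDir(), "ollama", "runners")
	entries, err := os.ReadDir(dir)
	if err != nil {
		// A permission error here would leave the available-servers map empty.
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // discovered runner variants, e.g. "metal"
	}
}
```

Worth noting: running the service once as root can leave root-owned files behind, so fixing ownership of the affected directory is generally preferable to relying on `sudo` permanently.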

@felix021 commented on GitHub (Feb 5, 2025):

> You can try the following steps:
>
>   1. Exit Ollama.
>   2. Run the Ollama service as an administrator with `sudo ollama serve`.
>   3. After successfully starting the Ollama service, run the required model: `ollama run deepseek-r1:8b`.

Cool, that worked, thx!


@felix021 commented on GitHub (Feb 5, 2025):

Btw, would you please add that to the error message?

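A hedged sketch of what that could look like; the helper name and wording are hypothetical, not the actual ollama code. The idea is to attach the known workaround to the error whenever the discovered-servers map comes back empty:

```go
package main

import "fmt"

// describeServerError is a hypothetical helper: when no runner servers were
// discovered at all, append the workaround that resolved this issue to the
// error before surfacing it to the user.
func describeServerError(err error, available map[string]string) error {
	if len(available) == 0 {
		return fmt.Errorf("%w; no runner servers were discovered (possible "+
			"permissions issue): try restarting the service, e.g. "+
			"`sudo ollama serve`, then run the model again", err)
	}
	return err
}

func main() {
	base := fmt.Errorf("[0] server metal not listed in available servers map[]")
	fmt.Println(describeServerError(base, map[string]string{}))
}
```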