[GH-ISSUE #719] Question -> Request: Mac acceleration for https://hub.docker.com/r/ollama/ollama #62369

Closed
opened 2026-05-03 08:29:07 -05:00 by GiteaMirror · 14 comments

Originally created by @jamesbraza on GitHub (Oct 6, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/719

Ollama continues to be one of the most user-friendly local model serving libraries out there.
https://hub.docker.com/r/ollama/ollama has great instructions for attaining GPU optimizations.

I am wondering, is there a similar optimization attainable for Mac Metal?

From reading around, it _seems_ there isn't, but I thought it was at least worth the ask.

@AdirthaBorgohain commented on GitHub (Oct 6, 2023):

I think when you set `num_gpu` to 1, it will automatically use the Mac's Metal acceleration. Performance is pretty good on Macs.

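For reference, `num_gpu` can also be set per-request via the REST API's `options` field instead of a Modelfile. A minimal sketch, assuming a server on the default `localhost:11434`; the endpoint and `options` shape follow the Ollama API docs, but treat the exact payload as illustrative:

```bash
# Hedged sketch: request GPU offload for this generation only.
# On macOS builds, num_gpu=1 is what flips llama.cpp onto the Metal backend.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 1 }
}'
```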

@SabareeshGC commented on GitHub (Oct 6, 2023):

Does it support Apple silicon GPUs?

@AdirthaBorgohain commented on GitHub (Oct 6, 2023):

> Does it support Apple silicon GPUs?

Yes, it does. You can refer to `num_gpu` [here](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#parameter:~:text=The%20number%20of%20layers%20to%20send%20to%20the%20GPU(s).%20On%20macOS%20it%20defaults%20to%201%20to%20enable%20metal%20support%2C%200%20to%20disable.) for details.

@SabareeshGC commented on GitHub (Oct 6, 2023):

It seems to be using just the CPU when running directly from Docker; I see no GPU usage when running llama2:

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2
```
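
As an aside, one way to check for actual GPU activity on an Apple silicon Mac is Activity Monitor's GPU history, or sampling from the terminal. A sketch using the built-in `powermetrics` tool (requires sudo; flag names are per current macOS, so verify locally):

```bash
# Sample Apple silicon GPU power/utilization once per second while the model runs
sudo powermetrics --samplers gpu_power -i 1000
```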

@jamesbraza commented on GitHub (Oct 6, 2023):

@SabareeshGC I think your command doesn't have `num_gpu=1` in it anywhere. To set `num_gpu`, you have to make your own `Modelfile`, as suggested here: https://github.com/jmorganca/ollama/issues/618#issuecomment-1737547046

Running these (my Mac uses `~/.ollama`):

```bash
> docker run --rm --detach --volume ~/.ollama:/root/.ollama --publish 11434:11434 --name ollama ollama/ollama
> docker exec -it ollama ollama show --modelfile llama2:13b
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama2:13b

FROM /root/.ollama/models/blobs/sha256:f79142715bc9539a2edbb4b253548db8b34fac22736593eeaa28555874476e30
TEMPLATE """[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>

{{ end }}{{ .Prompt }} [/INST] """
SYSTEM """"""
PARAMETER stop [INST]
PARAMETER stop [/INST]
PARAMETER stop <<SYS>>
PARAMETER stop <</SYS>>
```

We can see the default `llama2:13b` doesn't have `num_gpu`. So to use `num_gpu`, it's actually required to make your own `Modelfile`.

@jamesbraza commented on GitHub (Oct 6, 2023):

Trying to use `docker run --gpus=all` on my Mac:

```bash
> docker run --gpus=all --rm --volume ~/.ollama:/root/.ollama --publish 11434:11434 --name ollama ollama/ollama
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
```

So I think on Mac, there is no `--gpus=all` equivalent for `docker run`.


I think in conclusion:

1. Run the `docker run` command as if using CPU (no `--gpus=all`)
2. Use a `Modelfile` with `PARAMETER num_gpu 1` set inside (see the combined sketch below)
   - The built-in Ollama models don't have this configured automatically

If you guys agree, I will make a docs PR for this
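
A combined sketch of those two steps, for concreteness. The model name `llama2-metal` and the `/tmp/Modelfile` path are made up for this example, and, as the rest of this thread shows, Metal still isn't reachable inside the container, so this only exercises the CPU path:

```bash
# Step 1: start the container as if CPU-only (no --gpus flag on macOS)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Step 2: build a derived model whose Modelfile sets num_gpu, then run it
docker exec -it ollama sh -c 'printf "FROM llama2\nPARAMETER num_gpu 1\n" > /tmp/Modelfile'
docker exec -it ollama ollama create llama2-metal -f /tmp/Modelfile
docker exec -it ollama ollama run llama2-metal
```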

@SabareeshGC commented on GitHub (Oct 6, 2023):

Let me test this out; I am fairly new to this.

@SabareeshGC commented on GitHub (Oct 6, 2023):

I created the following model, but unfortunately I still don't see any GPU usage at all. At this point I am not even sure Docker Desktop supports GPU passthrough; I am also using the virtualization framework, which should provide access to the GPU. I am using an M1 MacBook Pro.

```
FROM mistral
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
PARAMETER num_gpu 1
```

@jamesbraza commented on GitHub (Oct 6, 2023):

Fwiw @SabareeshGC, M1 chips aren't considered "GPU"s, so I wouldn't expect anything printed about GPUs. If hardware acceleration is present, I would expect "Metal" or "MPS" to be printed somewhere.

Also, the printouts wouldn't come from `ollama serve` itself; they would appear when a model is loaded elsewhere.

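A quick way to watch for that from the shell, assuming the server's logs go to stdout/stderr as shown elsewhere in this thread:

```bash
# Surface Metal/MPS initialization lines as models load;
# ggml_metal_init is what llama.cpp prints when the Metal backend comes up
./ollama serve 2>&1 | grep -i --line-buffered -E 'metal|mps'
```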

@jamesbraza commented on GitHub (Oct 6, 2023):

Okay, running `ollama create llama2:james` with the following `Modelfile`:

```modelfile
FROM llama2:13b
PARAMETER num_gpu 1
```

A subset of the Ollama server logs (NOTE: not running through `docker`):

```none
> ./ollama serve
...
2023/10/06 16:33:17 images.go:317: [model] - llama2:13b
2023/10/06 16:33:22 images.go:317: [num_gpu] - 1
[GIN] 2023/10/06 - 16:33:22 | 200 |  4.835199917s |       127.0.0.1 | POST     "/api/create"
```

Next, `ollama run llama2:james`:

```none
2023/10/06 16:33:43 llama.go:313: starting llama runner
2023/10/06 16:33:43 llama.go:349: waiting for llama runner to start responding
{"timestamp":1696624423,"level":"INFO","function":"main","line":1191,"message":"build info","build":1,"commit":"9e232f0"}
{"timestamp":1696624423,"level":"INFO","function":"main","line":1196,"message":"system info","n_threads":8,"total_threads":10,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | "}
llama.cpp: loading model from /Users/user/.ollama/models/blobs/sha256:abc123
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_head_kv  = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =    0.11 MB
llama_model_load_internal: mem required  = 6983.72 MB (+ 1600.00 MB per state)
llama_new_context_with_model: kv self size  = 1600.00 MB
ggml_metal_init: allocating
ggml_metal_init: loading '/var/folders/78/lm6p91s90fx99cshsxqz_19w0000gn/T/ollama596427202/llama.cpp/ggml/build/metal/bin/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                            0x1537097c0
ggml_metal_init: loaded kernel_add_row                        0x15370a010
ggml_metal_init: loaded kernel_mul                            0x15370a550
ggml_metal_init: loaded kernel_mul_row                        0x15370aba0
ggml_metal_init: loaded kernel_scale                          0x15370b0e0
ggml_metal_init: loaded kernel_silu                           0x15370b620
ggml_metal_init: loaded kernel_relu                           0x15370bb60
ggml_metal_init: loaded kernel_gelu                           0x15370c0a0
ggml_metal_init: loaded kernel_soft_max                       0x15370c770
ggml_metal_init: loaded kernel_diag_mask_inf                  0x15370cdf0
ggml_metal_init: loaded kernel_get_rows_f16                   0x15370d4c0
ggml_metal_init: loaded kernel_get_rows_q4_0                  0x15370dd00
ggml_metal_init: loaded kernel_get_rows_q4_1                  0x15370e3d0
ggml_metal_init: loaded kernel_get_rows_q2_K                  0x15370eaa0
ggml_metal_init: loaded kernel_get_rows_q3_K                  0x15370f170
ggml_metal_init: loaded kernel_get_rows_q4_K                  0x15370f840
ggml_metal_init: loaded kernel_get_rows_q5_K                  0x15370ff10
ggml_metal_init: loaded kernel_get_rows_q6_K                  0x1537105e0
ggml_metal_init: loaded kernel_rms_norm                       0x153710cc0
ggml_metal_init: loaded kernel_norm                           0x153711500
ggml_metal_init: loaded kernel_mul_mat_f16_f32                0x102b04530
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32               0x102b04dd0
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32               0x102b05550
ggml_metal_init: loaded kernel_mul_mat_q2_K_f32               0x153711ce0
ggml_metal_init: loaded kernel_mul_mat_q3_K_f32               0x153712580
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32               0x153712d00
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32               0x153713480
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32               0x153713e00
ggml_metal_init: loaded kernel_mul_mm_f16_f32                 0x153714820
ggml_metal_init: loaded kernel_mul_mm_q4_0_f32                0x153714fe0
ggml_metal_init: loaded kernel_mul_mm_q4_1_f32                0x102b05bf0
ggml_metal_init: loaded kernel_mul_mm_q2_K_f32                0x153715680
ggml_metal_init: loaded kernel_mul_mm_q3_K_f32                0x153715d20
ggml_metal_init: loaded kernel_mul_mm_q4_K_f32                0x153605150
ggml_metal_init: loaded kernel_mul_mm_q5_K_f32                0x153606d70
ggml_metal_init: loaded kernel_mul_mm_q6_K_f32                0x153607530
ggml_metal_init: loaded kernel_rope                           0x153716140
ggml_metal_init: loaded kernel_alibi_f32                      0x153716b40
ggml_metal_init: loaded kernel_cpy_f32_f16                    0x1537173f0
ggml_metal_init: loaded kernel_cpy_f32_f32                    0x153717ca0
ggml_metal_init: loaded kernel_cpy_f16_f16                    0x153718550
ggml_metal_init: recommendedMaxWorkingSetSize = 10922.67 MB
ggml_metal_init: hasUnifiedMemory             = true
ggml_metal_init: maxTransferRate              = built-in GPU
llama_new_context_with_model: compute buffer total size =  211.35 MB
llama_new_context_with_model: max tensor size =    87.89 MB
ggml_metal_add_buffer: allocated 'data            ' buffer, size =  6984.06 MB, ( 6984.50 / 10922.67)
ggml_metal_add_buffer: allocated 'eval            ' buffer, size =     1.36 MB, ( 6985.86 / 10922.67)
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =  1602.00 MB, ( 8587.86 / 10922.67)
ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =   210.02 MB, ( 8797.88 / 10922.67)

llama server listening at http://127.0.0.1:64106
```

We can see a bunch of `ggml_metal_init` lines! So that means the Metal backend is being used.


Now, running the same model with `docker run`:

```none
> docker run --rm --volume ~/.ollama:/root/.ollama --publish 11434:11434 --name ollama ollama/ollama
2023/10/06 20:37:41 images.go:996: total blobs: 8
2023/10/06 20:37:41 images.go:1003: total unused blobs removed: 0
2023/10/06 20:37:41 routes.go:572: Listening on [::]:11434
2023/10/06 20:37:41 routes.go:592: Warning: GPU support may not enabled, check you have installed install GPU drivers: nvidia-smi command failed
...
2023/10/06 20:37:57 llama.go:313: starting llama runner
2023/10/06 20:37:57 llama.go:349: waiting for llama runner to start responding
CUDA error 35 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml/ggml-cuda.cu:4883: CUDA driver version is insufficient for CUDA runtime version
2023/10/06 20:37:57 llama.go:323: llama runner exited with error: exit status 1
2023/10/06 20:37:57 llama.go:330: error starting llama runner: llama runner process has terminated
2023/10/06 20:37:57 llama.go:313: starting llama runner
2023/10/06 20:37:57 llama.go:349: waiting for llama runner to start responding
{"timestamp":1696624677,"level":"WARNING","function":"server_params_parse","line":845,"message":"Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support","n_gpu_layers":0}
{"timestamp":1696624677,"level":"INFO","function":"main","line":1190,"message":"build info","build":1009,"commit":"9e232f0"}
{"timestamp":1696624677,"level":"INFO","function":"main","line":1192,"message":"system info","n_threads":5,"total_threads":10,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 | "}
llama.cpp: loading model from /root/.ollama/models/blobs/sha256:abc123
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_head_kv  = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 5.0e-06
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =    0.11 MB
llama_model_load_internal: mem required  = 6983.72 MB (+ 1600.00 MB per state)
llama_new_context_with_model: kv self size  = 1600.00 MB
llama_new_context_with_model: compute buffer total size =  191.35 MB

llama server listening at http://127.0.0.1:64294
```

We see no mention of the Metal backend.


Conclusion: the Metal backend is accessible when running with a local build, but not when running through Docker

@jamesbraza commented on GitHub (Oct 6, 2023):

Alright, I renamed this issue to be a request for the Metal backend somehow being accessible when running through Docker. Please let me know if I am missing anything 👍

@65a commented on GitHub (Oct 8, 2023):

It's been a while since I used a Mac, but doesn't Docker on Mac actually run a Linux kernel under the hood, and then run Docker inside that Linux VM? That might make this more difficult, since you're looking at Mac --> Linux --> Ollama, which would mean Linux would need to expose Metal somehow to Ollama, which is likely not really possible (yet?): see https://github.com/pytorch/pytorch/issues/81224

@mxyng commented on GitHub (Oct 19, 2023):

Docker on macOS does not support Metal acceleration, so it's not possible for Ollama to use it. If you're interested in running Ollama on macOS, the Mac app provides the best experience.

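For comparison, a minimal sketch of the native route on macOS. The Homebrew formula is one option; downloading the Mac app from https://ollama.com is another, and version details may differ from when this thread was written:

```bash
# Run Ollama natively on macOS so llama.cpp can initialize Metal directly,
# with no Linux VM in between
brew install ollama
ollama serve &        # native server process
ollama run llama2     # Metal is used by default on Apple silicon
```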

@qdrddr commented on GitHub (Jun 28, 2024):

Though the GPU is supported via Apple Metal, the Apple Neural Engine (ANE) is not currently utilized:
https://github.com/ollama/ollama/issues/3898
