[GH-ISSUE #14432] [Apple M5 / macOS 15] Metal backend fails to run models due to static_assert type mismatch #55883

Open
opened 2026-04-29 09:52:18 -05:00 by GiteaMirror · 32 comments
Owner

Originally created by @9527crazy on GitHub (Feb 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14432

What is the issue?

Description(描述)

I'm unable to run any Ollama models on my Apple M5 MacBook Pro running macOS 15 / Darwin 25.2.0. The runner process immediately terminates with a Metal compilation error. This seems related to the Metal backend and the bfloat/half type mismatch in the MPPTensorOpsMatMul2d kernel.

This issue occurs on Ollama 0.17.0, which is currently the latest release.

Steps to Reproduce(复现步骤)

Install Ollama 0.17.0 on macOS 15 (Apple M5).

Run ollama serve (works, binds successfully)

Pull and run any model, e.g.:

ollama run qwen2.5:7b

Observe the process crash with Metal errors.

Expected Behavior(期望行为)

The model should start and run using Metal GPU acceleration without crashing.

Actual Behavior(实际行为)

The process terminates immediately with errors similar to:

Error: 500 Internal Server Error: llama runner process has terminated: error:Error Domain=MTLLibraryErrorDomain Code=3 ...
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false
...
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"
...
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context

The static_assert failure indicates a mismatch between bfloat and half types in the Metal cooperative tensor operations.

Environment(环境信息)

System: macOS 15 / Darwin 25.2.0
CPU: Apple M5
Ollama version: 0.17.0
Models tested: qwen2.5:7b, tinyllama
Backend: Metal (default)

Additional Context(补充信息)

Disabling Metal is not trivial; currently there is no fallback CPU-only path documented for Apple M5 / macOS 15.
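
For reference, the closest thing to a CPU-only path I'm aware of is the num_gpu option. This is only a guess at a workaround, not something verified on M5 / macOS 15, and it may not help at all if the crash happens while the Metal library is compiled at startup:

# interactive session: disable GPU layer offload for the current model
ollama run qwen2.5:7b
>>> /set parameter num_gpu 0

# or per request via the REST API
curl http://localhost:11434/api/generate -d '{"model": "qwen2.5:7b", "prompt": "Hello", "options": {"num_gpu": 0}}'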

Similar issues have been observed by other users in community discussions, e.g., Reddit: https://www.reddit.com/r/ollama/comments/1puguld/ollama_cannot_run_the_model_on_mac

This might be related to the Metal 4 version on macOS 15 vs the Metal 3 assumption in the Ollama build.

Relevant log output


OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.17.0

GiteaMirror added the bug label 2026-04-29 09:52:19 -05:00
Author
Owner

@FinleyVeeDub commented on GitHub (Feb 26, 2026):

Same here on an M1 Max (32 GB RAM) on Tahoe 26.2 with mlx 0.30.5. I was wondering why GPU usage was suddenly nearly 0% after updating to Ollama 0.17.0. I did some debugging; this is the outcome:

ERROR Failed to load MLX dynamic library symbols path=/Applications/Ollama.app/Contents/Resources/libmlxc.dylib
gpu_count=0
inference compute id=cpu library=cpu

Run the following with MLX_METAL_DEBUG=1 OLLAMA_DEBUG=1 open -a Ollama to check if you get the same ERROR.
Then tail -100 ~/.ollama/logs/app.log 2>&1 | grep -i -E "metal|mlx|gpu|error" and tail -100 ~/.ollama/logs/server.log 2>&1

The Metal GPU library is not getting recognised in Ollama 0.17.0. With LM Studio 0.4.5+2 and MLX models, everything runs smoothly.

Author
Owner

@Lumysia commented on GitHub (Mar 4, 2026):

TL;DR:

Currently, Ollama 0.17.5 consistently crashes with the static_assert type mismatch error in the Metal backend when running any model (even standard Llama 3.2 3B), but the latest upstream llama.cpp (master branch) works perfectly with full GPU acceleration on the same machine. (See also llama.cpp issue 17986: https://github.com/ggml-org/llama.cpp/issues/17986.)

Running ollama run llama3.2:3b triggers the following crash:

Error: 500 Internal Server Error: llama runner process has terminated: error:Error Domain=MTLLibraryErrorDomain Code=3 ...
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"

I manually built the latest llama.cpp (commit ecd99d6a9) on this same M5 machine. It runs the exact same Ollama blob perfectly on the GPU:

➜  llama.cpp git:(master) ./build/bin/llama-cli -m ~/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff -p "Hello." -ngl 99
ggml_metal_device_init: testing tensor API for f16 support
ggml_metal_library_compile_pipeline: compiling pipeline: base = 'dummy_kernel', name = 'dummy_kernel'
ggml_metal_library_compile_pipeline: loaded dummy_kernel                                  0x106076af0 | th_max = 1024 | th_width =   32
ggml_metal_device_init: testing tensor API for bfloat support
ggml_metal_library_compile_pipeline: compiling pipeline: base = 'dummy_kernel', name = 'dummy_kernel'
ggml_metal_library_compile_pipeline: loaded dummy_kernel                                  0x1060734f0 | th_max = 1024 | th_width =   32
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.014 sec
ggml_metal_rsets_init: creating a residency set collection (keep_alive = 180 s)
ggml_metal_device_init: GPU name:   MTL0
ggml_metal_device_init: GPU family: MTLGPUFamilyApple10  (1010)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: has tensor            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 12713.12 MB

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8193-ecd99d6a9
model      : sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read               add a text file


> Hello.

Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?

[ Prompt: 559.7 t/s | Generation: 52.9 t/s ]

> /exit


Exiting...
llama_memory_breakdown_print: | memory breakdown [MiB] | total   free     self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - MTL0 (Apple M5)    | 12124 = 1128 + (10994 =  1918 +    8820 +     256) +           0 |
llama_memory_breakdown_print: |   - Host               |                   477 =   308 +       0 +     169                |
ggml_metal_free: deallocating

➜  llama.cpp git:(master) ollama run llama3.2:3b
Error: 500 Internal Server Error: llama runner process has terminated: error:Error Domain=MTLLibraryErrorDomain Code=3 "program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7251:76: note: in instantiation of function template specialization 'kernel_rope_multi<float>' requested here
template [[host_name("kernel_rope_multi_f32")]] kernel kernel_rope_multi_t kernel_rope_multi<float>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7252:76: note: in instantiation of function template specialization 'kernel_rope_multi<half>' requested here
template [[host_name("kernel_rope_multi_f16")]] kernel kernel_rope_multi_t kernel_rope_multi<half>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_leftType, leftValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:394:24: note: in instantiation of function template specialization 'mpp::tensor_ops::__mutmul2d_detail::__run<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>, metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>>' requested here
    __mutmul2d_detail::__run<Descriptor, Scope, LeftOperandType,
                       ^
program_source:12128:12: note: in instantiation of function template specialization 'mpp::tensor_ops::matmul2d<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>>::run<metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>, void>' requested here
        mm.run(sB, sA, cT);
           ^
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3267:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<half, bfloat>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_rightType, rightValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
" UserInfo={NSLocalizedDescription=program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7251:76: note: in instantiation of function template specialization 'kernel_rope_multi<float>' requested here
template [[host_name("kernel_rope_multi_f32")]] kernel kernel_rope_multi_t kernel_rope_multi<float>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7252:76: note: in instantiation of function template specialization 'kernel_rope_multi<half>' requested here
template [[host_name("kernel_rope_multi_f16")]] kernel kernel_rope_multi_t kernel_rope_multi<half>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_leftType, leftValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:394:24: note: in instantiation of function template specialization 'mpp::tensor_ops::__mutmul2d_detail::__run<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>, metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>>' requested here
    __mutmul2d_detail::__run<Descriptor, Scope, LeftOperandType,
                       ^
program_source:12128:12: note: in instantiation of function template specialization 'mpp::tensor_ops::matmul2d<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>>::run<metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>, void>' requested here
        mm.run(sB, sA, cT);
           ^
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3267:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<half, bfloat>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_rightType, rightValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
}
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context
Author
Owner

@devcodevault commented on GitHub (Mar 6, 2026):

PR #14604 breaks GPU discovery on macOS Apple Silicon.

Environment: macOS, Xcode 26 Beta (with Metal Toolchain), Ollama built from PR #14604 source
Expected behavior: library=metal should be detected and used
Actual behavior:

  • Build succeeds with Metal libs (libggml-metal.dylib created)
  • Server starts but GPU discovery fails silently
  • Falls back to library=cpu with total_vram=0 B
  • Error: "failure during GPU discovery" / timeout

Logs:

    time=... level=INFO source=types.go:60 msg="inference compute"
    id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama
    driver="" pci_id="" type="" total="24.0 GiB" available="5.6 GiB"

Steps tried:

  • Compiled with GGML_METAL=ON
  • Increased GPU timeout to 120s (no effect)
  • Verified libs are in lib/ollama/ (correct)
  • Runner starts but bootstrap device discovery returns empty

Additional info:

  • Looks like the PR build has the discovery issue
Author
Owner

@Lumysia commented on GitHub (Mar 6, 2026):

@devcodevault Thanks for the report! I'm looking into this right now.

I suspect this might be an issue where the PR works fine with the embedded Metal library (my local setup) but fails during pipeline initialization when compiled with GGML_METAL=ON (dylib mode). I'll double-check the host-side Objective-C code to make sure no orphaned kernel calls were left behind.

In the meantime, to help me debug, could you provide the following (a quick command sketch for gathering these is after the list):

  • The full verbose logs by running OLLAMA_DEBUG=1 ollama serve
  • Your specific Mac model and chip and the MTLGPUFamily detected
  • Any crash logs related to ollama in macOS Console.app
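
One possible way to gather these on macOS (generic shell commands offered as a suggestion; nothing Ollama-specific beyond the OLLAMA_DEBUG flag already mentioned above):

OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log          # full verbose server log
system_profiler SPHardwareDataType | grep -E "Model Name|Chip"   # Mac model and chip
log show --predicate 'process == "ollama"' --last 30m            # recent system log entries (or use Console.app)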

I noticed your log shows inference compute at types.go:60, but on a clean build of this PR branch, it's at types.go:42. Do you happen to have other local patches applied, or are you building on top of a different base commit?

Author
Owner

@Lumysia commented on GitHub (Mar 6, 2026):

@devcodevault You were right, fantastic catch!

The issue occurred because my previous PR removed the bf16_f16 Metal kernels but left behind the host-side C/Objective-C pipeline initialization code.

When built with embedded libraries (EMBED_LIBRARY=1, my local setup), this was silently tolerated. However, when built with external dylibs (GGML_METAL=ON, your setup), the host code attempted to load a non-existent kernel, returned a null pipeline, and caused the GPU discovery to fail silently and fall back to the CPU.

I will be pushing a new commit shortly that:

  • Adds proper null-checks during pipeline initialization.
  • Updates ggml_metal_device_supports_op to correctly report that BF16xF16 is unsupported on Metal. The graph scheduler will now gracefully handle this via casting/fallback instead of crashing.
  • Cleans up remaining _id template variants in the Metal shaders that could still trigger the strict compiler assert.

I'll ping you here once it's pushed so you can pull and test it on your end. Thanks again for the detailed report!

Author
Owner

@Lumysia commented on GitHub (Mar 6, 2026):

@devcodevault The new commit has been pushed. I have verified the fix on my machine by simulating your environment (GGML_METAL=ON and GGML_METAL_EMBED_LIBRARY=OFF). GPU discovery and inference are now working perfectly without any silent fallbacks or crashes.

Please pull the latest changes and let me know if it solves the issue on your end. Thanks!

Author
Owner

@devcodevault commented on GitHub (Mar 8, 2026):

The GPU discovery issue seems resolved! The server now starts and attempts to load Metal (I can see ggml_metal_library_init being called).
However, there's a new blocker: Metal shader compilation is failing due to remaining BF16/bfloat16 references:

Log:
ggml_metal_library_init: error: Error Domain=MTLLibraryErrorDomain Code=3
"program_source:7131:28: warning: variable 'theta_base' is used uninitialized...
...
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5:
error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>'
"Input types must match cooperative tensor types"
...
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context
llama_init_from_model: failed to initialize the context: failed to initialize Metal backend

GPU discovery: Working (runner starts) OK
Metal shader compilation: Failing (BF16 type mismatches in ggml-metal.metal) KO
Result: Falls back to CPU or crashes when loading models KO

It appears there are still BF16/bfloat16 references in the Metal shaders (around line 7131 in ggml-metal.metal) that weren't removed or properly guarded. The static_assert fails because of a type mismatch between bfloat and half.
Could you check for any remaining BF16 kernel implementations in the Metal code (a quick grep sketch is below)? Happy to test any additional fixes!
Thank you.
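
For what it's worth, a purely illustrative way to look for leftover bfloat usages, assuming the shader lives at ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal in the Ollama tree (the path may differ by version):

# search the Metal shader for remaining bfloat/simdgroup_bfloat references
grep -n -E "bfloat|simdgroup_bfloat" ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal | head -n 40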

Author
Owner

@Lumysia commented on GitHub (Mar 9, 2026):

@devcodevault Thanks for the deep dive! It's interesting that this didn't trigger a failure in my local environment, but I suspect different Metal compiler strictness levels might be the reason.

I've identified the root cause: the FA_TYPES_BF macro in the Metal shaders had a type inconsistency in its last line, it was using half, half4, simdgroup_half8x8 instead of the consistent bfloat, bfloat4, simdgroup_bfloat8x8. This caused the static_assert failure when Metal Performance Primitives' cooperative tensor operations required matching types.

Could you please try applying this diff locally and see if it resolves the shader compilation error? If this works, I will include it in the next push.

Looking forward to your test results!

diff --git a/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal b/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
index 59f44d00..90f55407 100644
--- a/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
+++ b/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
@@ -9072,7 +9072,7 @@ kernel void kernel_flash_attn_ext(
     bfloat, bfloat4x4, simdgroup_bfloat8x8, \
     float,             simdgroup_float8x8,  \
     float,  float2,    simdgroup_float8x8,  \
-    half,   half4,     simdgroup_half8x8
+    bfloat, bfloat4,   simdgroup_bfloat8x8
     //float,  float4,    simdgroup_float8x8
 
 #define FA_TYPES_F32 \
diff --git a/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal b/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal
index cab2d87f..0ae86760 100644
--- a/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal
+++ b/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal
@@ -6250,7 +6250,7 @@ kernel void kernel_flash_attn_ext(
     bfloat, bfloat4x4, simdgroup_bfloat8x8, \
     float,             simdgroup_float8x8,  \
     float,  float2,    simdgroup_float8x8,  \
-    half,   half4,     simdgroup_half8x8
+    bfloat, bfloat4,   simdgroup_bfloat8x8
     //float,  float4,    simdgroup_float8x8
 
 #define FA_TYPES_F32 \
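
If you want to try it quickly, a rough local workflow might look like the following (the patch file name is illustrative, and the exact build steps depend on the Ollama version; see docs/development.md in the repo):

# save the diff above as fa-types-bf.patch, then from the repo root:
git apply fa-types-bf.patch
go build .                        # may additionally need the cmake steps from the development docs
OLLAMA_DEBUG=1 ./ollama serve     # and in another terminal: ollama run llama3.2:3b
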
Author
Owner

@jdblack commented on GitHub (Mar 15, 2026):

I seem to be running into this issue on a brand new M5, installed via Homebrew, which apparently installed 0.18.0. This Homebrew version doesn't seem to be keeping logs, but I was able to catch this by running ollama serve on a console:

Would it be correct to believe that this patch didn't go into 0.18? I'm just running "ollama serve" on one terminal and trying to run qwen3.5 on another.

ggml_metal_library_init: error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
} else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Author
Owner

@WaterKnight1998 commented on GitHub (Mar 15, 2026):

Same here

Author
Owner

@joeellis commented on GitHub (Mar 15, 2026):

Note that I'm hitting the exact same issue as @jdblack, also on 0.18.0. Is the full fix scheduled for the next release, though?

Author
Owner

@taogashi commented on GitHub (Mar 16, 2026):

Brew-installed Ollama 0.18.0, M5, Tahoe:

➜  Downloads ollama run deepseek-r1:7b
Error: 500 Internal Server Error: llama runner process has terminated: error:Error Domain=MTLLibraryErrorDomain Code=3 "program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7251:76: note: in instantiation of function template specialization 'kernel_rope_multi<float>' requested here
template [[host_name("kernel_rope_multi_f32")]] kernel kernel_rope_multi_t kernel_rope_multi<float>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7252:76: note: in instantiation of function template specialization 'kernel_rope_multi<half>' requested here
template [[host_name("kernel_rope_multi_f16")]] kernel kernel_rope_multi_t kernel_rope_multi<half>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_leftType, leftValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:394:24: note: in instantiation of function template specialization 'mpp::tensor_ops::__mutmul2d_detail::__run<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>, metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>>' requested here
    __mutmul2d_detail::__run<Descriptor, Scope, LeftOperandType,
                       ^
program_source:12128:12: note: in instantiation of function template specialization 'mpp::tensor_ops::matmul2d<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>>::run<metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>, void>' requested here
        mm.run(sB, sA, cT);
           ^
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3267:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<half, bfloat>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_rightType, rightValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
" UserInfo={NSLocalizedDescription=program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7251:76: note: in instantiation of function template specialization 'kernel_rope_multi<float>' requested here
template [[host_name("kernel_rope_multi_f32")]] kernel kernel_rope_multi_t kernel_rope_multi<float>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7252:76: note: in instantiation of function template specialization 'kernel_rope_multi<half>' requested here
template [[host_name("kernel_rope_multi_f16")]] kernel kernel_rope_multi_t kernel_rope_multi<half>;
                                                                           ^
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:24: note: remove the 'if' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:7131:28: warning: variable 'theta_base' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~
program_source:7149:33: note: uninitialized use occurs here
            const float theta = theta_base * pow(args.freq_base, inv_ndims*i0);
                                ^~~~~~~~~~
program_source:7131:28: note: remove the '&&' if its condition is always true
                } else if (sector % 3 == 0 && sector < 3 * args.sect_0) { // t
                           ^~~~~~~~~~~~~~~~~~
program_source:7125:29: note: initialize the variable 'theta_base' to silence this warning
            float theta_base;
                            ^
                             = 0.0
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3266:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_leftType, leftValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:394:24: note: in instantiation of function template specialization 'mpp::tensor_ops::__mutmul2d_detail::__run<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>, metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>>' requested here
    __mutmul2d_detail::__run<Descriptor, Scope, LeftOperandType,
                       ^
program_source:12128:12: note: in instantiation of function template specialization 'mpp::tensor_ops::matmul2d<{32, 64, 32, false, true, false, 1}, metal::execution_simdgroups<4>>::run<metal::tensor<threadgroup half, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::tensor<threadgroup bfloat, metal::extents<int, 18446744073709551615, 18446744073709551615>, metal::tensor_inline>, metal::cooperative_tensor<float, metal::extents<int, 18446744073709551615, 18446744073709551615>, mpp::tensor_ops::__mutmul2d_detail::__operand_layout<{32, 64, 32, false, true, false, 1}, mpp::tensor_ops::__mutmul2d_detail::__matmul2d_cooperative_operand_index::destination, metal::execution_simdgroups<4>, bfloat, half, float, int>>, void>' requested here
        mm.run(sB, sA, cT);
           ^
In file included from program_source:2837:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MetalPerformancePrimitives.h:10:
In file included from /System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/MPPTensorOpsMatMul2d.h:368:
/System/Library/Frameworks/MetalPerformancePrimitives.framework/Headers/__impl/MPPTensorOpsMatMul2dImpl.h:3267:5: error: static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<half, bfloat>' "Input types must match cooperative tensor types"
    static_assert(__tensor_ops_detail::__is_same_v<_rightType, rightValueType>, "Input types must match cooperative tensor types");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
}
ggml_metal_init: error: failed to initialize the Metal library
ggml_backend_metal_device_init: error: failed to allocate context

@schricka commented on GitHub (Mar 16, 2026):

Same here, but with the install.sh from ollama.com everything works fine.

@ELadrimonos commented on GitHub (Mar 16, 2026):

> same here, but with the install.sh from ollama.com everything works fine

macOS 26.3.1, Ollama 0.18.0.

I can confirm it doesn't work with the brew package, but it works with the install script.

@innamulhassan commented on GitHub (Mar 16, 2026):

Works with the install script, but not with brew, even though both are the same 0.18 version.

@phillipross commented on GitHub (Mar 17, 2026):

I've yet to try the install script, but the Homebrew package as of 0.18.1 is still exhibiting the error for me.

@absybvc-cloud commented on GitHub (Mar 17, 2026):

Using the install script fixed the problem.

@franko-f commented on GitHub (Mar 18, 2026):

Also just chiming in that using install.sh worked. Hopefully Homebrew gets fixed, since I like managing and running things from the command line.

@dudeoverhere commented on GitHub (Mar 19, 2026):

Ollama v0.18.1 is completely broken on the M5 Max. Every model fails immediately with a Metal shader compilation error — a bfloat/half type mismatch in MPPTensorOpsMatMul2dImpl.h that crashes Metal library initialization. Neither OLLAMA_NO_METAL=1, GGML_METAL=off, nor setting num_gpu: 0 via the API bypasses the issue — Metal init runs regardless and kills the runner process. My MacBook Pro M5 Max has 48GB of unified memory with plenty of headroom (only 12.46GB used), so this is purely a GPU backend compatibility problem, not a resource constraint. This needs to be prioritized — M5 Max users are completely locked out with no working workaround.
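
For reference, these are the bypass attempts described above, written out as commands. None of them prevent Metal initialization on the affected builds, and the two environment variables are quoted as the commenter named them rather than documented switches:

```bash
OLLAMA_NO_METAL=1 ollama serve
GGML_METAL=off ollama serve

# Forcing CPU-only through the API options does not help either.
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5:7b", "prompt": "hi", "options": {"num_gpu": 0}}'
```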

@jdblack commented on GitHub (Mar 19, 2026):

You can report the sheep spam above by clicking on the username and clicking "block or report"

@intrnauts commented on GitHub (Mar 22, 2026):

I just installed Ollama 0.18.2 using the curl command curl -fsSL https://ollama.com/install.sh | sh, but first I uninstalled the version I had installed using brew. And I am finally having success running llama3.2 locally on my MacBook Air with M5.

I am very glad to have a working solution but would really appreciate being able to switch to brew install just to get back to my usual workflow.

@rnurgaliyev commented on GitHub (Mar 24, 2026):

If you still want a working Ollama with Homebrew, the fix is simpler than it seems. There are actually two different packages: the formula (brew install ollama) compiles from source and is what causes the crash, while the cask (brew install --cask ollama) installs the official release binary directly from ollama.com.

brew uninstall ollama
brew install --cask ollama 

No need to run any scripts this way.
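
As a quick sanity check after switching, something like the following confirms which binary ends up on PATH (the exact paths are illustrative and depend on how the cask links its CLI):

```bash
brew uninstall ollama          # remove the source-built formula
brew install --cask ollama     # install the prebuilt ollama.com binary
which ollama
ollama --version
```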

@hnshah commented on GitHub (Mar 25, 2026):

Workaround for M5 Users

The issue is specific to the Homebrew formula (which compiles from source with your system's Metal SDK). The Homebrew cask (which installs the pre-built binary) works fine because it was built with a different Metal SDK version.

Quick Fix:

brew uninstall ollama
brew install --cask ollama

This installs the same binary as the curl -fsSL https://ollama.com/install.sh | sh method, which works correctly on M5.

Why This Works:

Homebrew formula: Compiles from source using macOS 15's Metal 4 SDK
Homebrew cask: Pre-built binary from ollama.com (built with Metal 3 SDK)
Install script: Same pre-built binary as cask

The error you're seeing:

static_assert failed: "Input types must match cooperative tensor types"
bfloat vs half type mismatch

This happens because Metal 4 (macOS 15) has stricter type requirements than Metal 3. The source code compiles fine with Metal 3, but Metal 4's shader compiler rejects the bfloat/half mismatch in MPPTensorOpsMatMul2dImpl.h.

For Maintainers:

The fix likely needs to be in the source code's Metal shader type handling, OR the Homebrew formula needs to specify the Metal SDK version explicitly. I can help investigate the build configuration if useful.

Credit: Multiple users in this thread confirmed the cask workaround works. Just consolidating the findings to help others.
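
For anyone comparing the two builds, a rough sketch of how to check which SDK and Metal toolchain a local from-source build would compile against (this assumes the full Xcode Metal toolchain is installed; exact versions will differ per machine):

```bash
xcrun --sdk macosx --show-sdk-version   # SDK used for a local build
xcrun -sdk macosx metal --version       # Metal shader compiler version
```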

@alanhelu commented on GitHub (Apr 4, 2026):

Confirming this issue persists on Ollama v0.20.2 with macOS 26.3.1 (Tahoe) on Apple M5 24GB.

Environment

  • macOS: 26.3.1 (Build 25D2128)
  • Chip: Apple M5, 24GB unified memory
  • Ollama: 0.20.2 (installed via Homebrew, includes MLX backend)

Reproduction

Every model fails to load — tested with gemma4:e4b, gemma3:4b. The Metal library compilation fails with the same bfloat/half type mismatch in MPPTensorOpsMatMul2dImpl.h.

Key log lines

ggml_metal_library_init: using embedded metal library
static_assert failed due to requirement '__tensor_ops_detail::__is_same_v<bfloat, half>'
  "Input types must match cooperative tensor types"

Workaround attempted

  • OLLAMA_LLM_LIBRARY=cpu — does not help. The Metal library still compiles (and fails) even with this flag set. There is no true CPU-only fallback path on Apple Silicon.

Working alternative

mlx-lm (v0.29.1) with mlx (v0.29.3) and mlx-metal (v0.29.3) works perfectly on the same machine. Models like mlx-community/gemma-3-4b-it-4bit load and run without issues via MLX's own Metal implementation. This suggests the issue is specific to Ollama's embedded Metal shaders, not a system-level Metal incompatibility.
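
For completeness, a sketch of the mlx-lm route described above, using the package and model names from the comment; the flags assume the standard mlx-lm command-line entry point:

```bash
pip install mlx-lm
python -m mlx_lm.generate \
  --model mlx-community/gemma-3-4b-it-4bit \
  --prompt "Hello from an Apple M5"
```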

@msmaz910 commented on GitHub (Apr 8, 2026):

> Confirming this issue persists on Ollama v0.20.2 with macOS 26.3.1 (Tahoe) on Apple M5 24GB.

I'm having the same issue on the exact same build as @alanhelu, but using Ollama v0.20.4. Has anyone found a workaround within Ollama?

@ac-mmi commented on GitHub (Apr 9, 2026):

Same issue. I need a workaround too.

@frankli0324 commented on GitHub (Apr 9, 2026):

https://github.com/ollama/ollama/pull/14604 worked perfectly for me, with the regression from the main branch merged back into the patch.

@frankli0324 commented on GitHub (Apr 13, 2026):

Are there any blockers preventing a fix from being released? I see https://github.com/ollama/ollama/pull/14604 was submitted over a month ago, but no one seems to be following up.

@robin2026-code commented on GitHub (Apr 13, 2026):

Same here. How do we fix this? Thanks.

@go7th commented on GitHub (Apr 14, 2026):

Reproduced in a fresh environment, same root cause.

Environment

  • macOS 26.3.1 (Build 25D2128) — Tahoe
  • Apple M5
  • Ollama v0.20.7 (also v0.20.6)
  • Installed via official Ollama-darwin.zip (.app, not Homebrew)

Symptom
Any model load fails with:

{"error":"llama runner process has terminated: %!w(<nil>)"}

Server log (key lines)

ggml_metal_init: the device does not have a precompiled Metal library - this is unexpected                         
ggml_metal_init: will try to compile it on the fly
ggml_metal_library_init: using embedded metal library
MPPTensorOpsMatMul2dImpl.h:3266 static_assert failed due to requirement __is_same_v<bfloat, half>                  
  "Input types must match cooperative tensor types"
ggml_metal_init: error: failed to initialize the Metal library                                                     
ggml_backend_metal_device_init: error: failed to allocate context
llama_init_from_model: failed to initialize the context: failed to initialize Metal backend                        
panic: unable to create llama context                                                                              

Notes

  • Affects every quantization and every model tried (qwen2.5:3b/7b/14b, gemma3:4b, qwen3.5:4b/9b).
  • OLLAMA_LLM_LIBRARY=cpu has no effect — the runner still attempts Metal init first.
  • Same bfloat/half MPPTensorOpsMatMul2d mismatch as closed #13460 (macOS 26.2 regression) — returned on 26.3 + M5.
  • Apple's newer MetalPerformancePrimitives tightens static_assert on cooperative tensor types; shipped ggml-metal
    shaders need to pass matching types to matmul2d.

Happy to provide full crash logs if useful.
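
If you do attach logs, the environment details that have mattered most in this thread can be collected with standard macOS tools, for example:

```bash
sw_vers                                # macOS version and build
sysctl -n machdep.cpu.brand_string     # chip, e.g. "Apple M5"
ollama --version
```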

@borisdadvisard commented on GitHub (Apr 28, 2026):

Same issue on my side with Ollama 0.21, Apple M5, macOS Tahoe Version 26.2 (25C56)

@franko-f commented on GitHub (Apr 28, 2026):

Update to Tahoe 26.4.1

This resolved the issue for myself and others.
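
If you prefer the terminal, the update can be checked for and applied with softwareupdate (the label and size will vary; System Settings > Software Update works just as well):

```bash
softwareupdate --list
sudo softwareupdate --install --all --restart
```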

Reference: github-starred/ollama#55883