[GH-ISSUE #2735] Build fails on macOS #1645

Closed
opened 2026-04-12 11:35:39 -05:00 by GiteaMirror · 3 comments

Originally created by @jrp2014 on GitHub (Feb 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2735

Following the instructions in the Developer docs, out of the box I get:

(ollama) ➜  AI git clone https://github.com/ollama/ollama.git
Cloning into 'ollama'...
remote: Enumerating objects: 10778, done.
remote: Counting objects: 100% (2489/2489), done.
remote: Compressing objects: 100% (633/633), done.
remote: Total 10778 (delta 2143), reused 1987 (delta 1853), pack-reused 8289
Receiving objects: 100% (10778/10778), 6.65 MiB | 509.00 KiB/s, done.
Resolving deltas: 100% (6743/6743), done.
(ollama) ➜  AI git gc
fatal: not a git repository (or any of the parent directories): .git
(ollama) ➜  AI cd ollama 
(ollama) ➜  ollama git:(main) git gc   
Enumerating objects: 10778, done.
Counting objects: 100% (10778/10778), done.
Delta compression using up to 16 threads
Compressing objects: 100% (3774/3774), done.
Writing objects: 100% (10778/10778), done.
Total 10778 (delta 6743), reused 10778 (delta 6743), pack-reused 0
(ollama) ➜  ollama git:(main) go generate ./...
go: downloading github.com/gin-gonic/gin v1.9.1
go: downloading golang.org/x/term v0.13.0
go: downloading github.com/emirpasic/gods v1.18.1
go: downloading golang.org/x/sys v0.13.0
go: downloading github.com/containerd/console v1.0.3
go: downloading github.com/olekukonko/tablewriter v0.0.5
go: downloading github.com/spf13/cobra v1.7.0
go: downloading golang.org/x/crypto v0.14.0
go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
go: downloading golang.org/x/sync v0.3.0
go: downloading github.com/gin-contrib/cors v1.4.0
go: downloading github.com/google/uuid v1.0.0
go: downloading github.com/mattn/go-runewidth v0.0.14
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading golang.org/x/net v0.17.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading google.golang.org/protobuf v1.30.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/rivo/uniseg v0.2.0
go: downloading golang.org/x/text v0.13.0
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/go-playground/locales v0.14.1
+ set -o pipefail
+ echo 'Starting darwin generate script'
Starting darwin generate script
++ dirname ./gen_darwin.sh
+ source ./gen_common.sh
+ init_vars
+ case "${GOARCH}" in
+ ARCH=arm64
+ LLAMACPP_DIR=../llama.cpp
+ CMAKE_DEFS=
+ CMAKE_TARGETS='--target ext_server'
+ echo ''
+ grep -- -g
+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off '
+ case $(uname -s) in
++ uname -s
+ LIB_EXT=dylib
+ WHOLE_ARCHIVE=-Wl,-force_load
+ NO_WHOLE_ARCHIVE=
+ GCC_ARCH='-arch arm64'
+ '[' -z '' ']'
+ CMAKE_CUDA_ARCHITECTURES='50;52;61;70;75;80'
+ git_module_setup
+ '[' -n '' ']'
+ '[' -d ../llama.cpp/gguf ']'
+ git submodule init
Submodule 'llama.cpp' (https://github.com/ggerganov/llama.cpp.git) registered for path '../llama.cpp'
+ git submodule update --force ../llama.cpp
Cloning into '/Users/jrp/Documents/AI/ollama/llm/llama.cpp'...
remote: Enumerating objects: 12034, done.
remote: Counting objects: 100% (12034/12034), done.
remote: Compressing objects: 100% (3577/3577), done.
remote: Total 11732 (delta 8692), reused 11096 (delta 8075), pack-reused 0
Receiving objects: 100% (11732/11732), 8.48 MiB | 391.00 KiB/s, done.
Resolving deltas: 100% (8692/8692), completed with 246 local objects.
From https://github.com/ggerganov/llama.cpp
 * branch            96633eeca1265ed03e57230de54032041c58f9cd -> FETCH_HEAD
Submodule path '../llama.cpp': checked out '96633eeca1265ed03e57230de54032041c58f9cd'
+ apply_patches
+ grep ollama ../llama.cpp/examples/server/CMakeLists.txt
+ echo 'include (../../../ext_server/CMakeLists.txt) # ollama'
++ ls -A ../patches/01-cache.diff ../patches/02-cudaleaks.diff
+ '[' -n '../patches/01-cache.diff
../patches/02-cudaleaks.diff' ']'
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/01-cache.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/02-cudaleaks.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.cu
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.h
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
+ cd ../llama.cpp
+ git apply ../patches/01-cache.diff
+ for patch in '../patches/*.diff'
+ cd ../llama.cpp
+ git apply ../patches/02-cudaleaks.diff
+ sed -e 's/int main(/int __main(/g'
+ mv ../llama.cpp/examples/server/server.cpp.tmp ../llama.cpp/examples/server/server.cpp
+ COMMON_DARWIN_DEFS='-DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin'
+ case "${GOARCH}" in
+ CMAKE_DEFS='-DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_ACCELERATE=on -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off '
+ BUILD_DIR=../llama.cpp/build/darwin/arm64/metal
+ EXTRA_LIBS=' -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders'
+ build
+ cmake -S ../llama.cpp -B ../llama.cpp/build/darwin/arm64/metal -DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 -DCMAKE_SYSTEM_NAME=Darwin -DLLAMA_ACCELERATE=on -DCMAKE_SYSTEM_PROCESSOR=arm64 -DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off
-- The C compiler identification is AppleClang 15.0.0.15000100
-- The CXX compiler identification is AppleClang 15.0.0.15000100
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-145)") 
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- Accelerate framework found
-- Metal framework found
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- Configuring done (0.7s)
-- Generating done (0.2s)
-- Build files have been written to: /Users/jrp/Documents/AI/ollama/llm/llama.cpp/build/darwin/arm64/metal
+ cmake --build ../llama.cpp/build/darwin/arm64/metal --target ext_server -j8
[  6%] Generating build details from Git
[ 12%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-metal.m.o
[ 31%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
-- Found Git: /usr/bin/git (found version "2.39.3 (Apple Git-145)") 
[ 37%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 37%] Built target build_info
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10374:17: warning: 'cblas_sgemm' is only available on macOS 13.3 or newer [-Wunguarded-availability-new]
                cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                ^~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/System/Library/Frameworks/vecLib.framework/Headers/cblas_new.h:891:6: note: 'cblas_sgemm' has been marked as being introduced in macOS 13.3 here, but the deployment target is macOS 11.0.0
void cblas_sgemm(const enum CBLAS_ORDER ORDER,
     ^
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10374:17: note: enclose 'cblas_sgemm' in a __builtin_available check to silence this warning
                cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                ^~~~~~~~~~~
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10810:9: warning: 'cblas_sgemm' is only available on macOS 13.3 or newer [-Wunguarded-availability-new]
        cblas_sgemm(CblasRowMajor, transposeA, CblasNoTrans, m, n, k, 1.0, a, lda, b, n, 0.0, c, n);
        ^~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.2.sdk/System/Library/Frameworks/vecLib.framework/Headers/cblas_new.h:891:6: note: 'cblas_sgemm' has been marked as being introduced in macOS 13.3 here, but the deployment target is macOS 11.0.0
void cblas_sgemm(const enum CBLAS_ORDER ORDER,
     ^
/Users/jrp/Documents/AI/ollama/llm/llama.cpp/ggml.c:10810:9: note: enclose 'cblas_sgemm' in a __builtin_available check to silence this warning
        cblas_sgemm(CblasRowMajor, transposeA, CblasNoTrans, m, n, k, 1.0, a, lda, b, n, 0.0, c, n);
        ^~~~~~~~~~~
2 warnings generated.
[ 37%] Built target ggml
[ 43%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
[ 50%] Linking CXX static library libllama.a
[ 50%] Built target llama
[ 56%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 62%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o
[ 62%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 68%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[ 87%] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
[ 87%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o
[ 87%] Building CXX object common/CMakeFiles/common.dir/grammar-parser.cpp.o
[ 87%] Linking CXX static library libcommon.a
[ 87%] Built target common
[ 87%] Built target llava
[100%] Building CXX object examples/server/CMakeFiles/ext_server.dir/Users/jrp/Documents/AI/ollama/llm/ext_server/ext_server.cpp.o
[100%] Building CXX object examples/server/CMakeFiles/ext_server.dir/__/__/llama.cpp.o
[100%] Linking CXX static library libext_server.a
[100%] Built target ext_server
+ mkdir -p ../llama.cpp/build/darwin/arm64/metal/lib/
+ g++ -fPIC -g -shared -o ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib -arch arm64 -Wl,-force_load ../llama.cpp/build/darwin/arm64/metal/examples/server/libext_server.a ../llama.cpp/build/darwin/arm64/metal/common/libcommon.a ../llama.cpp/build/darwin/arm64/metal/libllama.a '-Wl,-rpath,$ORIGIN' -lpthread -ldl -lm -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders
+ sign ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib
+ '[' -n '' ']'
+ compress_libs
+ echo 'Compressing payloads to reduce overall binary size...'
Compressing payloads to reduce overall binary size...
+ pids=
+ rm -rf '../llama.cpp/build/darwin/arm64/metal/lib/*.dylib*.gz'
+ for lib in '${BUILD_DIR}/lib/*.${LIB_EXT}*'
+ pids+=' 15225'
+ echo

+ for pid in '${pids}'
+ wait 15225
+ gzip --best -f ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib
+ echo 'Finished compression'
Finished compression
+ cleanup
+ cd ../llama.cpp/examples/server/
+ git checkout CMakeLists.txt server.cpp
Updated 2 paths from the index
++ ls -A ../patches/01-cache.diff ../patches/02-cudaleaks.diff
+ '[' -n '../patches/01-cache.diff
../patches/02-cudaleaks.diff' ']'
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/01-cache.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/02-cudaleaks.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.cu
Updated 1 path from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.h
Updated 1 path from the index
(ollama) ➜  ollama git:(main) go build .
# github.com/jmorganca/ollama/llm
llm/llm.go:47:17: undefined: gpu.CheckVRAM
llm/llm.go:58:14: undefined: gpu.GetGPUInfo
llm/llm.go:158:15: undefined: newDynExtServer
(ollama) ➜  ollama git:(main) 
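One step in the generate trace above rewrites llama.cpp's server entry point so it can be linked into a library rather than built as a standalone binary (`sed -e 's/int main(/int __main(/g'`). That substitution can be reproduced in isolation; the file name below is a throwaway used for illustration, not part of the repo:

```shell
# Reproduce the rename step from the generate script in isolation.
# server_demo.cpp is a hypothetical scratch file.
printf 'int main(int argc, char **argv) {\n' > server_demo.cpp
sed -e 's/int main(/int __main(/g' server_demo.cpp
# prints: int __main(int argc, char **argv) {
```

Renaming `main` this way lets Ollama's own `ext_server` code supply the entry point while still linking the server's translation unit.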

@jmorganca commented on GitHub (Feb 25, 2024):

Hi there, sorry about the error. Would it be possible to pull main and try again? Also, gcc and g++ are required to build Ollama – would it be possible to make sure those are installed? (You may need to download Xcode.)


@jrp2014 commented on GitHub (Feb 25, 2024):

No difference, I'm afraid:

+ for pid in '${pids}'
+ wait 22628
+ gzip --best -f ../llama.cpp/build/darwin/arm64/metal/lib/libext_server.dylib
+ echo 'Finished compression'
Finished compression
+ cleanup
+ cd ../llama.cpp/examples/server/
+ git checkout CMakeLists.txt server.cpp
Updated 2 paths from the index
++ ls -A ../patches/01-cache.diff ../patches/02-cudaleaks.diff
+ '[' -n '../patches/01-cache.diff
../patches/02-cudaleaks.diff' ']'
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/01-cache.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for patch in '../patches/*.diff'
++ grep '^+++ ' ../patches/02-cudaleaks.diff
++ cut -f2 '-d '
++ cut -f2- -d/
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout examples/server/server.cpp
Updated 0 paths from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.cu
Updated 1 path from the index
+ for file in '$(grep "^+++ " ${patch} | cut -f2 -d'\'' '\'' | cut -f2- -d/)'
+ cd ../llama.cpp
+ git checkout ggml-cuda.h
Updated 1 path from the index
(ollama) ➜  ollama git:(main) go build .
# github.com/jmorganca/ollama/llm
llm/llm.go:47:17: undefined: gpu.CheckVRAM
llm/llm.go:58:14: undefined: gpu.GetGPUInfo
llm/llm.go:158:15: undefined: newDynExtServer
(ollama) ➜  ollama git:(main) gcc --version
Apple clang version 15.0.0 (clang-1500.1.0.2.5)
Target: arm64-apple-darwin23.3.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

@nikhe commented on GitHub (Mar 4, 2024):

I have the same problem.
MacBook M1 Max, Sonoma 14.4 (Xcode version 15.2)
(try-mps) ➜  ollama git:(main) go build .
# github.com/jmorganca/ollama/llm
llm/llm.go:47:17: undefined: gpu.CheckVRAM
llm/llm.go:58:14: undefined: gpu.GetGPUInfo
llm/llm.go:158:15: undefined: newDynExtServer
