[GH-ISSUE #4745] CMake Error at CMakeLists.txt:2 (project): Generator Ninja does not support platform specification, but platform #49500

Closed
opened 2026-04-28 12:05:18 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @transcendence-x on GitHub (May 31, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4745

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Your branch is up to date with 'origin/minicpm-v2.5'.
Already on 'minicpm-v2.5'
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Application     gcc.exe                                            0.0.0.0    C:\soft\develop\msys2\mingw64\bin\gcc.exe
Application     mingw32-make.exe                                   0.0.0.0    C:\soft\develop\msys2\mingw64\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.2

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (0.8s)
-- Generating done (3.1s)
-- Build files have been written to: D:/project/my/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config RelWithDebInfo --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
D:\project\my\ollama\llm\llama.cpp\ggml.c: In function 'ggml_vec_mad_f16':
D:\project\my\ollama\llm\llama.cpp\ggml.c:2040:45: warning: passing argument 1 of '__sse_f16x4_load' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
 2040 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                                             ^
D:\project\my\ollama\llm\llama.cpp\ggml.c:1501:50: note: in definition of macro 'GGML_F32Cx4_LOAD'
 1501 | #define GGML_F32Cx4_LOAD(x)     __sse_f16x4_load(x)
      |                                                  ^
D:\project\my\ollama\llm\llama.cpp\ggml.c:2040:21: note: in expansion of macro 'GGML_F16_VEC_LOAD'
 2040 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                     ^~~~~~~~~~~~~~~~~
D:\project\my\ollama\llm\llama.cpp\ggml.c:1476:52: note: expected 'ggml_fp16_t *' {aka 'short unsigned int *'} but argument is of type 'const ggml_fp16_t *' {aka 'const short unsigned int *'}
 1476 | static inline __m128 __sse_f16x4_load(ggml_fp16_t *x) {
      |                                       ~~~~~~~~~~~~~^
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'BOOL (*)(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int (*)(void*, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
 1428 |             pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
      |                                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 |             throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                               ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                                 |                        |
      |                                                                 long unsigned int        std::vector<int>::size_type {aka long long unsigned int}
      |                                                               %llu
D:\project\my\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
D:\project\my\ollama\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 |             throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                               ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                                 |                        |
      |                                                                 long unsigned int        std::vector<int>::size_type {aka long long unsigned int}
      |                                                               %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DCMAKE_VERBOSE_MAKEFILE=on -DLLAMA_SERVER_VERBOSE=on -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake version 3.29.2

CMake suite maintained and supported by Kitware (kitware.com/cmake).
CMake Error at CMakeLists.txt:2 (project):
  Generator

    Ninja

  does not support platform specification, but platform

    x64

  was specified.

CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
llm\generate\generate_windows.go:3: running "powershell": exit status 1
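The failing step is the "LCD CPU" configure: the script passes `-A x64`, but `-A` platform selection is only honored by multi-config generators such as Visual Studio; single-config generators (Ninja, NMake Makefiles, MinGW Makefiles) reject it, which also explains the follow-on `CMAKE_C_COMPILER not set` errors once configuration aborts. Notably, the log shows Ninja being chosen even though the command carries no `-G` flag, suggesting a generator is being injected from the environment (CMake honors a `CMAKE_GENERATOR` environment variable since 3.15). A minimal sketch of the distinction, with hypothetical paths rather than the project's actual script:

```
# Fails: Ninja is a single-config generator and rejects -A
cmake -S ../llama.cpp -B build -G Ninja -A x64

# Works: with Ninja, the target architecture comes from the toolchain, not -A
cmake -S ../llama.cpp -B build -G Ninja

# Works: Visual Studio generators accept a platform via -A
cmake -S ../llama.cpp -B build -G "Visual Studio 17 2022" -A x64
```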

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

No response

GiteaMirror added the bug and windows labels 2026-04-28 12:05:18 -05:00

@jrmyio commented on GitHub (Jun 5, 2024):

Same issue here:

PS E:\Ollama> go version
go version go1.22.4 windows/amd64
PS E:\Ollama> cmake --version
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
PS E:\Ollama> gcc --version
gcc.exe (Rev6, Built by MSYS2 project) 13.2.0
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

PS E:\Ollama> $env:CGO_ENABLED="1"
>> go generate ./...
Submodule path '../llama.cpp': checked out '5921b8f089d3b7bda86aac5a66825df6a6c10603'
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
Checking for MinGW...

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Application     gcc.exe                                            0.0.0.0    C:\msys64\mingw64\bin\gcc.exe
Application     mingw32-make.exe                                   0.0.0.0    C:\msys64\mingw64\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (0.5s)
-- Generating done (2.9s)
-- Build files have been written to: E:/Ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
E:\Ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
E:\Ollama\llm\llama.cpp\llama.cpp:1504:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'BOOL (*)(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int (*)(void*, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
 1504 |             pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
      |                                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
E:\Ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
E:\Ollama\llm\llama.cpp\llama.cpp:18122:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
18122 |             throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                               ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                                 |                        |
      |                                                                 long unsigned int        std::vector<int>::size_type {aka long long unsigned int}
      |                                                               %llu
E:\Ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
E:\Ollama\llm\llama.cpp\llama.cpp:18167:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
18167 |             throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
      |                                                               ~~^    ~~~~~~~~~~~~~~~~~~~~~~
      |                                                                 |                        |
      |                                                                 long unsigned int        std::vector<int>::size_type {aka long long unsigned int}
      |                                                               %llu
[ 83%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
CMake Error at CMakeLists.txt:2 (project):
  Generator

    NMake Makefiles

  does not support platform specification, but platform

    x64

  was specified.


CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
llm\generate\generate_windows.go:3: running "powershell": exit status 1
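Here the injected generator is `NMake Makefiles` rather than Ninja, but the failure mode is identical: any single-config generator rejects `-A x64`. If a `CMAKE_GENERATOR` environment variable is set (a standard CMake mechanism since 3.15), clearing it before `go generate` lets CMake fall back to its Windows default, the Visual Studio generator, which does accept `-A`. A hedged PowerShell sketch — that the variable is actually set is an assumption, not something the log confirms:

```
# Check whether a default generator is being forced from the environment
Get-ChildItem Env:CMAKE_GENERATOR

# If it is set, clear it for this session and regenerate
Remove-Item Env:CMAKE_GENERATOR
$env:CGO_ENABLED="1"
go generate ./...
```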

@ZivenLu commented on GitHub (Jun 6, 2024):

I had the same build issue on Windows OS:

D:\GitHub\ollama>go version
go version go1.22.3 windows/amd64

D:\GitHub\ollama>gcc --version
gcc (Rev3, Built by MSYS2 project) 14.1.0
Copyright (C) 2024 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

D:\GitHub\ollama>set CGO_ENABLED="1"

D:\GitHub\ollama>go generate ./...
Submodule path '../llama.cpp': checked out '952d03dbead16e4dbdd1d3458486340673cc2465'
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
Updated 0 paths from the index
error: patch failed: examples/llava/clip.cpp:3
error: examples/llava/clip.cpp: patch does not apply
error: patch failed: llama.cpp:4756
error: llama.cpp: patch does not apply
error: patch failed: ggml-metal.m:1396
error: ggml-metal.m: patch does not apply
error: patch failed: examples/llava/clip.cpp:573
error: examples/llava/clip.cpp: patch does not apply
Checking for MinGW...

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Application     gcc.exe                                            0.0.0.0    D:\msys64\ucrt64\bin\gcc.exe
Application     mingw32-make.exe                                   0.0.0.0    D:\msys64\ucrt64\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (4.1s)
-- Generating done (3.3s)
-- Build files have been written to: D:/GitHub/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 50%] Built target ggml
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).
CMake Error at CMakeLists.txt:2 (project):
Generator

Ninja

does not support platform specification, but platform

x64

was specified.

CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
llm\generate\generate_windows.go:3: running "powershell": exit status 1

D:\GitHub\ollama>cmake --version
cmake version 3.29.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).

D:\GitHub\ollama>
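Besides the generator error, this log also shows `patch does not apply` failures, which usually means the vendored `llama.cpp` submodule still carries modifications from a previous `go generate` run. A sketch of resetting the submodule so the patches can apply cleanly — the `llm/llama.cpp` path is inferred from the log, so adjust it if the checkout layout differs:

```
# Discard local modifications left in the vendored llama.cpp tree,
# then re-sync the submodule to the commit the superproject expects
git -C llm/llama.cpp checkout -- .
git submodule update --init --recursive --force
```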


@dhiltgen commented on GitHub (Oct 23, 2024):

We're moving away from a CMake-based build system for the native code, which should make this easier to get working.

Please take a look at the new instructions: https://github.com/ollama/ollama/blob/main/docs/development.md#windows-1

If you run into problems, let us know and I'll reopen the issue and assist.

Reference: github-starred/ollama#49500