[GH-ISSUE #495] Build Error: Unable to Apply Patch in 'examples/server/server.cpp' during Docker Build Process #25987

Closed
opened 2026-04-22 01:52:02 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @avri-schneider on GitHub (Sep 8, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/495

Issue Description:

During the Docker build, applying patches to 'examples/server/server.cpp' fails with a "patch does not apply" error. Investigation showed that the patches being applied are already present in the submodule commits the project checks out.

Error Details:

...<--snip-->...
1.228 go: downloading github.com/go-playground/locales v0.14.1
3.836 Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'gguf'
3.845 Cloning into '/go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf'...
5.359 Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
8.199 From https://github.com/ggerganov/llama.cpp
8.199  * branch            53885d7256909ec3e2176cdc2477f3986c15ec69 -> FETCH_HEAD
8.226 Submodule path 'gguf': checked out '53885d7256909ec3e2176cdc2477f3986c15ec69'
8.227 error: patch failed: examples/server/server.cpp:1075
8.227 error: examples/server/server.cpp: patch does not apply
8.227 llm/llama.cpp/generate.go:8: running "git": exit status 1
------
Dockerfile:7
--------------------
   5 |
   6 |     COPY . .
   7 | >>> RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .
   8 |
   9 |     FROM alpine
--------------------
ERROR: failed to solve: process "/bin/sh -c go generate ./... && go build -ldflags '-linkmode external -extldflags \"-static\"' ." did not complete successfully: exit code: 1

Solution:

This error occurs because the patches have already been integrated upstream into the pinned submodule commits. A pull request has been submitted to resolve the issue: Pull Request #494 (https://github.com/jmorganca/ollama/pull/494).
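One general way to make a patch step tolerant of already-applied patches is to probe with git apply --check (and --reverse --check) before applying. A minimal sketch using a throwaway repo and a hypothetical apply_once helper — this is an illustration of the git technique, not ollama's actual fix:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
printf 'hello\n' > file.txt
git add file.txt
git -c user.email=a@b -c user.name=a commit -qm init

# Produce a patch that changes "hello" to "goodbye", then restore the file.
printf 'goodbye\n' > file.txt
git diff > ../fix.patch
git checkout -q -- file.txt

# Hypothetical helper: apply a patch only if it is not already applied.
apply_once() {
  if git apply --check "$1" 2>/dev/null; then
    git apply "$1"
    echo "applied $1"
  elif git apply --reverse --check "$1" 2>/dev/null; then
    echo "already applied, skipping $1"
  else
    echo "patch does not apply: $1" >&2
    return 1
  fi
}

apply_once ../fix.patch   # first run: patch applies cleanly
apply_once ../fix.patch   # second run: detected as already applied, skipped
```

The reverse check works because a patch that has already been applied can always be applied in reverse; a patch that fails both checks is genuinely broken and should still abort the build.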


@mxyng commented on GitHub (Sep 8, 2023):

Continuing the conversation from #494:

git submodule does not check out the latest commit. Submodules are pinned to a particular commit:

$ git submodule status
 9e232f0234073358e7031c1b8d7aa45020469a3b llm/llama.cpp/ggml (master-9e232f0)
 53885d7256909ec3e2176cdc2477f3986c15ec69 llm/llama.cpp/gguf (b1112-3-g53885d7)

You can tell from your outputs that's exactly what has been checked out.

Getting back to the problem, what commit and Dockerfile are you using and what platform are you building on?


@avri-schneider commented on GitHub (Sep 9, 2023):

@mxyng thanks for clarifying that git submodule is pinned to a particular commit.

I was trying to build the latest Dockerfile committed to the repo (https://github.com/jmorganca/ollama/blob/41e976edde8920db7db82217e920ab50c465b6ee/Dockerfile) by running docker build . -t ollama-orig:

Microsoft Windows [Version 10.0.22621.2134]
(c) Microsoft Corporation. All rights reserved.

C:\Users\avri>mkdir debug

C:\Users\avri>cd debug

C:\Users\avri\debug>git clone https://github.com/jmorganca/ollama
Cloning into 'ollama'...
remote: Enumerating objects: 4780, done.
remote: Counting objects: 100% (942/942), done.
remote: Compressing objects: 100% (281/281), done.
remote: Total 4780 (delta 779), reused 742 (delta 660), pack-reused 3838
Receiving objects: 100% (4780/4780), 5.04 MiB | 9.47 MiB/s, done.
Resolving deltas: 100% (2774/2774), done.

C:\Users\avri\debug>cd ollama

C:\Users\avri\debug\ollama>docker build . -t ollama-orig
[+] Building 18.8s (13/16)                                                                               docker:default
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 102B                                                                                  0.0s
 => [internal] load build definition from Dockerfile                                                               0.0s
 => => transferring dockerfile: 550B                                                                               0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                                   2.3s
 => [internal] load metadata for docker.io/library/golang:alpine                                                   2.3s
 => [auth] library/alpine:pull token for registry-1.docker.io                                                      0.0s
 => [auth] library/golang:pull token for registry-1.docker.io                                                      0.0s
 => [stage-1 1/4] FROM docker.io/library/alpine@sha256:7144f7bab3d4c2648d7e59409f15ec52a18006a128c733fcff20d3a4a5  0.0s
 => [stage-0 1/5] FROM docker.io/library/golang:alpine@sha256:96634e55b363cb93d39f78fb18aa64abc7f96d372c176660d7b  0.0s
 => [internal] load build context                                                                                  1.1s
 => => transferring context: 6.76MB                                                                                1.1s
 => CACHED [stage-0 2/5] WORKDIR /go/src/github.com/jmorganca/ollama                                               0.0s
 => CACHED [stage-0 3/5] RUN apk add --no-cache git build-base cmake                                               0.0s
 => [stage-0 4/5] COPY . .                                                                                         0.1s
 => ERROR [stage-0 5/5] RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .   15.2s
------
 > [stage-0 5/5] RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .:
0.611 go: downloading golang.org/x/term v0.10.0
0.637 go: downloading github.com/gin-gonic/gin v1.9.1
0.637 go: downloading github.com/gin-contrib/cors v1.4.0
0.638 go: downloading gonum.org/v1/gonum v0.13.0
0.638 go: downloading github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
0.639 go: downloading github.com/mattn/go-runewidth v0.0.14
0.639 go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
0.639 go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
0.639 go: downloading golang.org/x/crypto v0.10.0
0.643 go: downloading github.com/dustin/go-humanize v1.0.1
0.643 go: downloading github.com/olekukonko/tablewriter v0.0.5
0.643 go: downloading github.com/spf13/cobra v1.7.0
0.644 go: downloading github.com/chzyer/readline v1.5.1
0.962 go: downloading golang.org/x/sys v0.11.0
0.984 go: downloading github.com/rivo/uniseg v0.2.0
1.017 go: downloading github.com/spf13/pflag v1.0.5
1.701 go: downloading github.com/gin-contrib/sse v0.1.0
1.701 go: downloading github.com/ugorji/go/codec v1.2.11
1.701 go: downloading github.com/pelletier/go-toml/v2 v2.0.8
1.702 go: downloading golang.org/x/net v0.10.0
1.702 go: downloading github.com/go-playground/validator/v10 v10.14.0
1.702 go: downloading gopkg.in/yaml.v3 v3.0.1
1.702 go: downloading google.golang.org/protobuf v1.30.0
1.702 go: downloading github.com/mattn/go-isatty v0.0.19
1.963 go: downloading golang.org/x/text v0.10.0
2.541 go: downloading github.com/gabriel-vasile/mimetype v1.4.2
2.541 go: downloading github.com/leodido/go-urn v1.2.4
2.541 go: downloading github.com/go-playground/universal-translator v0.18.1
3.011 go: downloading github.com/go-playground/locales v0.14.1
5.808 Submodule 'llm/llama.cpp/ggml' (https://github.com/ggerganov/llama.cpp.git) registered for path 'ggml'
5.809 Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'gguf'
5.827 Cloning into '/go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml'...
7.273 Cloning into '/go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf'...
11.60 From https://github.com/ggerganov/llama.cpp
11.60  * branch            9e232f0234073358e7031c1b8d7aa45020469a3b -> FETCH_HEAD
11.65 Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
14.86 From https://github.com/ggerganov/llama.cpp
14.86  * branch            53885d7256909ec3e2176cdc2477f3986c15ec69 -> FETCH_HEAD
14.98 Submodule path 'gguf': checked out '53885d7256909ec3e2176cdc2477f3986c15ec69'
14.98 error: patch failed: examples/server/server.cpp:1075
14.98 error: examples/server/server.cpp: patch does not apply
14.98 llm/llama.cpp/generate.go:8: running "git": exit status 1
------
Dockerfile:7
--------------------
   5 |
   6 |     COPY . .
   7 | >>> RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .
   8 |
   9 |     FROM alpine
--------------------
ERROR: failed to solve: process "/bin/sh -c go generate ./... && go build -ldflags '-linkmode external -extldflags \"-static\"' ." did not complete successfully: exit code: 1

C:\Users\avri\debug\ollama>

Cloning and building my fork succeeds:

C:\Users\avri\debug>mkdir avri-schneider

C:\Users\avri\debug>cd avri-schneider

C:\Users\avri\debug\avri-schneider>git clone https://github.com/avri-schneider/ollama
Cloning into 'ollama'...
remote: Enumerating objects: 4419, done.
remote: Counting objects: 100% (688/688), done.
remote: Compressing objects: 100% (224/224), done.
remote: Total 4419 (delta 560), reused 515 (delta 463), pack-reused 3731
Receiving objects: 100% (4419/4419), 4.72 MiB | 8.12 MiB/s, done.
Resolving deltas: 100% (2587/2587), done.

C:\Users\avri\debug\avri-schneider>cd ollama

C:\Users\avri\debug\avri-schneider\ollama>docker build . -t ollama-fork
[+] Building 128.1s (15/15) FINISHED                                                                     docker:default
 => [internal] load build definition from Dockerfile                                                               0.0s
 => => transferring dockerfile: 550B                                                                               0.0s
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 102B                                                                                  0.0s
 => [internal] load metadata for docker.io/library/golang:alpine                                                   0.8s
 => [internal] load metadata for docker.io/library/alpine:latest                                                   0.8s
 => [stage-0 1/5] FROM docker.io/library/golang:alpine@sha256:96634e55b363cb93d39f78fb18aa64abc7f96d372c176660d7b  0.0s
 => [stage-1 1/4] FROM docker.io/library/alpine@sha256:7144f7bab3d4c2648d7e59409f15ec52a18006a128c733fcff20d3a4a5  0.0s
 => [internal] load build context                                                                                  0.1s
 => => transferring context: 6.40MB                                                                                0.1s
 => CACHED [stage-0 2/5] WORKDIR /go/src/github.com/jmorganca/ollama                                               0.0s
 => CACHED [stage-0 3/5] RUN apk add --no-cache git build-base cmake                                               0.0s
 => [stage-0 4/5] COPY . .                                                                                         0.0s
 => [stage-0 5/5] RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .        126.8s
 => CACHED [stage-1 2/4] RUN apk add --no-cache libstdc++                                                          0.0s
 => CACHED [stage-1 3/4] RUN addgroup ollama && adduser -D -G ollama ollama                                        0.0s
 => [stage-1 4/4] COPY --from=0 /go/src/github.com/jmorganca/ollama/ollama /bin/ollama                             0.1s
 => exporting to image                                                                                             0.1s
 => => exporting layers                                                                                            0.1s
 => => writing image sha256:7b71e727d174f1cf6c46a7713bf5828027e63e1fdde2b718350e6c0d66c39599                       0.0s
 => => naming to docker.io/library/ollama-fork                                                                     0.0s

What's Next?
  View summary of image vulnerabilities and recommendations → docker scout quickview

C:\Users\avri\debug\avri-schneider\ollama>

@avri-schneider commented on GitHub (Sep 9, 2023):

git submodule does not checkout latest. Submodules are pinned to a particular commit

@mxyng - isn't this line responsible for fetching the latest code from the submodule: https://github.com/jmorganca/ollama/blob/41e976edde8920db7db82217e920ab50c465b6ee/llm/llama.cpp/generate.go#L7


@mxyng commented on GitHub (Sep 9, 2023):

It does not. From the git-submodule documentation:

Update the registered submodules to match what the superproject expects by cloning missing submodules, fetching missing commits in submodules and updating the working tree of the submodules.

The key phrase here is "to match what the superproject expects".
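That "superproject expects" behavior can be demonstrated end-to-end in a scratch setup (throwaway repos; all names here are illustrative): even after the submodule's upstream moves ahead, a fresh clone of the superproject still checks out the recorded commit.

```shell
set -e
base=$(mktemp -d); cd "$base"

# An "upstream" repo that will play the role of the submodule.
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m v1
pinned=$(git -C upstream rev-parse HEAD)

# A superproject that pins the submodule at v1.
git init -q super
git -C super -c protocol.file.allow=always submodule add "$base/upstream" sub
git -C super -c user.email=a@b -c user.name=a commit -qm "pin sub at v1"

# Upstream moves ahead after the pin.
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m v2

# A fresh clone still gets the pinned commit, not upstream HEAD.
git -c protocol.file.allow=always clone -q --recurse-submodules "$base/super" clone
test "$(git -C clone/sub rev-parse HEAD)" = "$pinned" && echo "pinned commit checked out"
```

(The protocol.file.allow=always settings are only needed because the demo uses local file-path submodules, which newer git versions block by default.)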

The --force flag will additionally discard any local changes:

When running update (only effective with the checkout procedure), throw away local changes in submodules when switching to a different commit; and always run a checkout operation in the submodule, even if the commit listed in the index of the containing repository matches the commit checked out in the submodule.

You can validate this for yourself by running git submodule init && git submodule update --force:

$ git submodule init
Submodule 'llm/llama.cpp/ggml' (https://github.com/ggerganov/llama.cpp.git) registered for path 'llm/llama.cpp/ggml'
Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'llm/llama.cpp/gguf'
$ git submodule update --force
Cloning into '/Users/michaelyang/git/tmp/ollama/llm/llama.cpp/ggml'...
Cloning into '/Users/michaelyang/git/tmp/ollama/llm/llama.cpp/gguf'...
remote: Enumerating objects: 4926, done.
remote: Counting objects: 100% (4926/4926), done.
remote: Compressing objects: 100% (1485/1485), done.
remote: Total 4786 (delta 3425), reused 4614 (delta 3273), pack-reused 0
Receiving objects: 100% (4786/4786), 3.15 MiB | 11.81 MiB/s, done.
Resolving deltas: 100% (3425/3425), completed with 106 local objects.
From github.com:ggerganov/llama.cpp
 * branch            9e232f0234073358e7031c1b8d7aa45020469a3b -> FETCH_HEAD
Submodule path 'llm/llama.cpp/ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
remote: Enumerating objects: 5562, done.
remote: Counting objects: 100% (5562/5562), done.
remote: Compressing objects: 100% (1654/1654), done.
remote: Total 5399 (delta 3900), reused 5188 (delta 3714), pack-reused 0
Receiving objects: 100% (5399/5399), 3.46 MiB | 10.77 MiB/s, done.
Resolving deltas: 100% (3900/3900), completed with 120 local objects.
From github.com:ggerganov/llama.cpp
 * branch            53885d7256909ec3e2176cdc2477f3986c15ec69 -> FETCH_HEAD
Submodule path 'llm/llama.cpp/gguf': checked out '53885d7256909ec3e2176cdc2477f3986c15ec69'
$ git -C llm/llama.cpp/ggml status
HEAD detached at 9e232f0
nothing to commit, working tree clean
$ git -C llm/llama.cpp/gguf status
HEAD detached at 53885d7
nothing to commit, working tree clean

I'm not able to reproduce this locally (macOS) using the same commands:

$ git clone https://github.com/jmorganca/ollama.git
Cloning into 'ollama'...
Warning: Permanently added 'github.com' (ED25519) to the list of known hosts.
remote: Enumerating objects: 4780, done.
remote: Counting objects: 100% (960/960), done.
remote: Compressing objects: 100% (277/277), done.
remote: Total 4780 (delta 796), reused 764 (delta 682), pack-reused 3820
Receiving objects: 100% (4780/4780), 4.91 MiB | 11.10 MiB/s, done.
Resolving deltas: 100% (2789/2789), done.
$ cd ollama
$ docker build -t test .
[+] Building 104.8s (15/15) FINISHED                                                                                                                                                   docker:desktop-linux
 => [internal] load build definition from Dockerfile                                                                                                                                                   0.0s
 => => transferring dockerfile: 529B                                                                                                                                                                   0.0s
 => [internal] load .dockerignore                                                                                                                                                                      0.0s
 => => transferring context: 97B                                                                                                                                                                       0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                                                                                                                       0.0s
 => [internal] load metadata for docker.io/library/golang:alpine                                                                                                                                       1.5s
 => [stage-0 1/5] FROM docker.io/library/golang:alpine@sha256:96634e55b363cb93d39f78fb18aa64abc7f96d372c176660d7b8b6118939d97b                                                                         0.0s
 => [internal] load build context                                                                                                                                                                      0.3s
 => => transferring context: 6.63MB                                                                                                                                                                    0.3s
 => [stage-1 1/4] FROM docker.io/library/alpine                                                                                                                                                        0.0s
 => CACHED [stage-0 2/5] WORKDIR /go/src/github.com/jmorganca/ollama                                                                                                                                   0.0s
 => CACHED [stage-0 3/5] RUN apk add --no-cache git build-base cmake                                                                                                                                   0.0s
 => [stage-0 4/5] COPY . .                                                                                                                                                                             0.1s
 => [stage-0 5/5] RUN go generate ./... && go build -ldflags '-linkmode external -extldflags "-static"' .                                                                                            102.6s
 => CACHED [stage-1 2/4] RUN apk add --no-cache libstdc++                                                                                                                                              0.0s
 => CACHED [stage-1 3/4] RUN addgroup ollama && adduser -D -G ollama ollama                                                                                                                            0.0s
 => CACHED [stage-1 4/4] COPY --from=0 /go/src/github.com/jmorganca/ollama/ollama /bin/ollama                                                                                                          0.0s
 => exporting to image                                                                                                                                                                                 0.0s
 => => exporting layers                                                                                                                                                                                0.0s
 => => writing image sha256:66ffa9b63111271ea0aea80916a0bce582214aaf0a3bd84e6292ea2a1c57efd8                                                                                                           0.0s
 => => naming to docker.io/library/test                                                                                                                                                                0.0s

What's Next?
  View summary of image vulnerabilities and recommendations → docker scout quickview
$ docker run --rm test
Couldn't find '/home/ollama/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPnXX/0kyaPr61yNnFkJY5dePdwh6gNvTszEg6VcZV7i

[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /                         --> github.com/jmorganca/ollama/server.Serve.func1 (4 handlers)
[GIN-debug] HEAD   /                         --> github.com/jmorganca/ollama/server.Serve.func2 (4 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/jmorganca/ollama/server.PullModelHandler (4 handlers)
[GIN-debug] POST   /api/generate             --> github.com/jmorganca/ollama/server.GenerateHandler (4 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/jmorganca/ollama/server.EmbeddingHandler (4 handlers)
[GIN-debug] POST   /api/create               --> github.com/jmorganca/ollama/server.CreateModelHandler (4 handlers)
[GIN-debug] POST   /api/push                 --> github.com/jmorganca/ollama/server.PushModelHandler (4 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/jmorganca/ollama/server.CopyModelHandler (4 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (4 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/jmorganca/ollama/server.DeleteModelHandler (4 handlers)
[GIN-debug] POST   /api/show                 --> github.com/jmorganca/ollama/server.ShowModelHandler (4 handlers)
2023/09/09 21:46:12 routes.go:534: Listening on [::]:11434

An explanation of why these patches are important and shouldn't be removed:

llama.cpp recently introduced a breaking new file format, GGUF. In order not to break our users and to maintain support for the models they've already pulled, we support both GGML and GGUF, which means supporting the most recent GGML version of llama.cpp.

However, there have been a few critical fixes introduced after the switch to GGUF that we've backported to GGML. These are necessary to support models like codellama 34b, as well as for stability. Removing these patches will have adverse effects for existing users of GGML-format models.

Therefore, these patches will be necessary until GGML support is EOL, whenever that may be.

@avri-schneider commented on GitHub (Sep 10, 2023):

@mxyng Thanks for the detailed explanation.

I am seeing that indeed, running the commands locally one step at a time applies the patches as expected, but when running the `docker build` command, the `git submodule init` and the following `//go:generate git submodule update --force ggml gguf` commands leave the ggml and gguf folders empty (on my Windows machine), so the subsequent `git -C ggml apply ../ggml/...` commands fail.

So I attempted to run the two commands manually and still I get a failure, but this time the folders are not empty and are checked-out at the expected commits:

3.896 Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
3.946 Submodule path 'gguf': checked out '53885d7256909ec3e2176cdc2477f3986c15ec69'
3.947 error: patch failed: examples/server/server.cpp:1075
3.947 error: examples/server/server.cpp: patch does not apply
3.948 llm/llama.cpp/generate.go:8: running "git": exit status 1

So I went on to run the `git -C ggml apply` commands manually and they succeeded, removed the apply commands from the `generate.go` file, and now the `docker build` succeeds. Any idea why git leaves the submodule folders empty even though it claims they are checked out when running `docker build` on Windows, and, similarly, why the apply commands still fail even when the submodule folders are not empty and are checked out at the expected commits?
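A quick way to tell whether a failing patch has in fact already been applied is to compare `git apply --check` with `git apply --reverse --check`. A self-contained sketch in a throwaway repo (the file names here are made up for the demo):

```shell
# Demonstrate distinguishing "not yet applied" from "already applied".
# Everything happens in a throwaway repo under mktemp; no real paths used.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name demo

printf 'old line\n' > server.cpp
git add server.cpp
git commit -qm 'initial'

# Record a change as a patch, then revert the working tree.
printf 'new line\n' > server.cpp
git diff > fix.patch
git checkout -q -- server.cpp

git apply --check fix.patch && echo 'clean tree: patch applies'

# Apply it for real, as upstream llama.cpp did with the ollama patches.
git apply fix.patch

# Re-applying now fails with "patch does not apply" -- the same error as
# in this issue -- while the reverse check succeeds, proving the change
# is already present.
git apply --check fix.patch 2>/dev/null || echo 'patch already applied'
git apply --reverse --check fix.patch && echo 'reverse check ok'
```

If `--reverse --check` succeeds where `--check` fails, the submodule already contains the change.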

@mxyng commented on GitHub (Sep 13, 2023):

I haven't been able to get access to a Windows system to reproduce this. It may be some time before I can get one, though I don't recall running into issues the last time I built ollama on Windows.

In the meantime, what version of git are you using? It's possible an older version of git doesn't have the same semantics. This is unlikely, though, since the version inside the docker build should be fairly up to date.
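For reference, a small sketch for reporting the local git version in a comparable form (the "2" threshold below is only an illustrative assumption, not a documented cutoff):

```shell
# Report the local git version; very old gits can differ in
# `git apply` / `git submodule` behavior. The major-version check is
# only an illustrative guess, not a documented cutoff.
ver=$(git --version | awk '{print $3}')
major=${ver%%.*}
if [ "$major" -ge 2 ]; then
  echo "git $ver: modern enough in most cases"
else
  echo "git $ver: consider upgrading before debugging further"
fi
```

Inside the image, the same command can be run by temporarily adding a `RUN git --version` step to the Dockerfile and comparing the two outputs.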

@avri-schneider commented on GitHub (Sep 18, 2023):

It seems to be a weird Windows + Git filesystem issue, as it works fine on Windows when using WSL, but I find that running Ollama on Docker is much slower - is that expected?
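One plausible (but unconfirmed) explanation for a Windows-only failure like this is line-ending conversion: with `core.autocrlf=true`, a common default on Windows, checked-out files carry CRLF endings, so an LF-only patch no longer matches its context. A self-contained reproduction sketch:

```shell
# Show that CRLF line endings in the working tree make an LF patch fail
# with the exact "patch does not apply" error. Throwaway repo only.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name demo

printf 'old\n' > f.txt
git add f.txt
git commit -qm 'initial'
printf 'new\n' > f.txt
git diff > f.patch
git checkout -q -- f.txt

git apply --check f.patch && echo 'LF endings: patch applies'

# Simulate autocrlf by rewriting the file with a CRLF line ending.
printf 'old\r\n' > f.txt
git apply --check f.patch 2>/dev/null || echo 'CRLF endings: patch does not apply'
```

If this matches the symptom, cloning with `git clone --config core.autocrlf=false <url>` (or setting `git config --global core.autocrlf false` before cloning) may be worth trying.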

@mxyng commented on GitHub (Sep 26, 2023):

There are now Ollama Docker images hosted in Docker Hub built with CUDA if you're interested in just running Ollama in Docker.

I find that running Ollama on Docker is much slower - is that expected?

It may be slower due to Docker capabilities and virtualization layers.

In particular, comparing native macOS and macOS Docker Desktop is unfair because GPU acceleration isn't available inside the container. On my M1 MBP, orca-mini 3b produces ~20 tokens/s natively with Metal, while the container only produces 12 tokens/s.

The difference between native Linux and containers is closer provided both are using GPU.

Windows Docker Desktop might be slower because Docker is actually running on a lightweight VM, as opposed to the bare metal of a Linux system.

@bazfp commented on GitHub (Oct 7, 2023):

Having the same git problem building the root `Dockerfile` on WSL using `docker build . -t ollama`.

Should be the same as a straight-up Linux build, no?

40.72 go: downloading github.com/mattn/go-isatty v0.0.19
41.05 go: downloading golang.org/x/net v0.10.0
60.40 go: downloading github.com/go-playground/validator/v10 v10.14.0
60.41 go: downloading github.com/pelletier/go-toml/v2 v2.0.8
60.41 go: downloading google.golang.org/protobuf v1.30.0
87.66 Submodule 'llm/llama.cpp/ggml' (https://github.com/ggerganov/llama.cpp.git) registered for path 'ggml'
87.66 Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'gguf'
87.68 Cloning into '/go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml'...
108.1 From https://github.com/ggerganov/llama.cpp
108.2 Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
108.2 error: patch failed: examples/server/server.cpp:1075
------
Dockerfile:16
  15 |     ENV GOFLAGS=$GOFLAGS
  16 | >>> RUN /usr/local/go/bin/go generate ./... \
  18 |
--------------------

I would like to use a different CUDA version than the version hosted on dockerhub as the dockerhub version is too new.
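That would mean editing the Dockerfile itself. As a hypothetical sketch only (the `ARG` name and image tags here are invented for illustration and are not taken from the actual ollama Dockerfile), the CUDA base image could be parameterized:

```dockerfile
# Hypothetical sketch -- ARG name and tags are illustrative, not real.
ARG CUDA_IMAGE=nvidia/cuda:11.8.0-devel-ubuntu22.04
FROM ${CUDA_IMAGE} AS builder
# ...rest of the build stage unchanged...
```

Then something like `docker build --build-arg CUDA_IMAGE=nvidia/cuda:11.4.3-devel-ubuntu20.04 -t ollama .` would select an older toolkit, assuming such a tag exists for the chosen distro.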

@jmorganca commented on GitHub (Oct 28, 2023):

Hey folks this should be fixed now, but feel free to re-open if it's not.

@skye0402 commented on GitHub (Dec 6, 2023):

@jmorganca Having the same problem as above, both under Windows and WSL2 Ubuntu. Any workaround to enable a build?

5.417 Submodule 'llm/llama.cpp/ggml' (https://github.com/ggerganov/llama.cpp.git) registered for path 'ggml'
5.418 Submodule 'llm/llama.cpp/gguf' (https://github.com/ggerganov/llama.cpp.git) registered for path 'gguf'
5.432 Cloning into '/go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml'...
9.715 From https://github.com/ggerganov/llama.cpp
9.715  * branch            9e232f0234073358e7031c1b8d7aa45020469a3b -> FETCH_HEAD
9.750 Submodule path 'ggml': checked out '9e232f0234073358e7031c1b8d7aa45020469a3b'
9.754 error: patch failed: examples/server/server.cpp:1075
9.754 error: examples/server/server.cpp: patch does not apply
9.754 llm/llama.cpp/generate_linux.go:6: running "git": exit status 1
------
Dockerfile:14
--------------------
  13 |     ENV GOFLAGS=$GOFLAGS
  14 | >>> RUN /usr/local/go/bin/go generate ./... \
  15 | >>>     && /usr/local/go/bin/go build .
  16 |     
--------------------
ERROR: failed to solve: process "/bin/sh -c /usr/local/go/bin/go generate ./...     && /usr/local/go/bin/go build ." did not complete successfully: exit code: 1

I've cloned the latest version of ollama and use the latest Docker Desktop.
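Until a fix lands, one possible workaround (a sketch, not the project's actual mechanism) is to make patch application idempotent, so a submodule that already contains a change is skipped instead of failing the build:

```shell
# apply_patch REPO_DIR PATCH_FILE -- apply only if not already applied.
apply_patch() {
  if git -C "$1" apply --reverse --check "$2" 2>/dev/null; then
    echo "skip: $2 already applied in $1"
  else
    git -C "$1" apply "$2"
  fi
}

# Tiny self-contained demo in a throwaway repo.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q repo
git -C repo config user.email demo@example.com
git -C repo config user.name demo
printf 'old\n' > repo/f.txt
git -C repo add f.txt
git -C repo commit -qm 'initial'
printf 'new\n' > repo/f.txt
git -C repo diff > fix.patch
git -C repo checkout -q -- f.txt

apply_patch repo "$PWD/fix.patch"   # first run applies the change
apply_patch repo "$PWD/fix.patch"   # second run is a harmless skip
```

A guard like this could wrap the `git -C ggml apply ...` steps so the build survives submodules where the fixes are already upstream.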

Reference: github-starred/ollama#25987