Error running latest git pull on Pi #4366

Closed
opened 2025-11-12 12:16:53 -06:00 by GiteaMirror · 13 comments
Owner

Originally created by @bkev on GitHub (Sep 20, 2024).

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi,
I've tried to pull and build Ollama from git as I have done in the past, but the latest version doesn't seem to work when installed.

If I try to run a model I get:

```
ollama run phi3.5:latest
Error: no suitable llama servers found
```

In the log I get:

```
INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[]
...
source=sched.go:428 msg="NewLlamaServer failed" model=/usr/share/ollama/.ollama/models/............
```

If I install using the auto-install script, the log shows:

```
Dynamic LLM libraries" runners="[cpu cuda_v11 cuda_v12]"
```

I've tried adding

```
Environment="OLLAMA_LLM_LIBRARY=cpu cuda_v11 cuda_v12"
```

to the service file in the git build, but this doesn't seem to make any difference and the log still shows:

```
Dynamic LLM libraries" runners=[]
```

Not sure if I'm looking in the wrong place or not, but I can't get this to work anymore.

OS

Linux

GPU

Other

CPU

Other

Ollama version

No response

GiteaMirror added the bug and linux labels 2025-11-12 12:16:53 -06:00

@bkev commented on GitHub (Sep 22, 2024):

I've done a bit of testing. Not sure if I've done it right, but if I run

```
git reset --hard fef257c5
```

and build Ollama, the build works.

If I instead run

```
git reset --hard cd5c8f64
```

and build, it fails.

I'm not sure if I've done that right, and I will test where I can, but it seems it's broken for me from this point.

I get the error:

```
ollama run phi3.5:latest
Error: no suitable llama servers found
```


@dhiltgen commented on GitHub (Sep 22, 2024):

Can you clarify your setup and how you're building? From the output of the available runners when it is working, I think you're building on ARM, not x86. Is that correct?

Did `go generate ./...` work, or did it fail? And if it worked, did it report the runners it built?


@bkev commented on GitHub (Sep 22, 2024):

Hi, thanks for the response.

It is being built on ARM.

`go generate ./...` does work in both cases.

So, for the latest version I run:

```
git reset --hard
git pull
go generate ./...
go build .
```

From the `go generate ./...` log I get this; are these the bits you need?

```
+ ARCH=arm64
...........
+ DIST_BASE=../../dist/linux-arm64/
+ PAYLOAD_BASE=../../build/linux/arm64
.................
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
...............
+ cmake --build ../build/linux/arm64_static --target llama --target ggml -j8
........
[ 55%] Built target ggml
[ 55%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 66%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
............
+ cmake --build ../build/linux/arm64/cpu --target ollama_llama_server -j8
[  0%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.39.5")
[ 25%] Built target ggml
[ 31%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 31%] Built target build_info
[ 37%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 43%] Linking CXX shared library libllama.so
[ 62%] Built target llama
[ 68%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o
[ 68%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o
[ 75%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 75%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 81%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o
[ 81%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[ 81%] Built target llava
[ 81%] Linking CXX static library libcommon.a
[ 93%] Built target common
[100%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o
[100%] Linking CXX executable ../bin/ollama_llama_server
[100%] Built target ollama_llama_server
.........................
```

Anything to do with runners:

```
+ RUNNER_BASE=../../dist/linux-arm64//lib/ollama/runners
..........
+ RUNNER_BASE=../../dist/linux-arm64//lib/ollama/runners
.......
+ RUNNER=cpu
....
```

After this I run `go build .`, then:

```
sudo cp -f ollama /usr/local/bin
sudo chmod +x /usr/local/bin/ollama
```

At this point, if I run:

```
ollama run phi3.5:latest
Error: no suitable llama servers found
```

Different Commit

So, now if I do this:

```
git reset --hard
git pull
git reset --hard fef257c5
```

I get:

```
HEAD is now at ad935f45 examples: use punkt_tab instead of punkt (#6907)
Already up to date.
HEAD is now at fef257c5 examples: updated requirements.txt for privategpt example
```

Then `go generate ./...`. The output is:

```
+ ARCH=arm64
..................
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
..................
+ DIST_BASE=../../dist/linux-arm64/
......
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
.........
+ cmake --build ../build/linux/arm64_static --target llama --target ggml -j8
[ 55%] Built target ggml
[ 55%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 66%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
.............
+ cmake --build ../build/linux/arm64/cpu --target ollama_llama_server -j8
[  0%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.39.5")
[ 25%] Built target ggml
[ 25%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.39.5")
[ 31%] Built target build_info
[ 37%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 43%] Linking CXX shared library libllama.so
[ 62%] Built target llama
[ 68%] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.o
[ 68%] Building CXX object examples/llava/CMakeFiles/llava.dir/clip.cpp.o
[ 68%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 68%] Building CXX object common/CMakeFiles/common.dir/train.cpp.o
[ 81%] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o
[ 81%] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
[ 81%] Built target llava
[ 81%] Linking CXX static library libcommon.a
[ 93%] Built target common
[100%] Building CXX object ext_server/CMakeFiles/ollama_llama_server.dir/server.cpp.o
[100%] Linking CXX executable ../bin/ollama_llama_server
[100%] Built target ollama_llama_server
............
```

I can't find anything to do with runners in this generate. But this is the one that works:

```
ollama run phi3.5:latest
>>> Send a message (/? for help)
```


@bkev commented on GitHub (Sep 22, 2024):

I see this in the log where it fails; not sure if this is anything or not?

```
error="decompress payload linux/arm64/cpu/ollama_llama_server.gz.gz: EOF"
```
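That EOF is consistent with the zero-byte `.gz.gz` files that show up later in this thread: a gzip stream needs at least a header, so decompressing a completely empty file fails straight away. A quick illustration with a throwaway file (standard `gzip`; the filename is just borrowed from the log line):

```shell
# A zero-byte "archive" has no gzip header, so integrity testing /
# decompression fails immediately with an end-of-file style error.
tmp=$(mktemp -d)
: > "$tmp/ollama_llama_server.gz.gz"   # empty file, like the stale build artifacts
if gzip -t "$tmp/ollama_llama_server.gz.gz" 2>/dev/null; then
  echo "decompressed ok"
else
  echo "decompress failed on the empty archive"
fi
```

This only illustrates the symptom; why the empty `.gz.gz` files were produced at all is the incremental-build issue discussed below.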


@dhiltgen commented on GitHub (Sep 24, 2024):

The double `.gz` does seem like the problem. We probably have a bug related to incremental builds transitioning across some build changes we've made recently. As a workaround, you could try `rm -r build ; git checkout build`, which should zap the incremental build state for the payloads, then try `go generate ./... && go build .` again. What you should end up with after the generate is:

```
% ls -lh ./build/linux/arm64/cpu/ollama_llama_server.gz
-rw-r--r-- 1 root root 680K Sep 24 16:34 ./build/linux/arm64/cpu/ollama_llama_server.gz
```
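The `git checkout build` half of that workaround appears to rely on `build/` containing tracked placeholder files, so deleting the directory and checking it out again restores a clean layout. A minimal simulation on a throwaway repository (the `.gitkeep` skeleton here is hypothetical; in practice you would run the two commands at the root of your Ollama checkout):

```shell
# Simulate: stale incremental artifacts appear under build/, then
# 'rm -r build' wipes them and 'git checkout -- build' restores the
# committed skeleton. All paths below are illustrative.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q .
mkdir -p build/linux
: > build/linux/.gitkeep
git add . && git -c user.email=you@example.com -c user.name=you commit -qm init
: > build/linux/ollama_llama_server.gz.gz   # stale zero-byte leftover
rm -r build                                 # zap incremental build state
git checkout -- build                       # restore the tracked layout
ls -A build/linux
```

After the checkout, only the tracked files are back; the untracked zero-byte leftover is gone, which is the clean slate the workaround is after.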

@bkev commented on GitHub (Sep 24, 2024):

I've tried that, and apologies if I'm doing something wrong.

But if I put it back to the latest commit, delete the build folder, and try again, I end up with this:

```
ls -lh ./build/linux/arm64/cpu
total 1.7M
329K Sep 24 19:54 libggml.so.gz
   0 Sep 24 19:54 libggml.so.gz.gz
578K Sep 24 19:54 libllama.so.gz
   0 Sep 24 19:54 libllama.so.gz.gz
683K Sep 24 19:54 ollama_llama_server.gz
   0 Sep 24 19:54 ollama_llama_server.gz.gz
```

If I build the commit that works, I don't get a build folder at all:

```
ls -lh ./build/linux/arm64/cpu
ls: cannot access './build/linux/arm64/cpu': No such file or directory
```

Am I doing something wrong?

After the build has completed I do this:

```
sudo cp -f ollama /usr/local/bin
sudo chmod +x /usr/local/bin/ollama
```

I then use a service file from the actual installer and restart it.
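As an aside, the zero-byte `.gz.gz` leftovers in a listing like the one above could also be cleared selectively rather than deleting the whole tree. A sketch on a throwaway directory (paths copied from the listing; `-empty -delete` are GNU/BSD findutils predicates):

```shell
# Recreate the state from the listing (real .gz payloads plus empty
# .gz.gz leftovers), then delete only the empty double-compressed files.
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/build/linux/arm64/cpu"
printf 'payload' > "$demo/build/linux/arm64/cpu/ollama_llama_server.gz"
: > "$demo/build/linux/arm64/cpu/ollama_llama_server.gz.gz"
find "$demo/build" -name '*.gz.gz' -empty -delete
ls "$demo/build/linux/arm64/cpu"
```

Whether deleting the leftovers alone is sufficient depends on the build scripts, so the clean rebuild described in this thread remains the safer option.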


@bkev commented on GitHub (Sep 24, 2024):

So, I've got this working.

I deleted the entire git folder, cloned it again, built it, and now it works.

I've no idea why it didn't before. I don't make changes, so I don't know what had happened to my copy, but it seems OK now.

I still get the following, but it does work now:

```
329K Sep 24 20:27 libggml.so.gz
   0 Sep 24 20:27 libggml.so.gz.gz
578K Sep 24 20:27 libllama.so.gz
   0 Sep 24 20:27 libllama.so.gz.gz
683K Sep 24 20:27 ollama_llama_server.gz
   0 Sep 24 20:27 ollama_llama_server.gz.gz
```


@dhiltgen commented on GitHub (Sep 24, 2024):

> I've deleted the entire git folder, cloned it again and then built it and now it works.

We've been moving things around in preparation for #5034, so directories have changed, and the build likely doesn't clean up artifacts from builds done before the new layout. This would explain why a clean repo clears things up. Sorry about that.

If you're still getting these double-`.gz` files in your fresh repo, did you switch between old and new commits? If so, that's likely the same incremental build artifact leakage between the two layouts. Another workaround you can use to clear incremental state is `rm -r llm/build` before running `go generate ./...`, which should eliminate those extraneous files.

Note: we'll be switching to a makefile-based build in the near future, which will do a better job of cleaning up after itself.


@bkev commented on GitHub (Sep 27, 2024):

Thanks for the info.

Happy to test when the makefile build is implemented, if there are instructions on what to do :)


@dhiltgen commented on GitHub (Oct 17, 2024):

Instructions for building the new Go runner with the makefile-based build are explained here: https://github.com/ollama/ollama/blob/main/docs/development.md#transition-to-go-runner


@bkev commented on GitHub (Oct 21, 2024):

Thanks @dhiltgen.
I've given this a try, and I'm happy to say the build works very well for me; the resulting binary works fine too.
I'm going to use this method now and see how I get on 👍


@dhiltgen commented on GitHub (Oct 22, 2024):

It sounds like we can close this now. Happy to hear the new build approach is working well.


@bkev commented on GitHub (Oct 23, 2024):

It's working well, thanks @dhiltgen.
It's also much faster (after the initial build) when pulling changes from git and rebuilding.

Reference: github-starred/ollama-ollama#4366