[GH-ISSUE #721] Termux? #62370

Closed
opened 2026-05-03 08:29:23 -05:00 by GiteaMirror · 24 comments

Originally created by @GameOverFlowChart on GitHub (Oct 6, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/721

Can this run in Termux, and if so, can we get instructions to install and run it there?

@platinaCoder commented on GitHub (Oct 18, 2023):

I tried to install it manually, but you need a rooted phone. Without root it's not possible with the normal installation; I'll keep trying and report back.

@ManzoniGiuseppe commented on GitHub (Oct 19, 2023):

I do not have a rooted phone and proot doesn't work, so I tried the following. I downloaded the Debian rootfs from the proot-distro releases on GitHub:

```
curl -L https://github.com/termux/proot-distro/releases/download/v3.12.1/debian-aarch64-pd-v3.12.1.tar.xz -o debian-aarch64-pd-v3.12.1.tar.xz
```

and extracted it into `~/debian`. I downloaded the ollama executable by following the manual install instructions:

```
curl -L https://ollama.ai/download/ollama-linux-arm64 -o ../usr/bin/ollama
chmod +x ../usr/bin/ollama
```

The problem is that the dynamic linker and the shared libraries referenced by ollama are actually in a subdirectory (where the Debian rootfs lives), so I used patchelf to fix their locations:

```
patchelf --set-interpreter /data/data/com.termux/files/home/debian/lib/ld-linux-aarch64.so.1 ../usr/bin/ollama
patchelf --set-rpath /data/data/com.termux/files/home/debian/lib/aarch64-linux-gnu/ ../usr/bin/ollama
```

With this I get `ollama` to start, but it immediately dies with `Segmentation fault`, even with the `--help` argument. I don't know whether I need to change something else too, or whether ollama just crashes when the environment differs from what it expects (which would be a bug), but I'm posting this in case anyone knows more or finds it useful.
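
A hedged sanity check before concluding that ollama itself is at fault: verify that patchelf actually rewrote the interpreter and rpath, and that the Debian dynamic linker exists at the expected path.

```
# print the interpreter and rpath the patched binary will use
patchelf --print-interpreter ../usr/bin/ollama
patchelf --print-rpath ../usr/bin/ollama

# confirm the Debian dynamic linker was actually extracted to that location
ls -l /data/data/com.termux/files/home/debian/lib/ld-linux-aarch64.so.1

# readelf shows the same interpreter information without patchelf
readelf -l ../usr/bin/ollama | grep -i interpreter
```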

@platinaCoder commented on GitHub (Oct 21, 2023):

Thanks for the info, interesting. I tried a lot to get it to work, but even building from source with Go seemed to fail because of a `/bin/*/usr/bin/something` error.

The easiest way to run a model on a smartphone is to build llama.cpp. It works fine for me and gives me a lot of freedom with extensions etc. The downside is the ease of deploying a model, which ollama does very well. I might return to this and try to get it to work... I even tried changing the install script, without success.
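
For reference, a minimal sketch of the llama.cpp route in Termux, assuming the packages below are available in the Termux repositories and that upstream's CMake workflow applies (the binary name has changed over time, so check the llama.cpp README; the model path here is a hypothetical GGUF file you have already downloaded):

```
# build tools (assumed package names in the Termux repos)
pkg install git cmake clang

# clone and build llama.cpp with CMake
git clone --depth 1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# run against a local GGUF model (hypothetical path; older trees built ./main instead of llama-cli)
./build/bin/llama-cli -m ~/models/tinyllama.gguf -p "Hello"
```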

@romanovj commented on GitHub (Oct 21, 2023):

No problems on sdm662 with 4gb ram, Android13, native termux.

![Screenshot_20231021-233659_Termux](https://github.com/jmorganca/ollama/assets/105647092/0ed4ab77-4cf2-48ba-9a72-32fe3510188f)

```
git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
go generate ./...
go build .
./ollama serve &
./ollama run orca-mini
```
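
Note that the build above assumes the Go toolchain and the native tools that `go generate ./...` invokes (it compiles llama.cpp via CMake and a C compiler) are already present. A hedged sketch of the Termux prerequisites:

```
# assumed prerequisite packages in Termux before the clone/build steps above
pkg update
pkg install golang git cmake clang
```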

@platinaCoder commented on GitHub (Oct 22, 2023):

> No problems on sdm662 with 4gb ram, Android13, native termux.
> ![Screenshot_20231021-233659_Termux](https://github.com/jmorganca/ollama/assets/105647092/0ed4ab77-4cf2-48ba-9a72-32fe3510188f)
>
> ```
> git clone --depth 1 https://github.com/jmorganca/ollama
> cd ollama
> go generate ./...
> go build .
> ./ollama serve &
> ./ollama run orca-mini
> ```

This worked! Awesome! I totally forgot to run `go generate`. Runs fine now.

@mxyng commented on GitHub (Oct 25, 2023):

There are currently no official plans to build an Android release, so I'm going to close this for now. Anyone interested can follow the steps described above to build from source.

@ghost commented on GitHub (Jan 4, 2024):

I get an error here:

```
~/ollama $ go build .
# github.com/jmorganca/ollama/llm
cgo-gcc-prolog:153:33: warning: unused variable '_cgo_a' [-Wunused-variable]
cgo-gcc-prolog:165:33: warning: unused variable '_cgo_a' [-Wunused-variable]
# github.com/jmorganca/ollama/llm
dynamic_shim.c:62:15: error: use of undeclared identifier 'RTLD_DEEPBIND'
dynamic_shim.c:8:54: note: expanded from macro 'LOAD_LIBRARY'
```
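
The `RTLD_DEEPBIND` error comes from Android's Bionic libc, which, unlike glibc, does not define that flag in `dlfcn.h`. A hedged way to confirm this on your device, assuming the Termux headers live under `$PREFIX/include`:

```
# RTLD_DEEPBIND is a glibc extension; on Bionic/Termux this grep should print nothing
grep RTLD_DEEPBIND $PREFIX/include/dlfcn.h || echo "RTLD_DEEPBIND not defined here"
```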

@ghost commented on GitHub (Jan 4, 2024):

@codrutpopescu I was actually just about to share this patch:

```diff
diff --git a/llm/dynamic_shim.c b/llm/dynamic_shim.c
index 8b5d67c..2660eb9 100644
--- a/llm/dynamic_shim.c
+++ b/llm/dynamic_shim.c
@@ -5,7 +5,11 @@
 
 #ifdef __linux__
 #include <dlfcn.h>
+#ifdef __TERMUX__
+#define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_LAZY)
+#else
 #define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_DEEPBIND)
+#endif
 #define LOAD_SYMBOL(handle, sym) dlsym(handle, sym)
 #define LOAD_ERR() dlerror()
 #define UNLOAD_LIBRARY(handle) dlclose(handle)
```

This allows it to compile under Termux but may break the GPU-accelerated modules (perhaps only ROCm?). In my case this is sufficient, since ollama doesn't presently support Vulkan or OpenCL directly anyway. CPU-only performance is pretty good on my Pixel 7 Pro with small models like _tinyllama_.
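
A hedged way to double-check that the `#ifdef __TERMUX__` guard in this patch will actually take effect, assuming Termux's clang predefines that macro as the patch expects:

```
# dump clang's predefined macros and look for the Termux marker;
# if nothing is printed, the #ifdef __TERMUX__ branch would never be compiled in
echo | clang -dM -E - | grep -i termux
```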

@ghost commented on GitHub (Jan 4, 2024):

I have managed to compile it using

```
git clone -b v0.1.16 --depth 1 https://github.com/jmorganca/ollama
```

I received some warnings, but at least it finished building.
Let me know when the patch is applied on GitHub and I will try to rebuild it.
I am using a Galaxy Tab S9 Ultra, which has 16 GB of memory and a Snapdragon 8 Gen 2.

@ghost commented on GitHub (Jan 4, 2024):

Since there's no official support for Android (or Termux) planned, I didn't bother to submit a pull request but I can if the maintainers are open to it. Until then you'll have to apply the patch yourself or build the outdated version(s).

@ghost commented on GitHub (Jan 5, 2024):

Sorry, how do I apply this patch you created?

@ghost commented on GitHub (Jan 6, 2024):

  1. Clone a fresh _origin/main_ branch and navigate to the new directory: `git clone --depth 1 https://github.com/jmorganca/ollama && cd ollama`
  2. Create a new file named `ollama_termux_dynamic_shim.patch.txt` and paste the contents from my prior [comment](https://github.com/jmorganca/ollama/issues/721#issuecomment-1877844143), **or** download [ollama_termux_dynamic_shim.patch.txt](https://github.com/jmorganca/ollama/files/13847788/ollama_termux_dynamic_shim.patch.txt): `wget https://github.com/jmorganca/ollama/files/13847788/ollama_termux_dynamic_shim.patch.txt`
  3. Apply the patch: `patch -p1 < ollama_termux_dynamic_shim.patch.txt`
  4. Continue the build as per the previous [comments](https://github.com/jmorganca/ollama/issues/721#issuecomment-1773916521) in this ticket.

@ghost commented on GitHub (Jan 6, 2024):

Worked like a charm. Thank you very much!!!
Here are the steps for anyone who is interested:

```
pkg install golang
git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
curl -LJO https://github.com/jmorganca/ollama/files/13847788/ollama_termux_dynamic_shim.patch.txt
patch -p1 < ollama_termux_dynamic_shim.patch.txt
go generate ./...
go build .
./ollama serve &
./ollama run mistral
```
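
To verify that the server is actually up before pulling a model, a hedged sanity check against the Ollama HTTP API (assuming the default port 11434):

```
# list locally available models; any JSON response means the server is reachable
curl http://127.0.0.1:11434/api/tags
```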

@wviana commented on GitHub (Jan 16, 2024):

Hi there. I think the `dynamic_shim.c` file was moved; this diff applies to the new file.

This is the diff I needed so I could run `go build .`:

```diff
diff --git a/llm/dyn_ext_server.c b/llm/dyn_ext_server.c
index 111e4ab..10487e1 100644
--- a/llm/dyn_ext_server.c
+++ b/llm/dyn_ext_server.c
@@ -5,7 +5,11 @@
 
 #ifdef __linux__
 #include <dlfcn.h>
+#ifdef __TERMUX__
+#define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_LAZY)
+#else
 #define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_DEEPBIND)
+#endif
 #define LOAD_SYMBOL(handle, sym) dlsym(handle, sym)
 #define LOAD_ERR() strdup(dlerror())
 #define UNLOAD_LIBRARY(handle) dlclose(handle)
```

But I'm getting an error when running a model. Here is the error I got trying orca-mini:

```
2024/01/16 22:46:59 [Recovery] 2024/01/16 - 22:46:59 panic recovered:
POST /api/chat HTTP/1.1
Host: 127.0.0.1:11434
Accept: application/x-ndjson
Accept-Encoding: gzip
Content-Length: 62
Content-Type: application/json
User-Agent: ollama/0.0.0 (arm64 android) Go/go1.21.6


runtime error: invalid memory address or nil pointer dereference
/data/data/com.termux/files/usr/lib/go/src/runtime/panic.go:261 (0x64957fc29b)
        panicmem: panic(memoryError)
/data/data/com.termux/files/usr/lib/go/src/runtime/signal_unix.go:861 (0x64957fc268)
        sigpanic: panicmem()
/data/data/com.termux/files/home/ollama/gpu/gpu.go:122 (0x6495b06718)
        GetGPUInfo: if gpuHandles.cuda != nil {
/data/data/com.termux/files/home/ollama/gpu/gpu.go:190 (0x6495b0762f)
        CheckVRAM: gpuInfo := GetGPUInfo()
/data/data/com.termux/files/home/ollama/llm/llm.go:47 (0x6495b0b027)
        New: vram, _ := gpu.CheckVRAM()
/data/data/com.termux/files/home/ollama/server/routes.go:84 (0x6495cb8daf)
        load: llmRunner, err := llm.New(workDir, model.ModelPath, model.AdapterPaths, model.ProjectorPaths, opts)
/data/data/com.termux/files/home/ollama/server/routes.go:1061 (0x6495cc2853)
        ChatHandler: if err := load(c, model, opts, sessionDuration); err != nil {
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495cc15e7)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/ollama/server/routes.go:880 (0x6495cc15cc)
        (*Server).GenerateRoutes.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495ca00df)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0x6495ca00c4)
        CustomRecoveryWithWriter.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495c9f47f)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 (0x6495c9f448)
        LoggerWithConfig.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495c9e5b3)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0x6495c9e2dc)
        (*Engine).handleHTTPRequest: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0x6495c9deff)
        (*Engine).ServeHTTP: engine.handleHTTPRequest(c)
/data/data/com.termux/files/usr/lib/go/src/net/http/server.go:2938 (0x6495a3aaeb)
        serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
/data/data/com.termux/files/usr/lib/go/src/net/http/server.go:2009 (0x6495a36ee7)
        (*conn).serve: serverHandler{c.server}.ServeHTTP(w, w.req)
/data/data/com.termux/files/usr/lib/go/src/runtime/asm_arm64.s:1197 (0x6495819183)
        goexit: MOVD    R0, R0  // NOP
```

I will try to update to the latest version and test again.

Couldn't we submit a pull request with this change to the ollama project?

@ghost commented on GitHub (Jan 16, 2024):

Actually, I've already created a pull request. Don't use the aforementioned patch, give #1999 a try.

@wviana commented on GitHub (Jan 16, 2024):

@lainedfles thanks. I moved that line up and it's working fine. I was able to run mixtral on my phone, but it's so slow.

@ghost commented on GitHub (Jan 18, 2024):

How can I create a patch file from https://github.com/jmorganca/ollama/pull/1999/files?
Sorry, I am not an expert.

@ghost commented on GitHub (Jan 18, 2024):

@codrutpopescu GitHub offers a very nice feature: you can append _.patch_ to the end of pull request URLs. Try: `wget https://github.com/jmorganca/ollama/pull/1999.patch`
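
As a hedged usage note, from the repository root either `git apply` or `patch` should handle a PR patch fetched this way:

```
# fetch the pull request as a patch and apply it from the repo root
wget https://github.com/jmorganca/ollama/pull/1999.patch
git apply 1999.patch        # or: patch -p1 < 1999.patch
```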

@ghost commented on GitHub (Jan 18, 2024):

Amazing! Thanks

@inguna87 commented on GitHub (Jan 18, 2024):

@lainedfles Hi, do you happen to know what could cause this error? I have tried running orca-mini and vicuna, same error. Was `libext_server.so` built incorrectly? What I did was:

```
pkg install golang
git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
wget https://github.com/jmorganca/ollama/pull/1999.patch
patch -p1 < 1999.patch
go generate ./...
go build .
./ollama serve &
./ollama run orca-mini
```

![Screenshot_20240118-224954](https://github.com/jmorganca/ollama/assets/103364968/86e8edc9-2b26-4478-858c-66c6d3d661ad)

Thank you

@ghost commented on GitHub (Jan 19, 2024):

@inguna87 That looks like some kind of linker problem. There have recently been significant merges, including #1999. Maybe you cloned while the repo was still being updated. I'd recommend a fresh clone and validating that Termux is up-to-date (`pkg upgrade`).

Since #1999 has been merged, patching is no longer required. I've just tested using the main branch; it builds and runs successfully for me. Here is my process:

  1. Clone without `--depth 1` so that updates (and tag checkout) are easier (`git pull`): `git clone https://github.com/jmorganca/ollama && cd ollama`
  2. I like to observe how long operations take, so I use `time` with the generate command: `time go generate ./...`
  3. Build: `time go build .`
  4. Screen (or `tmux`) makes it easy to background and re-attach (see the sketch below): `screen -S ollama ~/ollama/ollama serve`
  5. Test (note that I've added my ollama directory to the shell `PATH` variable): `ollama run orca-mini`

Good luck!
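
For step 4, a hedged sketch of that screen workflow, assuming `screen` is installed (`pkg install screen`):

```
# start the server in a detached, named screen session
screen -dmS ollama ~/ollama/ollama serve

# re-attach to watch the logs; detach again with Ctrl-a d
screen -r ollama

# list running sessions
screen -ls
```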

@ghost commented on GitHub (Jan 19, 2024):

Questions to @lainedfles since it seems you are the Lead Engineer for Termux and we are grateful for that.

  1. When compiling we get these warnings:
    `warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]`
    These can be safely ignored, right? I've always wondered, when blindly compiling code, what the effect of these warnings is.
  2. There are powerful CPUs nowadays; for example, I am running this on a Tab S9 Ultra, which has a Snapdragon 8 Gen 2 CPU and 16 GB of memory. These CPUs seem to have some built-in AI features, and Gen 3 has even more. Is there any chance that someday we will have some hardware acceleration?
    Thank you for your support and everything!

@inguna87 commented on GitHub (Jan 19, 2024):

> @inguna87 That looks like some kind of linker problem. There have recently been significant merges, including #1999. Maybe you cloned while the repo was still being updated. I'd recommend a fresh clone and validating that Termux is up-to-date (`pkg upgrade`).
>
> Since #1999 has been merged, patching is no longer required. I've just tested using the main branch; it builds and runs successfully for me. Here is my process:
>
> 1. Clone without `--depth 1` so that updates (and tag checkout) are easier (`git pull`): `git clone https://github.com/jmorganca/ollama && cd ollama`
> 2. I like to observe how long operations take, so I use `time` with the generate command: `time go generate ./...`
> 3. Build: `time go build .`
> 4. Screen (or `tmux`) makes it easy to background and re-attach: `screen -S ollama ~/ollama/ollama serve`
> 5. Test (note that I've added my ollama directory to the shell `PATH` variable): `ollama run orca-mini`
>
> Good luck!

Thank you! It worked.
I cloned without `--depth 1` this time and, because your patch was merged, it succeeded.
I appreciate your help.

@ghost commented on GitHub (Jan 19, 2024):

> Questions to @lainedfles since it seems you are the Lead Engineer for Termux and we are grateful for that.
>
> 1. When compiling we get these warnings:
>    `warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]`
>    These can be safely ignored, right? I've always wondered, when blindly compiling code, what the effect of these warnings is.

See my comment in the [pull request](https://github.com/jmorganca/ollama/pull/1999#discussion_r1457974722):

>> CPU-only inference works on my Pixel 7 Pro (aarch64) using up-to-date Termux (F-Droid) running on top of GrapheneOS (based on AOSP Android 14). I've not attempted to build with the NDK directly, but Termux doesn't provide a native GCC compiler (nor do modern NDKs); it uses Clang with GCC compatibility mode. This produces warnings like _implicit conversion increases floating-point precision_, which I suspect affects the newer model quantization formats.
>>
>> So far, I've had decent success (albeit slow) with the legacy q4 and q5 formats, but K_S & K_M not so much. It would be nice if working Vulkan and/or TPU support could eventually be added. Otherwise, without `RTLD_DEEPBIND`, the build succeeds and the dynamic CPU module loads successfully.

> 2. There are powerful CPUs nowadays; for example, I am running this on a Tab S9 Ultra, which has a Snapdragon 8 Gen 2 CPU and 16 GB of memory. These CPUs seem to have some built-in AI features, and Gen 3 has even more. Is there any chance that someday we will have some hardware acceleration?
>    Thank you for your support and everything!

I'd suggest that there's a good chance we'll eventually see acceleration for mobile NPUs & TPUs. However, these are often very limited on mobile devices in core count and memory, and are intended for less demanding operations like "AI image filtering" for cameras, not for LLM inference. My bet is that Vulkan support is the most realistic acceleration path.

I should set expectations appropriately: I just enjoy tinkering and find contributing to open-source software fulfilling. I'm not an expert; the true engineers built & maintain this project. That being said, if possible I will make an attempt to help in the future.

And now I'll share a bit more about my setup. I'm quite happy with the current state of [chatbot-ollama](https://github.com/ivanfioravanti/chatbot-ollama) as it functions under Termux. I fire it and Ollama up in screen sessions and use my browser (Firefox or Vanadium) to interact with the Ollama API. Fun!
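
For anyone curious about that setup, a rough, hedged sketch of what it might look like under Termux, assuming Node.js is available via `pkg install nodejs` and that chatbot-ollama follows the standard npm workflow documented in its README (check that README for the exact commands and port):

```
# Ollama API server in one detached screen session
screen -dmS ollama ~/ollama/ollama serve

# chatbot-ollama web UI in another (commands assumed from that project's README)
git clone https://github.com/ivanfioravanti/chatbot-ollama
cd chatbot-ollama
npm ci
screen -dmS chatbot npm run dev

# then open the UI in the browser (Next.js dev port assumed): http://localhost:3000
```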

Reference: github-starred/ollama#62370