[GH-ISSUE #12261] Model is not running on GPU, but "ollama ps" shows 100% GPU #33910

Closed
opened 2026-04-22 17:05:33 -05:00 by GiteaMirror · 9 comments

Originally created by @Xyy-tj on GitHub (Sep 12, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12261

What is the issue?

(base) ubuntu@ubuntu:~$ ollama ps
NAME                ID              SIZE      PROCESSOR    CONTEXT    UNTIL
qwen2.5vl:latest    5ced39dfa4ba    8.5 GB    100% GPU     4096       4 minutes from now

Image
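
For comparison with the ollama ps output above, actual GPU use can be watched while a prompt is being generated. A rough check (assuming the standard NVIDIA driver utilities are installed) looks like this:

# in one terminal, send a prompt to the loaded model
ollama run qwen2.5vl:latest "hello"
# in another terminal, watch VRAM and utilization once a second
watch -n 1 nvidia-smi

If the model were really loaded on the GPU, nvidia-smi would show roughly 8 GB of VRAM allocated to the ollama runner process; here the GPU stays essentially idle, which is the symptom in the title.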

Relevant log output

(base) ubuntu@ubuntu:~$ lspci | grep -i nvidia
0000:3b:00.0 VGA compatible controller: NVIDIA Corporation Device 2684 (rev a1)
0000:3b:00.1 Audio device: NVIDIA Corporation Device 22ba (rev a1)
(base) ubuntu@ubuntu:~$ ollama serve
time=2025-09-12T11:27:31.783+08:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/ubuntu/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-09-12T11:27:31.783+08:00 level=INFO source=images.go:477 msg="total blobs: 6"
time=2025-09-12T11:27:31.784+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-09-12T11:27:31.784+08:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.8)"
time=2025-09-12T11:27:31.784+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-09-12T11:27:32.042+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-77bdf413-e641-6e13-19ab-89695a97c5c3 library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4090" total="23.5 GiB" available="23.1 GiB"
[GIN] 2025/09/12 - 11:27:52 | 200 |      98.704µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/09/12 - 11:27:52 | 200 |   98.131053ms |       127.0.0.1 | POST     "/api/show"
time=2025-09-12T11:27:53.223+08:00 level=INFO source=server.go:388 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/ubuntu/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --port 40359"
time=2025-09-12T11:27:53.244+08:00 level=INFO source=runner.go:1006 msg="starting ollama engine"
time=2025-09-12T11:27:53.244+08:00 level=INFO source=runner.go:1043 msg="Server listening on 127.0.0.1:40359"
time=2025-09-12T11:27:53.371+08:00 level=INFO source=server.go:493 msg="system memory" total="62.4 GiB" free="57.8 GiB" free_swap="8.0 GiB"
time=2025-09-12T11:27:53.373+08:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=/home/ubuntu/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 library=cuda parallel=1 required="8.0 GiB" gpus=1
time=2025-09-12T11:27:53.375+08:00 level=INFO source=server.go:533 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=[29] memory.available="[23.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="8.0 GiB" memory.required.partial="8.0 GiB" memory.required.kv="224.0 MiB" memory.required.allocations="[8.0 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="261.3 MiB" memory.graph.partial="261.3 MiB" projector.weights="1.2 GiB" projector.graph="1.6 GiB"
time=2025-09-12T11:27:53.376+08:00 level=INFO source=runner.go:925 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:false KvSize:4096 KvCacheType: NumThreads:16 GPULayers:29[ID:GPU-77bdf413-e641-6e13-19ab-89695a97c5c3 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-12T11:27:53.436+08:00 level=INFO source=ggml.go:130 msg="" architecture=qwen25vl file_type=Q4_K_M name="" description="" num_tensors=858 num_key_values=36
time=2025-09-12T11:27:53.436+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)
time=2025-09-12T11:27:54.116+08:00 level=INFO source=ggml.go:486 msg="offloading 0 repeating layers to GPU"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=ggml.go:490 msg="offloading output layer to CPU"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=ggml.go:497 msg="offloaded 0/29 layers to GPU"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="5.6 GiB"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=backend.go:326 msg="kv cache" device=CPU size="224.0 MiB"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="1.6 GiB"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=backend.go:342 msg="total memory" size="7.3 GiB"
time=2025-09-12T11:27:54.116+08:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-12T11:27:54.116+08:00 level=INFO source=server.go:1236 msg="waiting for llama runner to start responding"
time=2025-09-12T11:27:54.117+08:00 level=INFO source=server.go:1270 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-12T11:27:55.121+08:00 level=INFO source=server.go:1274 msg="llama runner started in 1.90 seconds"
[GIN] 2025/09/12 - 11:27:55 | 200 |  2.461303227s |       127.0.0.1 | POST     "/api/generate"

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 17:05:33 -05:00

@Xyy-tj commented on GitHub (Sep 12, 2025):

I installed Ollama offline and did not use the install script. What should I do next to check or ensure that Ollama uses GPU acceleration?

@FieldMouse-AI commented on GitHub (Sep 12, 2025):

I installed Ollama offline and did not use the install script. What should I do next to check or ensure that Ollama uses GPU acceleration?

Hello, @Xyy-tj !

I always set the environment variable OLLAMA_LLM_DEVICE=GPU to ensure that my GPU will be used.

Also, I just noticed the exact same problem you are having in my own environment.

I had to stop and restart the Docker container and make sure that OLLAMA_LLM_DEVICE=GPU was set for the Docker container.
I just did this now, so it will take me some testing to see if things stick.

BTW: What is your environment? Mine is as follows:

  • OS: Host and Docker: Ubuntu Linux 22.04.5 LTS
  • GPU: NVIDIA RTX 3060, 12 GB VRAM
  • CPU: Intel i5-6500
  • Ollama version: 0.11.10

@rick-github commented on GitHub (Sep 12, 2025):

OLLAMA_LLM_DEVICE is not an ollama configuration variable.

The problem is that the OP doesn't have any backends:

level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)

What's the output of

find /usr/local/lib/ollama

@Xyy-tj commented on GitHub (Sep 12, 2025):

OLLAMA_LLM_DEVICE is not an ollama configuration variable.

The problem is that the OP doesn't have any backends:

level=INFO source=ggml.go:104 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(gcc)

What's the output of

find /usr/local/lib/ollama

Thanks for your reply; the output is:

(base) ubuntu@ubuntu:~$ find /usr/local/lib/ollama
/usr/local/lib/ollama

Additionally, some potentially useful information:
Image

@Xyy-tj commented on GitHub (Sep 12, 2025):

I installed Ollama offline and did not use the install script. What should I do next to check or ensure that Ollama uses GPU acceleration?

Hello, @Xyy-tj !

I always set the environment variable OLLAMA_LLM_DEVICE=GPU to ensure that my GPU will be used.

Also, I just noticed the exact same problem you are having in my own environment.

I had to stop and restart the Docker container and make sure that OLLAMA_LLM_DEVICE=GPU was set for the Docker container. I just did this now, so it will take me some testing to see if things stick.

BTW: What is your environment? Mine is as follows:

  • OS: Host and Docker: Ubuntu Linux 22.04.5 LTS
  • GPU: NVIDIA RTX 3060, 12 GB VRAM
  • CPU: Intel i5-6500
  • Ollama version: 0.11.10

Thanks! But my installation situation may be quite special. I installed directly on Ubuntu without Docker, using an offline installation package, because the machine has no network available :(

@rick-github commented on GitHub (Sep 12, 2025):

Your installation is incomplete, /usr/local/lib/ollama should contain the loadable backends:

$ find /usr/local/lib/ollama/
/usr/local/lib/ollama/
/usr/local/lib/ollama/libcublasLt.so.12
/usr/local/lib/ollama/libcudart.so.12
/usr/local/lib/ollama/libggml-hip.so
/usr/local/lib/ollama/libggml-base.so
/usr/local/lib/ollama/libggml-cuda.so
/usr/local/lib/ollama/libggml-cpu-x64.so
/usr/local/lib/ollama/libggml-cpu-sandybridge.so
/usr/local/lib/ollama/libggml-cpu-alderlake.so
/usr/local/lib/ollama/libggml-cpu-haswell.so
/usr/local/lib/ollama/libcublas.so.12
/usr/local/lib/ollama/libcublasLt.so.12.8.4.1
/usr/local/lib/ollama/libcudart.so.12.8.90
/usr/local/lib/ollama/libggml-cpu-skylakex.so
/usr/local/lib/ollama/libggml-cpu-sse42.so
/usr/local/lib/ollama/libggml-cpu-icelake.so
/usr/local/lib/ollama/libcublas.so.12.8.4.1
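
In particular, the CUDA backend corresponds to libggml-cuda.so in the listing above. A quick check that it actually landed on the machine (a minimal sketch, assuming the default /usr/local prefix) is:

# verify the CUDA backend library exists where ollama looks for it
test -f /usr/local/lib/ollama/libggml-cuda.so \
    && echo "CUDA backend present" \
    || echo "CUDA backend missing - inference will fall back to CPU"

With an empty /usr/local/lib/ollama, as shown above, only the CPU code built into the binary is available, which is why the log reports "offloaded 0/29 layers to GPU".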

@Xyy-tj commented on GitHub (Sep 12, 2025):

Your installation is incomplete, /usr/local/lib/ollama should contain the loadable backends:

$ find /usr/local/lib/ollama/
/usr/local/lib/ollama/
/usr/local/lib/ollama/libcublasLt.so.12
/usr/local/lib/ollama/libcudart.so.12
/usr/local/lib/ollama/libggml-hip.so
/usr/local/lib/ollama/libggml-base.so
/usr/local/lib/ollama/libggml-cuda.so
/usr/local/lib/ollama/libggml-cpu-x64.so
/usr/local/lib/ollama/libggml-cpu-sandybridge.so
/usr/local/lib/ollama/libggml-cpu-alderlake.so
/usr/local/lib/ollama/libggml-cpu-haswell.so
/usr/local/lib/ollama/libcublas.so.12
/usr/local/lib/ollama/libcublasLt.so.12.8.4.1
/usr/local/lib/ollama/libcudart.so.12.8.90
/usr/local/lib/ollama/libggml-cpu-skylakex.so
/usr/local/lib/ollama/libggml-cpu-sse42.so
/usr/local/lib/ollama/libggml-cpu-icelake.so
/usr/local/lib/ollama/libcublas.so.12.8.4.1

Great! Finally found the reason. So could this be caused by my offline installation? There is no network on the machine, so I manually installed it according to the docs.

So how should I reinstall Ollama completely in this situation?

I have tried reinstalling it with:

sudo rm -rf /usr/lib/ollama
sudo tar -C /usr -xzf ollama-linux-amd64.tgz

but it doesn't seem to help.

@rick-github commented on GitHub (Sep 12, 2025):

From your log:

time=2025-09-12T11:27:53.223+08:00 level=INFO source=server.go:388 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/ubuntu/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --port 40359"

the ollama binary is in /usr/local/bin. You are installing ollama in /usr:

sudo tar -C /usr -xzf ollama-linux-amd64.tgz

You need to install it in /usr/local:

sudo tar -C /usr/local -xzf ollama-linux-amd64.tgz
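
Put together, a reinstall that cleans up the stray copy and verifies the result might look like this (a sketch, assuming the standalone Linux tarball; adjust to how the server is actually started on your machine):

# remove any copy that was extracted into the wrong prefix earlier
sudo rm -rf /usr/lib/ollama
# extract the tarball into /usr/local so the backends sit next to /usr/local/bin/ollama
sudo tar -C /usr/local -xzf ollama-linux-amd64.tgz
# the loadable backends should now be listed
find /usr/local/lib/ollama
# restart the server (re-run "ollama serve", or restart the systemd service if one is configured),
# then load a model and confirm the log shows layers being offloaded to the GPU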

@Xyy-tj commented on GitHub (Sep 12, 2025):

From your log:

time=2025-09-12T11:27:53.223+08:00 level=INFO source=server.go:388 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /home/ubuntu/.ollama/models/blobs/sha256-a99b7f834d754b88f122d865f32758ba9f0994a83f8363df2c1e71c17605a025 --port 40359"

the ollama binary is in /usr/local/bin. You are installing ollama in /usr:

sudo tar -C /usr -xzf ollama-linux-amd64.tgz

You need to install it in /usr/local:

sudo tar -C /usr/local -xzf ollama-linux-amd64.tgz

AAAAmazing, it perfectly solved my problem!! You are truly the god of Ollama :)!

Thank you very much, this issue had been bothering me for a week. And thank you for your enthusiasm and contribution to the community; I hope you have a beautiful weekend and smooth work!

Reference: github-starred/ollama#33910