[GH-ISSUE #11323] Enable GPU support with non-sudo local installation #7472

Closed
opened 2026-04-12 19:32:40 -05:00 by GiteaMirror · 7 comments

Originally created by @vitaglianog on GitHub (Jul 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11323

Hello,
I have installed Ollama locally on a Slurm node using the method detailed in the [docs](https://github.com/ollama/ollama/blob/main/docs/linux.md#uninstall).
Since I do not have sudo access, instead of extracting the tarball into the `/usr` folder, I extracted it into a local folder with:

```
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
tar -C .local/ -xzf ollama-linux-amd64.tgz
```

Although the GPUs are detected by the Ollama server, the runner appears unable to load the CUDA backend (see logs).
I tried adding the `.local/bin` and `.local/lib` folders to the search paths with the following commands, but that does not seem to help.

```
export PATH="$PATH:~/.local/bin"
export LD_LIBRARY_PATH=:~/.local/lib:~/.local/lib/ollama:$LD_LIBRARY_PATH
```
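
As a side note, those two lines have shell pitfalls: `~` does not expand inside double quotes, and the stray leading `:` in `LD_LIBRARY_PATH` makes the dynamic linker treat the empty entry as the current directory. A corrected sketch, assuming the tarball was extracted under `$HOME/.local`, would be:

```shell
# Use $HOME instead of ~ so the paths expand inside quotes
export PATH="$PATH:$HOME/.local/bin"
# Append the previous value only if it was set, avoiding empty path entries
export LD_LIBRARY_PATH="$HOME/.local/lib:$HOME/.local/lib/ollama${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```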

[ollama.log](https://github.com/user-attachments/files/21110000/ollama.log)

The attached log was obtained by running `ollama serve` and `ollama run llama2` on a machine with two 40 GB GPUs, so VRAM should not be a limiting factor.

Any insights on how to enable GPU inference?

Relevant log output

```shell
time=2025-07-07T16:46:35.854-04:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]

time=2025-07-07T16:46:35.940-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)

load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CPU, is_swa = 0
load_tensors: layer   1 assigned to device CPU, is_swa = 0
load_tensors: layer   2 assigned to device CPU, is_swa = 0
load_tensors: layer   3 assigned to device CPU, is_swa = 0
load_tensors: layer   4 assigned to device CPU, is_swa = 0
load_tensors: layer   5 assigned to device CPU, is_swa = 0
load_tensors: layer   6 assigned to device CPU, is_swa = 0
load_tensors: layer   7 assigned to device CPU, is_swa = 0
load_tensors: layer   8 assigned to device CPU, is_swa = 0
load_tensors: layer   9 assigned to device CPU, is_swa = 0
load_tensors: layer  10 assigned to device CPU, is_swa = 0
load_tensors: layer  11 assigned to device CPU, is_swa = 0
load_tensors: layer  12 assigned to device CPU, is_swa = 0
load_tensors: layer  13 assigned to device CPU, is_swa = 0
load_tensors: layer  14 assigned to device CPU, is_swa = 0
load_tensors: layer  15 assigned to device CPU, is_swa = 0
load_tensors: layer  16 assigned to device CPU, is_swa = 0
load_tensors: layer  17 assigned to device CPU, is_swa = 0
load_tensors: layer  18 assigned to device CPU, is_swa = 0
load_tensors: layer  19 assigned to device CPU, is_swa = 0
load_tensors: layer  20 assigned to device CPU, is_swa = 0
load_tensors: layer  21 assigned to device CPU, is_swa = 0
load_tensors: layer  22 assigned to device CPU, is_swa = 0
load_tensors: layer  23 assigned to device CPU, is_swa = 0
load_tensors: layer  24 assigned to device CPU, is_swa = 0
load_tensors: layer  25 assigned to device CPU, is_swa = 0
load_tensors: layer  26 assigned to device CPU, is_swa = 0
load_tensors: layer  27 assigned to device CPU, is_swa = 0
load_tensors: layer  28 assigned to device CPU, is_swa = 0
load_tensors: layer  29 assigned to device CPU, is_swa = 0
load_tensors: layer  30 assigned to device CPU, is_swa = 0
load_tensors: layer  31 assigned to device CPU, is_swa = 0
load_tensors: layer  32 assigned to device CPU, is_swa = 0
```

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.9.5

GiteaMirror added the bug label 2026-04-12 19:32:40 -05:00

@rick-github commented on GitHub (Jul 7, 2025):

```
time=2025-07-07T16:46:35.915-04:00 level=DEBUG source=ggml.go:94 msg="ggml backend load all from path" path=/nfs/home2/user/.local/lib/ollama
load_backend: loaded CPU backend from /nfs/home2/user/.local/lib/ollama/libggml-cpu-icelake.so
time=2025-07-07T16:46:35.940-04:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
```

The runner found the CPU backend but not any GPU backends. What's the output of

```
ls -l /nfs/home2/user/.local/lib/ollama
```

@vitaglianog commented on GitHub (Jul 7, 2025):

Thank you for the quick reply! The output is:

```console
$ ls -l /nfs/home2/user/.local/lib/ollama/
total 2700456
lrwxrwxrwx 1 user user         21 Jul  2 13:32 libcublas.so.12 -> libcublas.so.12.8.4.1
-rwxr-xr-x 1 user user  116388640 Jul  7  2015 libcublas.so.12.8.4.1
lrwxrwxrwx 1 user user         23 Jul  2 13:32 libcublasLt.so.12 -> libcublasLt.so.12.8.4.1
-rwxr-xr-x 1 user user  751771728 Jul  7  2015 libcublasLt.so.12.8.4.1
lrwxrwxrwx 1 user user         20 Jul  2 13:32 libcudart.so.12 -> libcudart.so.12.8.90
-rwxr-xr-x 1 user user     728800 Jul  7  2015 libcudart.so.12.8.90
-rwxr-xr-x 1 user user     595648 Jul  2 13:23 libggml-base.so
-rwxr-xr-x 1 user user     619280 Jul  2 13:23 libggml-cpu-alderlake.so
-rwxr-xr-x 1 user user     619280 Jul  2 13:23 libggml-cpu-haswell.so
-rwxr-xr-x 1 user user     725776 Jul  2 13:23 libggml-cpu-icelake.so
-rwxr-xr-x 1 user user     606992 Jul  2 13:23 libggml-cpu-sandybridge.so
-rwxr-xr-x 1 user user     729872 Jul  2 13:23 libggml-cpu-skylakex.so
-rwxr-xr-x 1 user user     480048 Jul  2 13:23 libggml-cpu-sse42.so
-rwxr-xr-x 1 user user     475952 Jul  2 13:23 libggml-cpu-x64.so
-rwxr-xr-x 1 user user 1286539248 Jul  2 13:32 libggml-cuda.so
-rwxr-xr-x 1 user user  604949568 Jul  2 13:33 libggml-hip.so
```
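
The listing shows `libggml-cuda.so` is present, so the backend library itself was installed. For anyone debugging a similar setup, one quick sanity check, sketched here with the path from the listing above, is whether the library can resolve all of its dynamic dependencies:

```shell
# Print any shared-library dependencies the loader cannot find
ldd /nfs/home2/user/.local/lib/ollama/libggml-cuda.so | grep "not found"
```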

@rick-github commented on GitHub (Jul 7, 2025):

Unset `ROCR_VISIBLE_DEVICES`.


@vitaglianog commented on GitHub (Jul 7, 2025):

If I run the server with `OLLAMA_DEBUG='1' ROCR_VISIBLE_DEVICES='' ollama serve`,
the runner still does not use the CUDA backend. However, the log confirms that `ROCR_VISIBLE_DEVICES` is empty:

```
time=2025-07-07T17:36:45.640-04:00 level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0,1 GPU_DEVICE_ORDINAL:0,1 HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/nobackup1/user/ollama-models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:true ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"

[...]
time=2025-07-07T17:37:12.790-04:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]
```

[ollama.log](https://github.com/user-attachments/files/21110496/ollama.log)


@rick-github commented on GitHub (Jul 7, 2025):

Setting it to the empty string is not enough; it needs to be unset.

```console
$ unset ROCR_VISIBLE_DEVICES
$ OLLAMA_DEBUG='1' ollama serve
```
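
To tell whether a variable is truly unset rather than set to the empty string, a small shell check like this sketch can help:

```shell
# ${VAR+x} expands to x whenever VAR is set, even if it is empty
if [ -z "${ROCR_VISIBLE_DEVICES+x}" ]; then
    echo "ROCR_VISIBLE_DEVICES is unset"
else
    echo "ROCR_VISIBLE_DEVICES is set to '${ROCR_VISIBLE_DEVICES}'"
fi
```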

@vitaglianog commented on GitHub (Jul 7, 2025):

Thank you very much! By doing this, the runner actually picks up the CUDA devices. I assume this is an artifact of the Slurm cluster having other nodes with ROCm devices.
It would probably be helpful for the runner to prioritize `CUDA_VISIBLE_DEVICES` when it is set.
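
For Slurm environments like this, where the batch system may export ROCm variables cluster-wide, one workaround is to unset the variable before launching the server. A minimal sketch of a hypothetical batch script (the resource directive is illustrative and cluster-specific):

```shell
#!/bin/bash
#SBATCH --gres=gpu:2   # illustrative; adjust to your cluster's GPU request syntax
# Clear the inherited AMD device filter so the CUDA backend is not skipped
unset ROCR_VISIBLE_DEVICES
OLLAMA_DEBUG='1' "$HOME/.local/bin/ollama" serve
```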


@Noonlord commented on GitHub (Jul 23, 2025):

Thanks a lot! Unsetting `ROCR_VISIBLE_DEVICES` also fixed the issue for us on an HPC cluster, even though we didn't have any AMD GPUs in the cluster at all!
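
A quick way to spot this kind of inherited environment on a cluster node is to scan for device-visibility variables before starting the server, e.g.:

```shell
# List any GPU device-visibility variables exported into this shell
env | grep -iE 'VISIBLE_DEVICES|GPU_DEVICE_ORDINAL'
```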


Reference: github-starred/ollama#7472