[GH-ISSUE #6369] Ubuntu22.04 - Warning: could not connect to a running Ollama instance #29760

Closed
opened 2026-04-22 08:57:39 -05:00 by GiteaMirror · 4 comments

Originally created by @ACodingfreak on GitHub (Aug 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6369

What is the issue?

I am not able to access the Ollama instance, as shown in the logs below, even though I was able to access it before.

$ ollama --version 
Warning: could not connect to a running Ollama instance
Warning: client version is 0.3.4

$  systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/ollama.service.d
             └─override.conf
     Active: active (running) since Wed 2024-08-14 19:56:09 PDT; 35min ago
   Main PID: 772316 (ollama)
      Tasks: 17 (limit: 76755)
     Memory: 1.1G
        CPU: 6.959s
     CGroup: /system.slice/ollama.service
             └─772316 /usr/local/bin/ollama serve

$ cat /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
Environment="OLLAMA_HOST=10.10.26.188"
Environment="OLLAMA_MODELS=/opt/ollama/models/"
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/cuda-12.3/bin:/usr/local/cuda-12.3/bin:/usr/local/cuda-12.3/bin:/home/codingfreak/anaconda3/envs/llama3_test/bin:/home/codingfreak/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"

[Install]
WantedBy=default.target
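
Note that the systemctl status output above shows a drop-in at /etc/systemd/system/ollama.service.d/override.conf whose contents are not pasted here. To see the unit with all drop-ins merged, the standard check is:

$ systemctl cat ollama    # prints ollama.service plus any override.conf drop-ins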

Then I upgraded it as shown below:


$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
$ sudo systemctl daemon-reload
$ sudo systemctl enable ollama
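
Side note: the install script already restarts the service (">>> Enabling and starting ollama service..."), so the daemon-reload/enable pair above is redundant after a reinstall. After hand-editing the unit file, the sequence that actually applies changes would be:

$ sudo systemctl daemon-reload
$ sudo systemctl restart ollama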

$ ollama --version 
Warning: could not connect to a running Ollama instance
Warning: client version is 0.3.6

$ sudo systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/ollama.service.d
             └─override.conf
     Active: active (running) since Wed 2024-08-14 20:39:16 PDT; 6min ago
   Main PID: 785243 (ollama)
      Tasks: 16 (limit: 76755)
     Memory: 1.1G
        CPU: 6.766s
     CGroup: /system.slice/ollama.service
             └─785243 /usr/local/bin/ollama serve

Aug 14 20:39:16 gpu01 systemd[1]: Started Ollama Service.
Aug 14 20:39:16 gpu01 ollama[785243]: 2024/08/14 20:39:16 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEV>
Aug 14 20:39:16 gpu01 ollama[785243]: time=2024-08-14T20:39:16.849-07:00 level=INFO source=images.go:782 msg="total blobs: 0"
Aug 14 20:39:16 gpu01 ollama[785243]: time=2024-08-14T20:39:16.850-07:00 level=INFO source=images.go:790 msg="total unused blobs re>
Aug 14 20:39:16 gpu01 ollama[785243]: time=2024-08-14T20:39:16.850-07:00 level=INFO source=routes.go:1172 msg="Listening on 10.10.2>
Aug 14 20:39:16 gpu01 ollama[785243]: time=2024-08-14T20:39:16.850-07:00 level=INFO source=payload.go:30 msg="extracting embedded f>
Aug 14 20:39:19 gpu01 ollama[785243]: time=2024-08-14T20:39:19.216-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries>
Aug 14 20:39:19 gpu01 ollama[785243]: time=2024-08-14T20:39:19.216-07:00 level=INFO source=gpu.go:204 msg="looking for compatible G>
Aug 14 20:39:19 gpu01 ollama[785243]: time=2024-08-14T20:39:19.298-07:00 level=INFO source=types.go:105 msg="inference compute" id=>

Logs:

Aug 14 21:02:33 gpu01 systemd[1]: ollama.service: Consumed 6.913s CPU time.
Aug 14 21:02:33 gpu01 systemd[1]: Started Ollama Service.
Aug 14 21:02:33 gpu01 ollama[808132]: 2024/08/14 21:02:33 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://10.10.26.188:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/opt/ollama/models/ OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.695-07:00 level=INFO source=images.go:782 msg="total blobs: 0"
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.696-07:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.696-07:00 level=INFO source=routes.go:1172 msg="Listening on 10.10.26.188:11434 (version 0.3.6)"
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama120779224/runners
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/deps.txt.gz
Aug 14 21:02:33 gpu01 ollama[808132]: time=2024-08-14T21:02:33.697-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=rocm_v60102 file=build/linux/x86_64/rocm_v60102/bin/ollama_llama_server.gz
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama120779224/runners/cpu/ollama_llama_server
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama120779224/runners/cpu_avx/ollama_llama_server
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama120779224/runners/cpu_avx2/ollama_llama_server
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama120779224/runners/cuda_v11/ollama_llama_server
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama120779224/runners/rocm_v60102/ollama_llama_server
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60102 cpu cpu_avx cpu_avx2]"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.044-07:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.045-07:00 level=DEBUG source=gpu.go:90 msg="searching for GPU discovery libraries for NVIDIA"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.045-07:00 level=DEBUG source=gpu.go:472 msg="Searching for GPU library" name=libcuda.so*
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.045-07:00 level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.046-07:00 level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths="[/usr/lib/i386-linux-gnu/libcuda.so.535.183.01 /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01]"
Aug 14 21:02:36 gpu01 ollama[808132]: library /usr/lib/i386-linux-gnu/libcuda.so.535.183.01 load err: /usr/lib/i386-linux-gnu/libcuda.so.535.183.01: wrong ELF class: ELFCLASS32
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.047-07:00 level=DEBUG source=gpu.go:566 msg="skipping 32bit library" library=/usr/lib/i386-linux-gnu/libcuda.so.535.183.01
Aug 14 21:02:36 gpu01 ollama[808132]: CUDA driver version: 12.2
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.061-07:00 level=DEBUG source=gpu.go:123 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
Aug 14 21:02:36 gpu01 ollama[808132]: [GPU-838e39e8-08a3-37c0-85d9-66182a7927ba] CUDA totalMem 12044 mb
Aug 14 21:02:36 gpu01 ollama[808132]: [GPU-838e39e8-08a3-37c0-85d9-66182a7927ba] CUDA freeMem 11919 mb
Aug 14 21:02:36 gpu01 ollama[808132]: [GPU-838e39e8-08a3-37c0-85d9-66182a7927ba] Compute Capability 8.6
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.129-07:00 level=DEBUG source=amd_linux.go:371 msg="amdgpu driver not detected /sys/module/amdgpu"
Aug 14 21:02:36 gpu01 ollama[808132]: releasing cuda driver library
Aug 14 21:02:36 gpu01 ollama[808132]: time=2024-08-14T21:02:36.129-07:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-838e39e8-08a3-37c0-85d9-66182a7927ba library=cuda compute=8.6 driver=12.2 name="NVIDIA GeForce RTX 3060" total="11.8 GiB" available="11.6 GiB"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.4
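
A quick way to confirm where the server is listening versus where the client is connecting (standard tools, assuming the default port 11434):

$ sudo ss -tlnp | grep 11434        # address the server is actually bound to
$ curl http://127.0.0.1:11434/      # the client's default target; fails if the server is bound elsewhere
$ curl http://10.10.26.188:11434/   # the OLLAMA_HOST address; should print "Ollama is running"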

GiteaMirror added the bug label 2026-04-22 08:57:39 -05:00

@ACodingfreak commented on GitHub (Aug 15, 2024):

But if I run the serve command as shown below, it works properly.
One major change is that I can now see the OLLAMA_MODELS path.

$ sudo su - ollama -s /bin/bash -c '/usr/local/bin/ollama serve'
[sudo] password for codingfreak: 
2024/08/14 21:29:57 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-14T21:29:57.625-07:00 level=INFO source=images.go:782 msg="total blobs: 5"
time=2024-08-14T21:29:57.643-07:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-14T21:29:57.644-07:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-14T21:29:57.644-07:00 level=WARN source=assets.go:89 msg="process still running, skipping" pid=2236 path=/tmp/ollama719788881/ollama.pid
time=2024-08-14T21:29:57.644-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1752088433/runners
time=2024-08-14T21:29:59.957-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11]"
time=2024-08-14T21:29:59.957-07:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-14T21:30:00.029-07:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-838e39e8-08a3-37c0-85d9-66182a7927ba library=cuda compute=8.6 driver=12.2 name="NVIDIA GeForce RTX 3060" total="11.8 GiB" available="11.6 GiB"

$ ollama --version 
ollama version is 0.3.6
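
This manual run does not inherit the Environment= lines from the unit file, so the server binds to the default 127.0.0.1:11434 (which the client can reach) and uses the default models directory, which is why OLLAMA_MODELS differs here. To reproduce the service's environment in a manual run, a sketch using the values from the unit file above:

$ sudo -u ollama OLLAMA_HOST=10.10.26.188 OLLAMA_MODELS=/opt/ollama/models/ /usr/local/bin/ollama serve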

@rick-github commented on GitHub (Aug 15, 2024):

Environment="OLLAMA_HOST=10.10.26.188"

The server is configured to bind to a specific IP address, while the client defaults to localhost. I'm assuming this has been set to allow connections from other devices. Either set OLLAMA_HOST=0.0.0.0 in ollama.service, or add export OLLAMA_HOST=10.10.26.188 to your .bashrc.
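
A sketch of both options (paths per the standard Linux install; the drop-in takes effect after a restart):

# Server side: bind to all interfaces via a systemd drop-in
$ sudo systemctl edit ollama
# add in the editor that opens:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
$ sudo systemctl daemon-reload && sudo systemctl restart ollama

# Client side: point the CLI at the server's address instead
$ echo 'export OLLAMA_HOST=10.10.26.188' >> ~/.bashrc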


@ACodingfreak commented on GitHub (Aug 15, 2024):

Well, this is the IP address of the server, i.e. 10.10.26.188.

I have modified the IP address to 0.0.0.0 and restarted ollama.service:

Aug 15 06:14:21 gpu01 systemd[1]: ollama.service: Consumed 8.129s CPU time.
Aug 15 06:14:21 gpu01 systemd[1]: Started Ollama Service.
Aug 15 06:14:21 gpu01 ollama[60589]: 2024/08/15 06:14:21 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/opt/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"

And it works fine now.
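
With the server bound to 0.0.0.0 it is reachable on every interface, so both the local client default (127.0.0.1) and remote clients work; a quick check:

$ curl http://127.0.0.1:11434/
Ollama is running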


@ouening commented on GitHub (Nov 17, 2024):

I got the same error after I upgraded the NVIDIA CUDA toolkit and driver. My Ollama version is 0.4.2. In ollama.service I have set:

Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"

ollama list does not work, but I can still use the Ollama service remotely.
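
One way to isolate whether the client or the server side is at fault is to force the client to a specific address (substitute your own host/port):

$ OLLAMA_HOST=127.0.0.1:11434 ollama list   # bypasses any stale OLLAMA_HOST in the environment
$ env | grep OLLAMA                         # check what the shell is actually exporting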
