[GH-ISSUE #9145] Manual install, and Adding Ollama as a startup service (recommended) but Error: could not connect to ollama app, is it running? #5950

Closed
opened 2026-04-12 17:17:44 -05:00 by GiteaMirror · 3 comments

Originally created by @ye7love7 on GitHub (Feb 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9145

GPU: a single RTX 3090 (24 GB); RAM: 250 GB.
I installed following https://github.com/ollama/ollama/blob/main/docs/linux.md and added a startup service:

sudo nano /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
# Number of GPU layers to load
# Environment="OLLAMA_GPU_LAYERS=10"
# Set the keep-alive time to 10 minutes
Environment="OLLAMA_KEEP_ALIVE=10m"
# Set the host and port
Environment="OLLAMA_HOST=0.0.0.0:8080"
# Set the model path
Environment="OLLAMA_MODELS=/home/tskj/MOD/ollama_models"
Environment="PATH=$PATH"
Environment="OLLAMA_DEBUG=1"
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
[Install]
WantedBy=default.target
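
Note that systemd only picks up edits to the unit file after `sudo systemctl daemon-reload` followed by a restart. A quick way to confirm which address the service is configured to bind is to read the Environment line back out of the unit file; the sed pattern below is an illustrative sketch, not part of the original post:

```shell
# Print the OLLAMA_HOST value configured in the unit file.
# The path matches the post; the parsing is an illustrative sketch.
unit=/etc/systemd/system/ollama.service
sed -n 's/.*OLLAMA_HOST=\([^"]*\)".*/\1/p' "$unit"
```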

The output of sudo systemctl status ollama is:

● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2025-02-15 23:56:25 CST; 9h ago
   Main PID: 1403868 (ollama)
      Tasks: 22 (limit: 309002)
     Memory: 17.5M
        CPU: 2.754s
     CGroup: /system.slice/ollama.service
             └─1403868 /usr/bin/ollama serve

Feb 15 23:56:26 tskj ollama[1403868]: CUDA driver version: 12.4
Feb 15 23:56:26 tskj ollama[1403868]: calling cuDeviceGetCount
Feb 15 23:56:26 tskj ollama[1403868]: device count 1
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.090+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux->
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] CUDA totalMem 24252 mb
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] CUDA freeMem 21542 mb
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] Compute Capability 8.6
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.306+08:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 15 23:56:26 tskj ollama[1403868]: releasing cuda driver library
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.306+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-866b9226-469d-27c8-6ae4-99>

It looks like the GPU was detected.

(base) tskj@tskj:~$ journalctl -u ollama -e
Feb 15 23:56:26 tskj ollama[1403868]: 2025/02/15 23:56:26 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVIC>
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.032+08:00 level=INFO source=images.go:432 msg="total blobs: 0"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.032+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.032+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:8080 (version 0.5.11)"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.032+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.032+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.059+08:00 level=DEBUG source=gpu.go:98 msg="searching for GPU discovery libraries for NVIDIA"
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.059+08:00 level=DEBUG source=gpu.go:501 msg="Searching for GPU library" name=libcuda.so*
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.059+08:00 level=DEBUG source=gpu.go:525 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.s>
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.064+08:00 level=DEBUG source=gpu.go:558 msg="discovered GPU libraries" paths="[/usr/lib/x86_64-lin>
Feb 15 23:56:26 tskj ollama[1403868]: initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.120
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuInit - 0x7f95e96aebc0
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDriverGetVersion - 0x7f95e96aebe0
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDeviceGetCount - 0x7f95e96aec20
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDeviceGet - 0x7f95e96aec00
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDeviceGetAttribute - 0x7f95e96aed00
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDeviceGetUuid - 0x7f95e96aec60
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuDeviceGetName - 0x7f95e96aec40
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuCtxCreate_v3 - 0x7f95e96aeee0
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuMemGetInfo_v2 - 0x7f95e96b8e20
Feb 15 23:56:26 tskj ollama[1403868]: dlsym: cuCtxDestroy - 0x7f95e9713850
Feb 15 23:56:26 tskj ollama[1403868]: calling cuInit
Feb 15 23:56:26 tskj ollama[1403868]: calling cuDriverGetVersion
Feb 15 23:56:26 tskj ollama[1403868]: raw version 0x2f08
Feb 15 23:56:26 tskj ollama[1403868]: CUDA driver version: 12.4
Feb 15 23:56:26 tskj ollama[1403868]: calling cuDeviceGetCount
Feb 15 23:56:26 tskj ollama[1403868]: device count 1
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.090+08:00 level=DEBUG source=gpu.go:125 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux->
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] CUDA totalMem 24252 mb
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] CUDA freeMem 21542 mb
Feb 15 23:56:26 tskj ollama[1403868]: [GPU-866b9226-469d-27c8-6ae4-99b9d17d5cf6] Compute Capability 8.6
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.306+08:00 level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
Feb 15 23:56:26 tskj ollama[1403868]: releasing cuda driver library
Feb 15 23:56:26 tskj ollama[1403868]: time=2025-02-15T23:56:26.306+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-866b9226-469d-27c8-6ae4-99>
lines 967-1000/1000 (END)

It also seems to be listening on port 8080, so everything appears to have started normally, but ollama list returns: Error: could not connect to ollama app, is it running?
I have already tried uninstalling and reinstalling, but the problem remains. Any help would be appreciated!


@ye7love7 commented on GitHub (Feb 16, 2025):

When I set Environment="OLLAMA_HOST=0.0.0.0" instead, it listens on 11434 and ollama list works normally. What is the reason for this?


@rick-github commented on GitHub (Feb 16, 2025):

You have to set OLLAMA_HOST=0.0.0.0:8080 in the client environment as well as the server environment.

OLLAMA_HOST=0.0.0.0:8080 ollama list
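
This also explains why listening on 11434 made things work: when OLLAMA_HOST is unset, the client falls back to the default address 127.0.0.1:11434. A minimal shell sketch of that fallback behavior (my re-creation for illustration, not Ollama's actual code):

```shell
# Hypothetical re-creation of the client's address resolution:
# use OLLAMA_HOST if set, otherwise fall back to the default port.
host="${OLLAMA_HOST:-127.0.0.1:11434}"
echo "connecting to $host"
```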

@ye7love7 commented on GitHub (Feb 19, 2025):

> You have to set OLLAMA_HOST=0.0.0.0:8080 in the client environment as well as the server environment.
>
> OLLAMA_HOST=0.0.0.0:8080 ollama list

Right, thanks a lot!
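
To avoid prefixing every command with the variable, the client-side setting can be persisted in the shell's rc file. This is a hedged sketch: it assumes bash, and the rc path and grep guard are illustrative rather than an official Ollama recommendation.

```shell
# Append the export once (the grep guard avoids duplicate lines on re-runs).
rc="$HOME/.bashrc"
grep -q 'OLLAMA_HOST' "$rc" 2>/dev/null || echo 'export OLLAMA_HOST=0.0.0.0:8080' >> "$rc"
```

Run source ~/.bashrc (or open a new shell) before trying ollama list again.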


Reference: github-starred/ollama#5950