[GH-ISSUE #7892] After deploying ollama, it can only be accessed via 127.0.0.1, not via the machine's IP #5050

Closed
opened 2026-04-12 16:08:38 -05:00 by GiteaMirror · 15 comments

Originally created by @2277509846 on GitHub (Nov 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7892

Version: 0.4.6
OS: Ubuntu

Download and install
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.4.6 sh

Edit service file
sudo vim /etc/systemd/system/ollama.service
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment=“OLLAMA_HOST=0.0.0.0”

[Install]
WantedBy=default.target

Accessing ollama with curl http://127.0.0.1:11434 works, but it cannot be reached with curl http://[ip]:11434.
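
A quick way to confirm which address the server actually bound to is to check the listening socket. This assumes the iproute2 ss tool is installed; 192.0.2.10 below is a placeholder for the machine's real IP:

```
# Show the address ollama is listening on: 127.0.0.1:11434 means
# loopback-only; 0.0.0.0:11434 or *:11434 means all interfaces.
sudo ss -tlnp | grep 11434

# Sanity-check both addresses from the server itself; 192.0.2.10 is a
# placeholder for the machine's real LAN address.
curl http://127.0.0.1:11434/api/version
curl http://192.0.2.10:11434/api/version
```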

GiteaMirror added the feature request label 2026-04-12 16:08:38 -05:00

@rick-github commented on GitHub (Nov 30, 2024):

Did you restart the server? What do the server logs show?

@2277509846 commented on GitHub (Nov 30, 2024):

Yes, I executed the 'sudo systemctl daemon-reload' and 'sudo systemctl restart ollama' commands, and the service started normally
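
For reference, a minimal sketch of the full sequence after editing a unit file, including a check that the new environment actually reached the service:

```
sudo systemctl daemon-reload           # re-read unit files from disk
sudo systemctl restart ollama          # restart under the new definition
systemctl show ollama -p Environment   # confirm OLLAMA_HOST is really set
```

If OLLAMA_HOST does not appear in the Environment= output, systemd never accepted the assignment.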

@rick-github commented on GitHub (Nov 30, 2024):

What's in your [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues)?

@2277509846 commented on GitHub (Nov 30, 2024):

![Screenshot 2024-11-30 192406](https://github.com/user-attachments/assets/669c9bb4-23b5-4471-b4e6-267fb89bb064)

@rick-github commented on GitHub (Nov 30, 2024):

Don't use screenshots, add text logs.

@2277509846 commented on GitHub (Nov 30, 2024):

fangjunpeng@fangjunpeng-virtual-machine:~/v2ray$ sudo systemctl status ollama
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-11-30 19:28:03 CST; 3s ago
Main PID: 6248 (ollama)
Tasks: 7 (limit: 4554)
Memory: 63.3M
CPU: 376ms
CGroup: /system.slice/ollama.service
└─6248 /usr/local/bin/ollama serve

Nov 30 19:28:03 fangjunpeng-virtual-machine systemd[1]: Started Ollama Service.
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: 2024/11/30 19:28:03 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_>
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1040625498/runners
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.238+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm cpu]"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.238+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.242+08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.243+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.9 >

@rick-github commented on GitHub (Nov 30, 2024):

Please follow the instructions in the link I provided:

journalctl -u ollama --no-pager
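
Two common variants of that command, shown here as a convenience (standard journalctl flags):

```
journalctl -u ollama -b --no-pager   # current boot only
journalctl -u ollama -f              # follow new log lines live
```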

@2277509846 commented on GitHub (Nov 30, 2024):

Nov 30 17:57:48 fangjunpeng-virtual-machine systemd[1]: Started Ollama Service.
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: 2024/11/30 17:57:48 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.683+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.684+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.684+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.684+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1545343897/runners
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.861+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm]"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.862+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.867+08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
Nov 30 17:57:48 fangjunpeng-virtual-machine ollama[5645]: time=2024-11-30T17:57:48.867+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.9 GiB"
Nov 30 17:57:53 fangjunpeng-virtual-machine ollama[5645]: [GIN] 2024/11/30 - 17:57:53 | 200 | 45.799µs | 127.0.0.1 | GET "/api/version"
Nov 30 17:58:38 fangjunpeng-virtual-machine systemd[1]: /etc/systemd/system/ollama.service:12: Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0”
Nov 30 17:58:42 fangjunpeng-virtual-machine systemd[1]: Stopping Ollama Service...
Nov 30 17:58:42 fangjunpeng-virtual-machine systemd[1]: ollama.service: Deactivated successfully.
Nov 30 17:58:42 fangjunpeng-virtual-machine systemd[1]: Stopped Ollama Service.
Nov 30 17:58:42 fangjunpeng-virtual-machine systemd[1]: Started Ollama Service.
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: 2024/11/30 17:58:42 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.526+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.526+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.526+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.527+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama544527110/runners
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.687+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 rocm cpu cpu_avx cpu_avx2 cuda_v11]"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.687+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.691+08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
Nov 30 17:58:42 fangjunpeng-virtual-machine ollama[5697]: time=2024-11-30T17:58:42.691+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.9 GiB"
Nov 30 17:58:54 fangjunpeng-virtual-machine ollama[5697]: [GIN] 2024/11/30 - 17:58:54 | 200 | 30.476µs | 127.0.0.1 | GET "/"
Nov 30 18:00:15 fangjunpeng-virtual-machine systemd[1]: /etc/systemd/system/ollama.service:12: Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0:11434”
Nov 30 18:00:19 fangjunpeng-virtual-machine systemd[1]: Stopping Ollama Service...
Nov 30 18:00:19 fangjunpeng-virtual-machine systemd[1]: ollama.service: Deactivated successfully.
Nov 30 18:00:19 fangjunpeng-virtual-machine systemd[1]: Stopped Ollama Service.
Nov 30 18:00:19 fangjunpeng-virtual-machine systemd[1]: Started Ollama Service.
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: 2024/11/30 18:00:19 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.819+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.819+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.819+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.819+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama2848400173/runners
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.994+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm cpu cpu_avx]"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.994+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.998+08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
Nov 30 18:00:19 fangjunpeng-virtual-machine ollama[5747]: time=2024-11-30T18:00:19.998+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.9 GiB"
Nov 30 18:02:08 fangjunpeng-virtual-machine ollama[5747]: [GIN] 2024/11/30 - 18:02:08 | 200 | 58.387µs | 127.0.0.1 | GET "/api/version"
Nov 30 19:10:22 fangjunpeng-virtual-machine systemd[1]: /etc/systemd/system/ollama.service:12: Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0:11434”
Nov 30 19:10:28 fangjunpeng-virtual-machine systemd[1]: /etc/systemd/system/ollama.service:12: Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0:11434”
Nov 30 19:27:59 fangjunpeng-virtual-machine systemd[1]: /etc/systemd/system/ollama.service:12: Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0”
Nov 30 19:28:03 fangjunpeng-virtual-machine systemd[1]: Stopping Ollama Service...
Nov 30 19:28:03 fangjunpeng-virtual-machine systemd[1]: ollama.service: Deactivated successfully.
Nov 30 19:28:03 fangjunpeng-virtual-machine systemd[1]: Stopped Ollama Service.
Nov 30 19:28:03 fangjunpeng-virtual-machine systemd[1]: Started Ollama Service.
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: 2024/11/30 19:28:03 routes.go:1197: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=images.go:753 msg="total blobs: 5"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=routes.go:1248 msg="Listening on 127.0.0.1:11434 (version 0.4.6)"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.046+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama1040625498/runners
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.238+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11 cuda_v12 rocm cpu]"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.238+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.242+08:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
Nov 30 19:28:03 fangjunpeng-virtual-machine ollama[6248]: time=2024-11-30T19:28:03.243+08:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="3.8 GiB" available="2.9 GiB"
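
The decisive lines in this log are the systemd warnings rather than the ollama output: "Invalid environment assignment, ignoring: “OLLAMA_HOST=0.0.0.0”" means systemd rejected the line (here because of the typographic quotes), so the server kept its built-in default of 127.0.0.1. A quick filter to surface such warnings, as a sketch:

```
# systemd's complaints about a unit are attributed to it, so they appear
# under -u ollama; any hit means an Environment= line never took effect.
journalctl -u ollama --no-pager | grep -i "invalid environment"
```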

@rick-github commented on GitHub (Nov 30, 2024):

OLLAMA_HOST:http://127.0.0.1:11434

ollama thinks the address to bind to is still 127.0.0.1. What's the output of

systemctl --no-pager cat ollama

@2277509846 commented on GitHub (Nov 30, 2024):

fangjunpeng@fangjunpeng-virtual-machine:~/v2ray$ systemctl --no-pager cat ollama

# /etc/systemd/system/ollama.service

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment=“OLLAMA_HOST=0.0.0.0”

[Install]
WantedBy=default.target

@vansatchen commented on GitHub (Nov 30, 2024):

Try using correct (straight ASCII) quotation marks, like this:

Environment="OLLAMA_HOST=0.0.0.0"

not

Environment=“OLLAMA_HOST=0.0.0.0”
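
For reference, a non-interactive sketch of applying the fix with straight ASCII quotes via a systemd drop-in override (sudo systemctl edit ollama achieves the same interactively; the override directory below is the standard systemd location):

```
# Write a drop-in override with straight ASCII quotes instead of editing
# the packaged unit file directly.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```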

@2277509846 commented on GitHub (Nov 30, 2024):

I wrote the configuration file incorrectly. Thank you.

@Jong-cx commented on GitHub (Nov 30, 2024):

OLLAMA_HOST:http://127.0.0.1:11434

ollama thinks the address to bind to is still 127.0.0.1. What's the output of

systemctl --no-pager cat ollama

Hello, I wonder how to change this IP.

@rick-github commented on GitHub (Nov 30, 2024):

Edit the configuration file and set the OLLAMA_HOST variable.
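
OLLAMA_HOST also steers the client side: once the server binds to 0.0.0.0, a remote ollama CLI or curl can point at it. A sketch, with 192.0.2.10 as a placeholder for the server's real address:

```
# Run from another machine; 192.0.2.10 is a placeholder for the server IP.
OLLAMA_HOST=192.0.2.10:11434 ollama list
curl http://192.0.2.10:11434/api/version
```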

@Jong-cx commented on GitHub (Nov 30, 2024):

OK, I've solved this problem, thank you.

Reference: github-starred/ollama#5050