[GH-ISSUE #4554] v0.1.38 OLLAMA_HOST no longer works. #2854

Closed
opened 2026-04-12 13:12:06 -05:00 by GiteaMirror · 7 comments

Originally created by @szRyu666 on GitHub (May 21, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4554

Originally assigned to: @dhiltgen on GitHub.

### What is the issue?

When I updated to v0.1.38, I encountered two issues:

1. `OLLAMA_HOST` no longer works. I followed the previous method of configuring the service by adding environment variables and restarting it (see the systemd sketch below). However, when I check with `ss -tuln`, port 11434 is still bound to 127.0.0.1.

2. After resetting the model path and restarting the service, running `ollama list` still returns an empty list. The model list from before the update only reappears after running `ollama run` with any model.
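For reference, the usual way to set these variables for the Linux systemd service is a drop-in override rather than shell exports; a minimal sketch, assuming the stock `ollama.service` unit from the official installer (`/path/to/models` is a placeholder):

```
# Opens an editor for a drop-in override
# (/etc/systemd/system/ollama.service.d/override.conf)
sudo systemctl edit ollama.service

# Add to the override file:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
#   Environment="OLLAMA_MODELS=/path/to/models"

# Apply the change and verify the bind address
sudo systemctl daemon-reload
sudo systemctl restart ollama
ss -tuln | grep 11434
```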

### OS

Linux

### GPU

Other

### CPU

Intel

### Ollama version

0.1.38

GiteaMirror added the bug label 2026-04-12 13:12:06 -05:00

@hhk123 commented on GitHub (May 22, 2024):

same


@pdevine commented on GitHub (May 22, 2024):

I just tried it and everything in `0.1.38` seems to be working just fine.

`OLLAMA_HOST=0.0.0.0:11434 ollama serve`

and then on a separate host:

```
OLLAMA_HOST=x.x.x.x ollama run llama3
>>> hi there
Hi there! It's nice to meet you. Is there something I can help you
with, or would you like to chat?
```

The output of `ss`:

```
$ ss -tuln
...
tcp   LISTEN 0      4096                              *:11434              *:*
```
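A quick way to confirm reachability from the second host, independent of the CLI (a sketch; `x.x.x.x` stands for the server's address as above):

```
# The server's root endpoint returns a plain-text health banner
$ curl http://x.x.x.x:11434/
Ollama is running
```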

@dhiltgen commented on GitHub (May 22, 2024):

@szRyu666 can you share the value you use for `OLLAMA_HOST` and your server log?


@hhk123 commented on GitHub (May 27, 2024):

> I just tried it and everything in `0.1.38` seems to be working just fine.
>
> `OLLAMA_HOST=0.0.0.0:11434 ollama serve`
>
> and then on a separate host:
>
> ```
> OLLAMA_HOST=x.x.x.x ollama run llama3
> >>> hi there
> Hi there! It's nice to meet you. Is there something I can help you
> with, or would you like to chat?
> ```
>
> The output of `ss`:
>
> ```
> $ ss -tuln
> ...
> tcp   LISTEN 0      4096                              *:11434              *:*
> ```

Thanks a lot! It works!


@Samsonium commented on GitHub (Aug 5, 2024):

Hello, guys!
Not working for me ;(

```
$ OLLAMA_HOST=0.0.0.0:11434 ollama serve
2024/08/05 17:11:17 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/samsonium/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-05T17:11:17.650+03:00 level=INFO source=images.go:781 msg="total blobs: 0"
time=2024-08-05T17:11:17.650+03:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-05T17:11:17.650+03:00 level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
time=2024-08-05T17:11:17.650+03:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama356581123/runners
time=2024-08-05T17:11:20.524+03:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60102 cpu cpu_avx cpu_avx2]"
time=2024-08-05T17:11:20.525+03:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-08-05T17:11:20.605+03:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-a75829ad-6b01-4c50-18c5-28ac60ad905f library=cuda compute=8.6 driver=12.2 name="NVIDIA GeForce RTX 3090 Ti" total="23.6 GiB" available="22.0 GiB"
```

```
$ netstat -tuln | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN
```
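A `tcp6` listener on `:::11434` is not by itself a failure: on a dual-stack Linux host, an IPv6 wildcard socket also accepts IPv4 connections through v4-mapped addresses unless `net.ipv6.bindv6only` is set to 1. A quick check (a sketch; nothing here is Ollama-specific):

```
# 0 (the kernel default) means the v6 wildcard socket also accepts IPv4
sysctl net.ipv6.bindv6only

# Confirm the server answers over plain IPv4
curl -4 http://127.0.0.1:11434/
```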

@dhiltgen commented on GitHub (Aug 5, 2024):

@Samsonium this sounds unrelated to the original, now-resolved issue; it looks more like an IPv4 vs. IPv6 problem. Please go ahead and open a new issue; it would be helpful to explain how your system's network is configured (pure IPv6, mixed IPv4/IPv6, etc.).
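When filing that issue, it may also help to show whether the port answers over each address family from the client machine; a minimal sketch using curl's `-4`/`-6` flags (`SERVER_ADDR` is a placeholder for the server's hostname or IP):

```
# Force an IPv4 connection
curl -4 http://SERVER_ADDR:11434/

# Force an IPv6 connection (bracket a literal IPv6 address, e.g. http://[::1]:11434/)
curl -6 http://SERVER_ADDR:11434/
```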


@Samsonium commented on GitHub (Aug 5, 2024):

OK, thank you for the reply. I'll create it tomorrow.
