[GH-ISSUE #10308] podman error with ollama image with "Error: Head "http://0.0.0.0:11434/": EOF" #32528

Closed
opened 2026-04-22 13:53:51 -05:00 by GiteaMirror · 6 comments

Originally created by @doyoungim999 on GitHub (Apr 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10308

What is the issue?

Hi,
I am required to use podman at my company.
I did not have a problem with Docker, but with podman I got an error and it did not work.
Is there any way to avoid this error?

$ podman run --name ollamamodel -p 11434:11434 ollamamodel:0.1

user01@user01-400TDA-400SDA:~$ podman exec -it 9deb8b6c1e84 bash
root@9deb8b6c1e84:/# ollama list
Error: Head "http://0.0.0.0:11434/": EOF

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-22 13:53:51 -05:00

@rick-github commented on GitHub (Apr 17, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will aid in debugging.
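For a containerized install, the server logs are the container's stdout/stderr. A minimal sketch, reusing the container name from the report:

```shell
# Fetch the ollama server logs from the running container
podman logs ollamamodel

# Or follow them live while reproducing the error
podman logs -f ollamamodel
```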


@doyoungim999 commented on GitHub (Apr 17, 2025):

server log:
user01@user01-400TDA-400SDA:~$ podman logs ollama_4181
Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRYakrXYja7S8z+dq1KU6pWkgmGQSfWtTUxDlnzSRLJ

2025/04/17 22:46:03 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:16.7.241.6:8080 HTTP_PROXY:16.7.241.6:8080 NO_PROXY:localhost,127.0.0.1/8,::1,16.3.30.54,166.79.51.50,166.79.51.70 OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/usr/share/ollama/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-04-17T22:46:03.539Z level=INFO source=images.go:458 msg="total blobs: 0"
time=2025-04-17T22:46:03.539Z level=INFO source=images.go:465 msg="total unused blobs removed: 0"
time=2025-04-17T22:46:03.539Z level=INFO source=routes.go:1298 msg="Listening on [::]:11434 (version 0.6.5)"
time=2025-04-17T22:46:03.540Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-04-17T22:46:03.548Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-04-17T22:46:03.548Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.1 GiB" available="26.9 GiB"
//----
I built an ollama image first, without pulling llama3.2.

[Dockerfile]
FROM ollama/ollama
USER root
RUN useradd ollama -r -s /bin/false -U -m -d /usr/share/ollama \
    && usermod -a -G ollama $(whoami)

USER ollama

RUN ollama serve & server=$! ; sleep 2 ;
ENTRYPOINT [ "/bin/bash", "-c", "(sleep 2 ; ) & exec /bin/ollama $0" ]
CMD [ "serve" ]

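Since the proxy variables turn out to matter here (see the following comments), a quick way to confirm what podman injected into the container is to dump its environment. A sketch, reusing the container name from the `podman run` above:

```shell
# List any proxy-related variables visible inside the container
podman exec ollamamodel env | grep -i proxy
```

Note that the NO_PROXY value in the server log covers localhost and 127.0.0.1/8 but not 0.0.0.0, which would explain why the client's HEAD request to http://0.0.0.0:11434/ was sent through the proxy and came back as an EOF.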

@rick-github commented on GitHub (Apr 17, 2025):

Don't set HTTP_PROXY inside the container.
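Unlike docker, podman forwards the host's proxy environment variables into containers by default. A sketch of two ways to keep them away from the ollama client (flag and option availability depends on your podman version):

```shell
# Option 1: don't forward the host's proxy variables into this container
podman run --http-proxy=false --name ollamamodel -p 11434:11434 ollamamodel:0.1

# Option 2: keep the proxy for outbound pulls, but exempt the local endpoint
# (0.0.0.0 is absent from the NO_PROXY shown in the server log above)
podman run -e NO_PROXY="localhost,127.0.0.1,0.0.0.0" \
  --name ollamamodel -p 11434:11434 ollamamodel:0.1
```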


@doyoungim999 commented on GitHub (Apr 17, 2025):

I did not set any HTTP_PROXY inside the container. I just pulled the ollama/ollama image to deploy the container on OpenShift.
May I ask how to unset HTTP_PROXY inside it?


@rick-github commented on GitHub (Apr 17, 2025):

Presumably there's an environment block in your podman config that sets up variables to be passed into the container, the same place where HTTPS_PROXY is set so that ollama can communicate with the outside world.

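On a plain podman host, that forwarding can also come from containers.conf rather than the run command. A hedged sketch of where to look (paths and keys per containers.conf(5); adjust for your distribution):

```shell
# System-wide and per-user podman configuration that can inject env vars
# ([containers] http_proxy defaults to true; env = [...] adds variables)
grep -n -i -E 'http_proxy|^env' /etc/containers/containers.conf \
  ~/.config/containers/containers.conf 2>/dev/null
```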

@doyoungim999 commented on GitHub (Apr 17, 2025):

After accessing the container and unsetting HTTPS_PROXY and HTTP_PROXY, I can see the result without the error.
Thanks!

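For readers hitting the same error, the reporter's session-level workaround boils down to:

```shell
# Inside the container: clear the inherited proxy variables, then retry
podman exec -it 9deb8b6c1e84 bash
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
ollama list
```

Unsetting in an exec shell only affects that shell; the server keeps its environment, which appears to be fine here since only the client's request was being proxied.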