[GH-ISSUE #9781] curl missing from Ollama Docker image, causing healthcheck failures #68452

Closed
opened 2026-05-04 13:59:34 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @Merlin-ki on GitHub (Mar 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9781

What is the issue?

Issue Type: Bug

Summary:

The ollama/ollama Docker image (tested with latest and 0.1.26) is missing the curl command, which is required for the default healthcheck to function correctly. This results in the container being marked as unhealthy by Docker, even though the Ollama server itself is running and responsive. This breaks integrations that rely on the healthcheck, such as Traefik's automatic service discovery and certificate management.

Steps to Reproduce:

  1. Use the following minimal docker-compose.yml:

    version: '3.8'
    
    services:
      ollama:
        image: ollama/ollama:latest  # OR: ollama/ollama:0.1.26
        container_name: ollama
        volumes:
          - ./ollama_models:/root/.ollama  # Optional: Mount a volume for persistent models
        restart: unless-stopped
        deploy:
          resources:
            limits:
              cpus: '8'   # Adjust as needed
              memory: 32G  # Adjust as needed
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
          interval: 60s
          timeout: 30s
          retries: 5
          start_period: 120s
        # No environment variables needed for minimal reproduction
        # No labels needed for minimal reproduction
        networks:
          - default
    
    networks:
      default:
        # Use default Docker network for simple reproduction. No Traefik needed.
    
    
    • Create an empty directory (e.g., ollama_test).
    • Create a docker-compose.yml file inside that directory with the content above.
    • Create an empty directory ollama_models for the model volume mount.
  2. Start the container:

    cd ollama_test
    docker-compose up -d
    
  3. Observe the health status:

    docker ps
    

    The container will show a status of (health: starting) for a while, and then switch to (unhealthy).

  4. Check the logs:

    docker logs ollama
    

    The logs will show messages like: OCI runtime exec failed: exec failed: unable to start container process: exec: "curl": executable file not found in $PATH: unknown

  5. Enter the container and test:

     docker exec -it ollama bash
     curl --version
    

    You will see: bash: curl: command not found

Expected Behavior:

The docker ps command should show the Ollama container with a status of (healthy). The healthcheck should succeed without requiring manual intervention.

Actual Behavior:

The docker ps command shows the Ollama container with a status of (unhealthy). The healthcheck fails because curl is not found within the container's $PATH.

Workaround (Temporary):

Manually installing curl inside the running container allows the healthcheck to pass:

docker exec -it ollama bash
apt-get update && apt-get install -y curl
exit
# The healthcheck should now pass after a few intervals.
docker ps # Check again
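
As a hedged alternative that needs no changes to the image at all, the healthcheck can avoid curl by calling the ollama CLI that already ships in the container; `ollama ls` contacts the server and exits non-zero when it is unreachable. This is a sketch, not part of the original report:

```yaml
# Sketch of a curl-free healthcheck using the ollama binary bundled in the image.
# `ollama ls` queries the local server and fails if it is not responding.
healthcheck:
  test: ["CMD", "ollama", "ls"]
  interval: 60s
  timeout: 30s
  retries: 5
  start_period: 120s
```

Note this probes the same /api/tags endpoint indirectly (listing models), so it is close in spirit to the documented `curl -f http://localhost:11434/api/tags` check.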

Environment:

  • Operating System: Ubuntu 22.04 (Please replace this with the exact output of lsb_release -a and uname -a on your server)
  • Docker Version: (Please provide the output of docker version)
  • Docker Compose Version: (Please provide the output of docker-compose version or docker compose version)
  • Ollama Image(s): ollama/ollama:latest, ollama/ollama:0.1.26 (and any other versions you tested)
  • No GPU Usage

Additional Notes:

  • This issue was discovered while attempting to integrate Ollama with Traefik, but the problem is reproducible with a minimal docker-compose.yml and is not specific to Traefik.
  • The problem persists even after completely removing all Ollama containers, images, volumes, and networks, and rebuilding from scratch.
  • We have spent considerable effort to ensure this is not caused by any custom configuration or environment variables.
  • This issue is similar to, but distinct from, previously closed issues like #6641 and #4551, as it focuses specifically on the missing curl dependency and provides a minimal, reproducible example.

Proposed Solution:

Include curl (or another suitable HTTP client) in the base Ollama Docker image. The healthcheck depends on it, and it's a generally useful tool for debugging within the container.
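
Until that happens, a minimal derived image bakes the workaround in at build time instead of patching a running container. This is a sketch, assuming the base image remains Debian/Ubuntu-based with apt available (which the apt-get workaround above demonstrates); the tag name is illustrative:

```dockerfile
# Hypothetical derived image that adds curl so the documented healthcheck works.
FROM ollama/ollama:latest
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```

Build with `docker build -t ollama-curl .` and point the compose service's `image:` at `ollama-curl`; the original healthcheck then passes unmodified.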

Impact:

This bug prevents users from easily deploying Ollama with Docker Compose and using healthchecks, which are crucial for production deployments and integrations with other services like Traefik.

best regards merlin

### Relevant log output

```shell
**Ollama Container Logs (showing the missing `curl` error):**


Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGe+UlaL0GTE3wGIg/T1xouY8F9wqFkv1IFciHvhmmQx
2025/03/15 07:07:53 routes.go:1225: INFO server config env="map[... OLLAMA_HOST:http://0.0.0.0:11434 ...]"  # Shortened, but OLLAMA_HOST shows the issue
time=2025-03-15T07:07:53.613Z level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-03-15T07:07:53.613Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-15T07:07:53.613Z level=INFO source=routes.go:1292 msg="Listening on [::]:11434 (version 0.6.0)"
time=2025-03-15T07:07:53.613Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-15T07:07:53.637Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-03-15T07:07:53.637Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="125.7 GiB" available="105.6 GiB"
# ... (Waiting for healthcheck to fail) ...
# The following lines might appear multiple times, depending on the healthcheck interval:
time=2025-03-15T07:08:57.052947268+01:00 ... "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"


**Output of `docker ps` (showing the `unhealthy` status):**


CONTAINER ID   IMAGE                   COMMAND                CREATED          STATUS                            PORTS      NAMES
15d1713e05dc   ollama/ollama:0.1.26   "/bin/ollama serve"   5 minutes ago    Up 5 minutes (unhealthy)          11434/tcp  ollama-01
c47247cb2b50   traefik:v3.1.4         "/entrypoint.sh trae…"  50 minutes ago   Up 50 minutes                     80/tcp, 443/tcp   traefik


**Output of `docker inspect ollama-01` (showing the `OLLAMA_HOST` variable and healthcheck details):**


[
    {
        "Id": "f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559",
        "Created": "2025-03-15T09:04:12.059930813Z",
        "Path": "/bin/ollama",
        "Args": [
            "serve"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 28394,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2025-03-15T09:04:54.663853985Z",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Health": {
                "Status": "unhealthy",
                "FailingStreak": 7,
                "Log": [
                    {
                        "Start": "2025-03-15T10:08:57.052947268+01:00",
                        "End": "2025-03-15T10:08:57.142997693+01:00",
                        "ExitCode": -1,
                        "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"
                    },
                    {
                        "Start": "2025-03-15T10:09:57.144141427+01:00",
                        "End": "2025-03-15T10:09:57.181955129+01:00",
                        "ExitCode": -1,
                        "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"
                    },
                    {
                        "Start": "2025-03-15T10:10:57.182858911+01:00",
                        "End": "2025-03-15T10:10:57.222526325+01:00",
                        "ExitCode": -1,
                        "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"
                    },
                    {
                        "Start": "2025-03-15T10:11:57.223619676+01:00",
                        "End": "2025-03-15T10:11:57.313431065+01:00",
                        "ExitCode": -1,
                        "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"
                    },
                    {
                        "Start": "2025-03-15T10:12:57.314774903+01:00",
                        "End": "2025-03-15T10:12:57.347084427+01:00",
                        "ExitCode": -1,
                        "Output": "OCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown"
                    }
                ]
            }
        },
        "Image": "sha256:b9162cd6df73694f32c5e7c7250bcdd8b7dc6f77359df5a9693d7c2ca074cf2f",
        "ResolvConfPath": "/var/lib/docker/containers/f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559/hostname",
        "HostsPath": "/var/lib/docker/containers/f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559/hosts",
        "LogPath": "/var/lib/docker/containers/f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559/f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559-json.log",
        "Name": "/ollama-01",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/home/docker/ollama/ollama_models:/root/.ollama:rw"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "lan-router",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "unless-stopped",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "private",
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": [],
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 34359738368,
            "NanoCpus": 8000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 68719476736,
            "MemorySwappiness": null,
            "OomKillDisable": null,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware",
                "/sys/devices/virtual/powercap"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "ID": "f9dc6380c1bcd61b6d0ad216192e2043251b5ccde49e8a1717e85d86b7d9b559",
                "LowerDir": "/var/lib/docker/overlay2/2c558401727e1bf7f62f2884db24006d8e98c4e57aa6af9db6504fc1f46620a3-init/diff:/var/lib/docker/overlay2/03af0830aa124fef0445c825fdb0dab5e69c6614c86a564a3dce8aed1d4a7dba/diff:/var/lib/docker/overlay2/1975be41e148498b808bd36fbc49a601895fe1612596bf7154f7ce7a2a0d6313/diff:/var/lib/docker/overlay2/901134b61deba46df06b6b54f19d5cd4dab0d469136aef7137ac98d1f271d0c3/diff:/var/lib/docker/overlay2/88b1ff91151662ef23432b01a487d8edaf1477d7ae53b6b8ff7405bdcaf87b07/diff",
                "MergedDir": "/var/lib/docker/overlay2/2c558401727e1bf7f62f2884db24006d8e98c4e57aa6af9db6504fc1f46620a3/merged",
                "UpperDir": "/var/lib/docker/overlay2/2c558401727e1bf7f62f2884db24006d8e98c4e57aa6af9db6504fc1f46620a3/diff",
                "WorkDir": "/var/lib/docker/overlay2/2c558401727e1bf7f62f2884db24006d8e98c4e57aa6af9db6504fc1f46620a3/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/home/docker/ollama/ollama_models",
                "Destination": "/root/.ollama",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "f9dc6380c1bc",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": true,
            "AttachStderr": true,
            "ExposedPorts": {
                "11434/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia

OS

Linux

GPU

Intel

CPU

Intel

Ollama version

image: ollama/ollama:0.1.26 and image: ollama/ollama:latest

GiteaMirror added the bug label 2026-05-04 13:59:34 -05:00

@rick-github commented on GitHub (Mar 15, 2025):

https://github.com/ollama/ollama/issues/5389


@Merlin-ki commented on GitHub (Mar 15, 2025):

@rick-github Thank you very much for linking this issue to #5389 and for suggesting the workarounds!
We appreciate your quick response and engagement on this.

While the workarounds (custom HEALTHCHECK, a derived image, or a separate curl service) are helpful in the short term,
they do add complexity to the deployment process. Our main concern is that the documented, default healthcheck, which relies on curl,
does not work out-of-the-box with the official ollama/ollama images. The ideal solution would be for the base ollama/ollama image to include curl
(or a similar, suitable HTTP client) to support the documented healthcheck. This would simplify deployments for everyone and ensure consistency across different environments.

We've updated the original issue description with the detailed output of docker version, docker compose version, lsb_release -a, and uname -a.
We've also attached the complete output of docker inspect ollama-01 as a file (ollama-01_inspect.json).
The key issue remains that the healthcheck fails with the following error because curl is not present in the container:

OCI runtime exec failed: exec failed: unable to start container process: exec: "curl": executable file not found in $PATH: unknown

We also continued our investigation to find the source of a spurious OLLAMA_HOST environment variable that was being injected into the container,
even when not specified in the docker-compose.yml. After a complete system reboot, and extensive searches, we have not been able to locate the source of this variable.
It may have been related to prior, conflicting Let's Encrypt configurations (using certbot) that have since been completely removed.
It is no longer present in the environment after the reboot, and is not currently interfering with the container's operation,
but its previous presence may have contributed to initial difficulties.

The core problem at this time is the missing curl in the image, preventing the default healthcheck from working.

We have confirmed that manually installing curl inside the container (as a temporary workaround) allows the healthcheck to pass.

@rick-github commented on GitHub (Mar 15, 2025):

Realistically, these simple health checks really don't say anything about the health of the ollama service, they just tell you that the server is running. The same can be accomplished with

    healthcheck:
      test: kill -0 1
      interval: 60s
      start_period: 120s

A proper healthcheck requires exporting real metrics from the server, along the lines of what is proposed in #3144.


@ZelphirKaltstahl commented on GitHub (Oct 12, 2025):

@rick-github

No, that is not actually the same at all. Testing whether a process exists is different from testing whether it responds to an HTTP request on a specific route. While it is true that responsiveness on one specific route doesn't mean the service will be ready to reply to requests on all valid routes and in all scenarios, it is definitely different from sending a low-level signal to the root process.

The issue you are linking to is not solved. The PR for a metrics endpoint is not merged, and it is therefore no solution to this issue.
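
For readers who want an HTTP-level probe today without adding curl to the image, one hedged sketch uses bash's built-in /dev/tcp redirection (bash is present in the container, as the reproduction steps show); `http_ok` is a hypothetical helper name, not part of any ollama tooling:

```shell
# Hypothetical helper: succeeds only if an HTTP GET to host:port/path returns 200.
# Relies on bash's /dev/tcp virtual files, so no external HTTP client is needed.
http_ok() {
  local host=$1 port=$2 path=$3
  (
    # Open a TCP connection on fd 3; fail fast if the port is closed.
    exec 3<>"/dev/tcp/${host}/${port}" || exit 1
    printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$path" "$host" >&3
    # Mimic `curl -f`: only a 200 status line counts as healthy.
    head -n1 <&3 | grep -q ' 200 '
  ) 2>/dev/null
}
```

In a compose file this could be inlined as, e.g., `test: ["CMD-SHELL", "bash -c '...'"]` with the body of the function pasted in, or mounted as a small script. Unlike `kill -0 1`, it actually exercises the HTTP listener on /api/tags.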

Reference: github-starred/ollama#68452