[GH-ISSUE #1378] Is there a health check endpoint? #727

Closed
opened 2026-04-12 10:23:44 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @jamesbraza on GitHub (Dec 4, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1378

Is there a health check endpoint for the Ollama server? And if yes, where can I find docs on it?

Alternately, is there a currently existing endpoint that can function as a health check?


@technovangelist commented on GitHub (Dec 4, 2023):

Depends on what you are asking for. There is an endpoint at `/` that lets you know that the server is running. Is that good enough for your needs?


@jamesbraza commented on GitHub (Dec 4, 2023):

Oh nice, that's exactly what is needed, thank you!

```bash
> curl localhost:11434
Ollama is running
```

@jamesbraza commented on GitHub (Feb 20, 2024):

It's worth mentioning for posterity that https://github.com/ollama/ollama/pull/2045 proposes a possible health check implementation that uses bash's /dev/tcp sockets:

```yaml
healthcheck:
  test: "bash -c 'cat < /dev/null > /dev/tcp/localhost/11434'"
```

@icfly2 commented on GitHub (Jun 11, 2024):

This sadly doesn't help.

I have a situation where Ollama responds with "Ollama is running" when queried, but hitting /api/generate results in a timeout.


@gaardhus commented on GitHub (Oct 3, 2024):

I guess `ollama ps` is kind of a health check? https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-tell-if-my-model-was-loaded-onto-the-gpu


@Martinez1991 commented on GitHub (Oct 25, 2024):

I had the same problem; I tested this and it worked:

```yaml
test: "ollama --version && ollama ps || exit 1"
```


@adamoutler commented on GitHub (Jan 14, 2025):

Health check ensuring the service is running and capable of responding on port 11434:

```yaml
healthcheck:
  test: [ "CMD-SHELL", "bash", "-c", "{ printf >&3 'GET / HTTP/1.0\\r\\n\\r\\n'; cat <&3; } 3<>/dev/tcp/localhost/11434 | grep 'Ollama is' || exit 1" ]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 10s
```

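A note on the stanza above: Docker's `CMD-SHELL` form expects a single command string, which it runs via `/bin/sh -c`, while `/dev/tcp` redirection is a bash feature. A sketch of a variant using the exec `CMD` form with an explicit `bash -c` (timing values are illustrative, not prescriptive):

```yaml
healthcheck:
  # Exec form: invoke bash directly, since /dev/tcp redirection is bash-specific
  # and CMD-SHELL would hand the string to /bin/sh.
  test: ["CMD", "bash", "-c", "{ printf >&3 'GET / HTTP/1.0\\r\\n\\r\\n'; cat <&3; } 3<>/dev/tcp/localhost/11434 | grep -q 'Ollama is'"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 10s
```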

@davidquicast commented on GitHub (Feb 16, 2025):

This entrypoint.sh script worked for me:

```bash
#!/bin/bash

# Start Ollama in the background
/bin/ollama serve &
# Record Process ID
pid=$!

# Pause for Ollama to start
sleep 5

# Extract model name from MODEL variable (removing quotes if present)
MODEL_NAME=$(echo "$MODEL" | tr -d '"')

# Check if MODEL_NAME has a value
if [ -z "$MODEL_NAME" ]; then
    echo "❌ No model specified in MODEL environment variable"
else
    # Check if model exists
    if ollama list | grep -q "$MODEL_NAME"; then
        echo "🟢 Model ($MODEL_NAME) already installed"
        touch /tmp/ollama_ready  # Creates a temporary file to signal readiness
    else
        echo "🔴 Retrieving model ($MODEL_NAME)..."
        # Attempt to pull model and verify before creating the ready flag
        if ollama pull "$MODEL_NAME" 2>/dev/null && ollama list | grep -q "$MODEL_NAME"; then
            echo "🟢 Model download complete!"
            touch /tmp/ollama_ready  # Mark readiness after successful download
        else
            echo "❌ Error downloading model ($MODEL_NAME)"
        fi
    fi
fi

# Wait for Ollama process to finish
wait $pid
```

And in my docker-compose.yml:

```yaml
ollama:
    image: ollama/ollama:${OLLAMA_HARDWARE:-latest}
    ports:
      - "11434:11434"
    env_file:
      - .env
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL:-}
    volumes:
      - ollama_data:/root/.ollama
      - ./docker/ollama/entrypoint.sh:/entrypoint.sh
    networks:
      - app-network
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    tty: true
    restart: always
    entrypoint: ["/usr/bin/bash", "/entrypoint.sh"]
    healthcheck:
      test:
        - "CMD-SHELL"
        - |
          test -f /tmp/ollama_ready && \
          bash -c '</dev/tcp/localhost/11434'  # Checks if Ollama is accepting connections
      interval: 10s
      timeout: 10s
      retries: 100
      start_period: 10s
```
  • Temporary file (/tmp/ollama_ready): Used as a flag to indicate that the model is ready, so the health check knows when Ollama is initialized and the model is listed/downloaded.
  • /dev/tcp/localhost/11434: This tests if Ollama's API is accessible on port 11434.

@evrial commented on GitHub (Jul 12, 2025):

Created PR #11393 to add a health check.


@YKDZ commented on GitHub (Oct 26, 2025):

How about:

```yml
services:
  ollama:
    image: ollama/ollama
    healthcheck:
      test: ollama --version || exit 1
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
```

This works very well for me.


@adamoutler commented on GitHub (Oct 26, 2025):

> How about:
>
>     services:
>       ollama:
>         image: ollama/ollama
>         healthcheck:
>           test: ollama --version || exit 1
>           start_period: 20s
>           interval: 30s
>           retries: 5
>           timeout: 5s
>
> This works very well for me.

That does verify that the ollama binary will run, but it's a bit more thorough to verify the server is up and able to respond to requests:

```yaml
healthcheck:
  test:
    - "CMD-SHELL"
    - |
      test -f /tmp/ollama_ready && \
      bash -c '</dev/tcp/localhost/11434'  # Checks if Ollama is accepting connections
```

@luckylinux commented on GitHub (Jan 24, 2026):

@adamoutler: are you sure that it works?

```bash
root@ollama:/# ls -l /tmp/ollama_ready
ls: cannot access '/tmp/ollama_ready': No such file or directory
root@ollama:/# ls -l /dev/tcp/
ls: cannot access '/dev/tcp/': No such file or directory
```

Maybe there is a regression?

I could do curl from the Ollama WebUI container in the same pod; also, installing curl temporarily in the ollama container works.

```bash
root@ollama:/# apt-get update; apt-get install -y curl
...
root@ollama:/# curl -vvv -L http://127.0.0.1:11434
*   Trying 127.0.0.1:11434...
* Connected to 127.0.0.1 (127.0.0.1) port 11434
> GET / HTTP/1.1
> Host: 127.0.0.1:11434
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Date: Sat, 24 Jan 2026 07:09:15 GMT
< Content-Length: 17
<
* Connection #0 to host 127.0.0.1 left intact
Ollama is running

root@ollama:/# echo $?
0
```

Note that I'm running rootless Podman, so maybe /dev/tcp doesn't have enough privileges (?).

That doesn't explain why /tmp/ollama_ready is missing though 😕.

EDIT 1: a possible alternative is to use grep:

```bash
root@ollama:/# grep -q "00000000000000000000000000000000:2CAA" /proc/net/tcp*
root@ollama:/# echo $?
0
```

Since 0x2CAA is 11434.

Strangely enough, I find the result in /proc/net/tcp6 instead of /proc/net/tcp:

```bash
root@ollama:/# cat /proc/net/tcp6
  sl  local_address                         remote_address                        st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
   0: 00000000000000000000000000000000:2CAA 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 448745 2 00000000e987687a 100 0 0 10 0
```

Installing iproute2 temporarily:

```bash
root@ollama:/# ss -nlt
State    Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN   0        2048           0.0.0.0:8080        0.0.0.0:*
LISTEN   0        4096                 *:11434             *:*
```

So if it listens to BOTH IPv4 and IPv6, then it shows up in `/proc/net/tcp6` alright 😃.
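The /proc/net lookup above can be generalized; a small sketch (the `2CAA` pattern is just the port number in uppercase hex, and the `/proc/net/tcp*` files are Linux-specific):

```shell
# Compute the hex pattern for an arbitrary port, then look for it in the
# kernel's socket tables (covering both the IPv4 and IPv6 views).
port=11434
hex=$(printf '%04X' "$port")
echo "$hex"   # 2CAA

if grep -q ":$hex " /proc/net/tcp /proc/net/tcp6 2>/dev/null; then
  echo "port $port present in /proc/net/tcp*"
else
  echo "port $port not found"
fi
```

Note that the pattern also matches remote-address columns (as does the grep in the comment above), so it reports connected peers as well as listeners.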

@adamoutler commented on GitHub (Jan 24, 2026):

> @adamoutler: are you sure that it works?

Yes I'm sure it works. It's how Linux networking works.

```bash
root@c69eca68b0a8:/# bash -c '</dev/tcp/localhost/11434'
root@c69eca68b0a8:/# echo $?
0
root@c69eca68b0a8:/# bash -c '</dev/tcp/localhost/11435'
bash: connect: Connection refused
bash: line 1: /dev/tcp/localhost/11435: Connection refused
root@c69eca68b0a8:/# echo $?
1
root@c69eca68b0a8:/#
```

@luckylinux commented on GitHub (Jan 24, 2026):

@adamoutler: I see; from a quick search, I gather that's just a bash built-in feature, not a real endpoint in /dev as I first thought.

Same result here, then:

```bash
root@ollama:/# bash -c '</dev/tcp/localhost/11434'
root@ollama:/# echo $?
0
root@ollama:/# bash -c '</dev/tcp/localhost/11435'
bash: connect: Connection refused
bash: line 1: /dev/tcp/localhost/11435: Connection refused
root@ollama:/# echo $?
1
root@ollama:/# ls -l /dev/tcp/
ls: cannot access '/dev/tcp/': No such file or directory
```
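Since /dev/tcp is interpreted by bash itself, it works even in minimal images with nothing under /dev. A sketch wrapping it in a reusable helper (the `check_port` name and the 2-second timeout are illustrative, and `timeout` assumes GNU coreutils is present):

```shell
# /dev/tcp/<host>/<port> is a bash redirection feature, not a real device file.
check_port() {
  # timeout guards against a firewalled port that silently drops packets
  # instead of refusing the connection.
  timeout 2 bash -c "</dev/tcp/$1/$2" 2>/dev/null
}

# Port 59999 stands in for a port that is presumably closed.
if check_port localhost 59999; then
  echo "open"
else
  echo "closed"
fi
```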

@ilyesAj commented on GitHub (Mar 28, 2026):

For a health check, you can also use the `/api/ps` endpoint.

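A compose healthcheck built on `/api/ps` might look like the sketch below; note that curl is not included in the stock `ollama/ollama` image (as discussed above), so this assumes an image where curl has been installed, and the timing values are illustrative:

```yaml
healthcheck:
  # /api/ps answers with the list of loaded models; curl -f exits non-zero
  # on HTTP errors, failing the check if the API is not responding.
  test: ["CMD-SHELL", "curl -fsS http://localhost:11434/api/ps || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 3
```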

Reference: github-starred/ollama#727