[GH-ISSUE #1868] ollama in a docker - can't check healthiness - Support Ollama under Rosetta #63107

Closed
opened 2026-05-03 12:07:44 -05:00 by GiteaMirror · 4 comments

Originally created by @FreakDev on GitHub (Jan 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1868

Originally assigned to: @dhiltgen on GitHub.

Hello!

I'm trying to set up Ollama to run in a Docker container, in order to have it run in a RunPod serverless function. To do so, I'd like to pull a model file into my container image (i.e. embed the model file into the Docker image).

Basically I'd like to have a script like this that runs during the build of the image:

```bash
#!/bin/bash

# Start the server in the background, then poll until it answers.
/bin/ollama serve &

while [[ "$(curl -s -o /dev/null -w '%{http_code}' http://0.0.0.0:11434)" != "200" ]]; do
  echo "waiting for ollama"
  sleep 1
done

/bin/ollama pull mistral
```

but this doesn't work; the curl never returns an HTTP code 200...

Any idea why? And/or how could I achieve this (maybe there is another/easier way of doing it)?

Thanks in advance!
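
For reference, a slightly more defensive version of that wait loop might look like the sketch below (assuming Ollama's default port 11434, whose root path answers 200 once the server is up; `MAX_TRIES` is an illustrative name):

```bash
#!/bin/bash
set -euo pipefail

/bin/ollama serve &

# Poll the default API address; give up after a bounded number of tries so a
# broken server fails the build loudly instead of hanging forever.
MAX_TRIES=30
for ((i = 1; i <= MAX_TRIES; i++)); do
  if [[ "$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:11434)" == "200" ]]; then
    /bin/ollama pull mistral
    exit 0
  fi
  echo "waiting for ollama ($i/$MAX_TRIES)"
  sleep 1
done

echo "ollama never became healthy" >&2
exit 1
```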


@FreakDev commented on GitHub (Jan 9, 2024):

Actually, it works when I build the image without specifying a platform (I am on a Mac), but if I try to build the image with the `--platform linux/amd64` option, it tells me:

```
> [11/13] RUN /bin/bash setup.sh tinyllama:
10.15 setup.sh: line 10:    18 Illegal instruction     ollama serve
```

Here is my Dockerfile:

```Dockerfile
FROM ollama/ollama:latest

RUN apt-get install -y curl

ADD . .

ARG MODEL

RUN /bin/bash setup.sh ${MODEL}

ENTRYPOINT ["/bin/bash", "start.sh"]
```

Any idea?


@mxyng commented on GitHub (Jan 9, 2024):

It looks like you're building and running this on Apple Silicon. With `--platform linux/amd64` it's possible it's using Rosetta. The Linux build currently enables AVX, which isn't supported on Rosetta, hence the illegal instruction.
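
For reference, one way to check this from inside the emulated environment is to look for AVX flags in `/proc/cpuinfo`; under emulation they typically don't show up. A sketch, assuming the stock `ollama/ollama` image:

```bash
# Open a shell in the amd64 image and report any AVX CPU flags visible to it.
docker run --rm --platform linux/amd64 --entrypoint /bin/bash \
  ollama/ollama:latest \
  -c "grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u || echo 'no AVX flags reported'"
```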


@FreakDev commented on GitHub (Jan 9, 2024):

I see... so, as far as I understand, I can't build an image that uses Ollama and targets the linux/amd64 platform from my Apple Silicon Mac?

Thank you for your feedback! By any chance, do you know if there is another way to do what I am trying to do (embedding a model into a Docker image)?
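
For reference, one possible workaround (not suggested in the thread) is to pull the model natively and copy the resulting store into the image, so `ollama serve` never has to run under emulation at build time. A sketch, assuming the default store locations at the time of the thread and an illustrative image tag:

```bash
# 1) Pull the model natively on the Mac (no emulation involved):
ollama pull mistral

# 2) Stage the host's model store into the Docker build context:
cp -R ~/.ollama/models ./models

# 3) In the Dockerfile, replace the build-time serve+pull with a plain copy,
#    e.g.:  COPY models /root/.ollama/models
#    (the official image reads its store from /root/.ollama/models by default,
#    overridable via OLLAMA_MODELS), then build for amd64 as before:
docker build --platform linux/amd64 -t my-ollama-image .
```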


@dhiltgen commented on GitHub (Jan 10, 2024):

At present, that is correct. Ollama won't run under Rosetta.

I'm working on some updates that will enable Rosetta support as a fallback mode.
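
For reference, until such a fallback exists, another way around the limitation (not from the thread) is to run the `linux/amd64` build on native amd64 hardware, for example via a remote buildx node; a sketch, where `user@amd64-host` is a placeholder:

```bash
# Create a buildx builder backed by a native amd64 Docker engine over SSH,
# then run the same build through it instead of local emulation.
docker buildx create --name amd64-builder ssh://user@amd64-host
docker buildx build --builder amd64-builder --platform linux/amd64 \
  --build-arg MODEL=mistral -t my-ollama-image --load .
```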

Reference: github-starred/ollama#63107