[GH-ISSUE #4475] It is possible to enable OpenAI Api in Docker image #64834

Closed
opened 2026-05-03 18:55:20 -05:00 by GiteaMirror · 2 comments

Originally created by @Tomichi on GitHub (May 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4475

Hi,
I want to ask if it is possible to enable OpenAI API compatibility in the official Ollama Docker image. The feature works well in the desktop app, but it seems to be missing in the Docker image: https://ollama.com/blog/openai-compatibility

Thank you to anybody who helps.

GiteaMirror added the feature request label 2026-05-03 18:55:21 -05:00

@pdevine commented on GitHub (May 16, 2024):

Hi @Tomichi yes, it should work out of the box. Make certain you have the newest version of ollama by doing a `docker pull ollama/ollama`.

You can then start ollama in docker with:

`docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`

Follow the directions in the [documentation](https://github.com/ollama/ollama/blob/main/docs/docker.md) if you want to use the GPU. Make sure there isn't a port conflict with any locally running copy of ollama (it defaults to port `11434`).
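For reference, a sketch of the two common variants (the `--gpus=all` flag requires the NVIDIA Container Toolkit, per the Docker documentation linked above; the alternate host port `11435` is just an example):

```
# GPU passthrough (NVIDIA Container Toolkit required):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# If port 11434 is already taken on the host (e.g. by a native ollama install),
# map the container to a different host port:
docker run -d -v ollama:/root/.ollama -p 11435:11434 --name ollama ollama/ollama
```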

You can then pull a model to ollama and use it:

```
% ollama pull orca-mini
pulling manifest
pulling 66002b78c70a... 100% ▕███████████████████████████████████████████████████████████
pulling dd90d0f2b7ee... 100% ▕███████████████████████████████████████████████████████████
pulling 93ca9b3d83dc... 100% ▕███████████████████████████████████████████████████████████
pulling 33eb43a1488d... 100% ▕███████████████████████████████████████████████████████████
pulling fd52b10ee3ee... 100% ▕███████████████████████████████████████████████████████████
verifying sha256 digest
writing manifest
removing any unused layers
success
% curl localhost:11434/v1/chat/completions -H "Content-Type: application/json" \
-d '{
        "model": "orca-mini",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }' | jq
```
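If the ollama CLI isn't installed on the host, the same pull can be run inside the container instead (a sketch assuming the container was named `ollama` as above):

```
# Pull the model from inside the running container:
docker exec -it ollama ollama pull orca-mini

# Quick sanity check that the server is reachable before calling the API:
curl localhost:11434
# -> "Ollama is running"
```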

Hopefully this helps! I'll close out the issue.


@Tomichi commented on GitHub (May 16, 2024):

Thank you, yes, it was a schoolboy mistake: I didn't have the newest version of the Docker image, which is why I got a 404 error when I sent requests to the Dockerized ollama. After pulling the latest image it works fine. Thank you @pdevine.
