[GH-ISSUE #2810] Sending an empty prompt twice to the embedding API stalls ollama. #1702

Closed
opened 2026-04-12 11:40:44 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @dstruck on GitHub (Feb 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2810

Originally assigned to: @jmorganca on GitHub.

"ollama version is 0.1.27" running on "Debian GNU/Linux 12 (bookworm)".

Running the API call `curl http://localhost:11434/api/embeddings -d '{"model": "llama2", "prompt": ""}'` returns `{"embedding":null}` as expected.

Running the same API call a second time stalls ollama completely. Restarting the service does not work; the process has to be killed with `-9`.
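The reproduction above can be sketched as a small script. This is my own sketch, not part of the original report: the URL, model name, and payload come from the issue, while the `--max-time` guard and the fallback message are assumptions added so the shell does not hang if the second request stalls the server.

```shell
#!/bin/sh
# Hypothetical repro for ollama/ollama#2810: send an empty embedding prompt twice.
# URL and payload are taken from the report; --max-time is a defensive addition.
URL=http://localhost:11434/api/embeddings
BODY='{"model": "llama2", "prompt": ""}'

for attempt in 1 2; do
  echo "attempt $attempt:"
  # On an affected build the second request reportedly never returns,
  # so bail out after 30 seconds instead of hanging the shell.
  curl --silent --max-time 30 "$URL" -d "$BODY" || echo "(request failed or timed out)"
  echo
done
```

With a healthy server both attempts should print `{"embedding":null}`; on an affected build the second attempt hits the timeout instead.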

GiteaMirror added the bug label 2026-04-12 11:40:44 -05:00

Reference: github-starred/ollama#1702