[GH-ISSUE #3944] /api/embeddings hangs when prompt is only whitespace #48958

Closed
opened 2026-04-28 10:17:55 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @alexmavr on GitHub (Apr 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3944

Originally assigned to: @jmorganca on GitHub.

What is the issue?

The following invocation hangs indefinitely:

$ curl http://localhost:11434/api/embeddings -d '{
  "model": "all-minilm",
  "prompt": " "
}'

The same behavior occurs with the model "mxbai-embed-large".

Relevant debug logs:

{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"0x1ea27fac0","timestamp":1714138666}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53684,"status":200,"tid":"0x16be2b000","timestamp":1714138666}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"0x1ea27fac0","timestamp":1714138666}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53689,"status":200,"tid":"0x16beb7000","timestamp":1714138666}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":8,"tid":"0x1ea27fac0","timestamp":1714138666}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":8,"tid":"0x1ea27fac0","timestamp":1714138666}

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.1.32

GiteaMirror added the bug label 2026-04-28 10:17:55 -05:00
Author
Owner

@brycereitano commented on GitHub (Apr 26, 2024):

I was able to reproduce this when using an embedding-only model on CPU. A chat model handled it gracefully and produced an embedding, in contrast to an empty string, which returns no vectors.

I got the following error when force-cancelling the request, which points to an issue in llama.cpp rather than a bug in Ollama.

time=2024-04-26T07:43:23.124-06:00 level=INFO source=routes.go:485 msg="embedding generation failed: do embedding request: Post \"http://127.0.0.1:36759/embedding\": EOF"
Author
Owner

@EnGassa commented on GitHub (Apr 26, 2024):

+1

Author
Owner

@alexmavr commented on GitHub (Apr 26, 2024):

For more context, the issue initially presented while trying to embed a string containing only the UTF-8 BOM byte sequence, as rendered by Go with fmt.Sprintf("%s", bomString), so this is not exclusive to whitespace.

Author
Owner

@JornWildt commented on GitHub (May 29, 2024):

Also seen here (Windows, GeForce RTX 4060).

Ollama hangs completely when sending this:

POST {{LocalAddress}}/api/embeddings
Content-Type: application/json
Accept: application/json

{
  "model": "nomic-embed-text",
  "prompt": "\r\n"
}
Author
Owner

@jmorganca commented on GitHub (Jun 29, 2024):

Hi all this should be fixed now - thanks for reporting @alexmavr 😊


Reference: github-starred/ollama#48958