[GH-ISSUE #8178] qwen 2.5 coder stuck "Stopping" #51731

Closed
opened 2026-04-28 20:48:58 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @MHugonKaliop on GitHub (Dec 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8178

What is the issue?

I have an ollama server running alone on a machine with an NVIDIA L4 card:

Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-122-generic x86_64)
NVIDIA-SMI 550.107.02 Driver Version: 550.107.02 CUDA Version: 12.4

The only environment variable I've configured is Environment="OLLAMA_KEEP_ALIVE=360m" (tried various values)

And the ollama server hosts only one model
qwen2.5-coder:latest 2b0496514337 4.7 GB 27 hours ago

ollama version is 0.5.4

This ollama server is used for an internal copilot tool at my company (mainly with the Continue plugin). We also use it for embeddings (which may be the problem, as you'll see).

Sometimes the server stops handling new queries. There is no reply at all; the Continue plugin waits indefinitely for an answer.
"ollama ps" shows this:
NAME ID SIZE PROCESSOR UNTIL
qwen2.5-coder:latest 2b0496514337 6.0 GB 100% GPU Stopping...

The only method I've found is to restart the service.

Since yesterday, I've had a cron job that checks the status every minute and restarts ollama if it seems stuck, and it writes a log so I have the history.
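
For reference, a minimal sketch of what such a detector could look like (the actual /root/ollama_detector.sh wasn't shared, so the log path, match string, and restart command here are assumptions):

```bash
#!/bin/bash
# Hypothetical reconstruction of a cron-driven "stuck detector".
# Checks `ollama ps` for the "Stopping..." status and restarts the service if seen.
LOG=/root/ollama_detector.log

status=$(ollama ps 2>&1)
if echo "$status" | grep -q "Stopping"; then
    echo "$(date -u) - Detected 'Stopping...' status. Restarting the server..." >> "$LOG"
    systemctl restart ollama
else
    echo "$(date -u) - Server is running normally." >> "$LOG"
fi
```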

It happened again today, so I looked at the ollama log and may have found something.

My detector log shows this:
Thu Dec 19 12:47:01 UTC 2024 - Server is running normally.
Thu Dec 19 12:48:01 UTC 2024 - Detected 'Stopping...' status. Restarting the server...

So I looked at what happened just before that, and found this:

Dec 19 12:43:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:43:01 | 200 |       28.31µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:43:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:43:01 | 200 |       31.81µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:44:01 copilot CRON[44051]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:44:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:44:01 | 200 |       29.14µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:44:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:44:01 | 200 |       25.74µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:45:01 copilot CRON[44068]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:45:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:01 | 200 |      27.819µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:45:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:01 | 200 |       27.72µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:45:29 copilot ollama[3122]: [GIN] 2024/12/19 - 12:45:29 | 200 |        30m40s |   145.239.103.2 | POST     "/api/embed"
Dec 19 12:46:01 copilot CRON[44083]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:46:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:01 | 200 |      32.771µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:46:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:01 | 200 |      28.599µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:46:10 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:10 | 200 |        31m22s |   145.239.103.2 | POST     "/api/embed"
Dec 19 12:46:46 copilot ollama[3122]: [GIN] 2024/12/19 - 12:46:46 | 200 |        31m57s |   145.239.103.2 | POST     "/api/embed"
Dec 19 12:47:01 copilot CRON[44098]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:47:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:47:01 | 200 |      29.859µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:47:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:47:01 | 200 |       31.32µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:48:01 copilot CRON[44115]: (root) CMD (/root/ollama_detector.sh)
Dec 19 12:48:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:48:01 | 200 |      26.699µs |       127.0.0.1 | HEAD     "/"
Dec 19 12:48:01 copilot ollama[3122]: [GIN] 2024/12/19 - 12:48:01 | 200 |       22.65µs |       127.0.0.1 | GET      "/api/ps"
Dec 19 12:48:01 copilot systemd[1]: Stopping Ollama Service...

So it seems the server looked "ok" for about 5 minutes before my cron job detected anything.

But actually, the line just before that block is:
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.483Z level=ERROR source=routes.go:479 msg="embedding generation failed" error="context canceled"

And before that, from 12:42:13 up to that error line, the log is filled with 23,493 log lines (!)

They look like this:

Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: message repeated 6 times: [ time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"]
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.479Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"
Dec 19 12:42:52 copilot ollama[3122]: message repeated 55 times: [ time=2024-12-19T12:42:52.481Z level=INFO source=server.go:875 msg="aborting embedding request due to client closing the connection"]

Something seems wrong there, don't you think?

By the way, when my cron job restarted ollama, there were thousands more log lines like this before the server startup messages appeared.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-28 20:48:58 -05:00

@YonTracks commented on GitHub (Dec 20, 2024):

Howdy, I had a similar issue, and it turned out to be Continue.
Check the Continue config for the embedding setup. In my case, with ollama as the embedding provider, Continue tries to embed the entire repo, which is epic lol.

Look in the "..." ("more") tab in Continue: I found a @codebase index there, and yep, it was indexing the entire repo lol (you can pause it). Or set up Continue accordingly, if that is the actual issue.
good luck.
Screenshot: https://github.com/user-attachments/assets/68e05976-3e3b-4272-8c99-2d3dda7161e2

@MHugonKaliop commented on GitHub (Dec 20, 2024):

Hello

Thank you for your input.
Actually, this is intended to embed all the project files in order to have some kind of RAG.
From what I'm seeing on the server, embedding seems to take really long (more than 10 minutes)...
I'm trying to find out what files (or chunks) are being sent.

@rick-github commented on GitHub (Dec 20, 2024):

If it's an embedding issue, it might be #7288. If the chunk size for the embed is larger than the context window, it causes problems.
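
One way to test that theory is to send a single embed request with a larger context window and see whether the problem still occurs. This is only a sketch: the model name, input, and num_ctx value below are placeholders, and it assumes the standard /api/embed options field:

```bash
# Sketch: probe whether oversized chunks (relative to the default num_ctx)
# are the trigger, by raising num_ctx for one embedding request.
curl -s http://localhost:11434/api/embed -d '{
  "model": "qwen2.5-coder:latest",
  "input": ["one of the chunks that was being embedded ..."],
  "options": { "num_ctx": 8192 }
}'
```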

@YonTracks commented on GitHub (Dec 20, 2024):

Cheers, yep, same issue for me: full indexing takes a really long time on big projects, but there should be good info here: https://docs.continue.dev/customize/deep-dives/codebase/

I see a few settings there to set up that should help. I haven't mastered this yet, but it seems like the right direction.
It will be epic though, very powerful, even with pausing and limiting.
Good luck, hopefully someone else will know more.

@pdevine commented on GitHub (Dec 20, 2024):

@MHugonKaliop you can change OLLAMA_KEEP_ALIVE=-1m to prevent the model from ever being unloaded. The reason why it's probably in the Stopping... state is that it is trying to unload the model because no one has refreshed it in a while, but it's also serving up a long request (as others have mentioned) and just hasn't finished yet.
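
On a systemd install like the one described in this issue, a minimal sketch of applying that setting (the drop-in approach assumes the standard Linux service; any negative keep-alive value means "never unload"):

```bash
# Sketch: keep the model loaded indefinitely by overriding the service environment.
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=-1"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```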

@jessegross commented on GitHub (Dec 21, 2024):

For those that are experiencing this, did it first appear in 0.5.4? Were you previously running with a similar setup successfully on older versions?

@YonTracks commented on GitHub (Dec 21, 2024):

> For those that are experiencing this, did it first appear in 0.5.4? Were you previously running with a similar setup successfully on older versions?

I think it's older, somewhere in here: https://github.com/ollama/ollama/compare/v0.4.3...v0.4.4
But it was also a Continue update that made the issue start showing for me, with nomic-embed-text.

Not fully an ollama issue, I think; ollama works great. Maybe the parallel or concurrent handling of embeddings, not sure. For me it's all related to the frontend calling ollama (like Continue): if I wait, or configure the frontend accordingly, then all is great and super impressive.
There was also a similar, looping-looking issue related to Continue before, but the 0.5 updates made this easier to see in the logs.

So, two similar issues I think:
1. The embedding context size being too big (as in #7288) causes an issue.
2. Many embed requests from Continue (or another frontend) pile up, and ollama keeps trying until they complete or time out, or something like that (if ollama restarts, server.log is cleared). Even if all is fine, the logs streaming away look and feel bad, but kind of aren't, I think.

good luck

@YonTracks commented on GitHub (Dec 21, 2024):

Forgive me if this is more of a hindrance than a help.

While investigating, I noticed a potential issue related to the expireRunner functionality in server/sched.go and how it interacts with the runner-related logic. This might have surfaced around the time of the new Go runners update (so possibly before version 0.4.3).

Observations

  1. Blocking behavior: in server/sched.go, expireRunner uses *Model from images.go, while the *runnerRef struct exists, and the channel being used is of type runnerRef.
  2. The processPending() function appears to have blocking behavior, which might affect handling when using embedding models in parallel.
    Here's the relevant code snippet:
case pending := <-s.pendingReqCh:
    // Block other requests until we get this pending request running
    pending.schedAttempts++
    if pending.origNumCtx == 0 {
        pending.origNumCtx = pending.opts.NumCtx
    }

    if pending.ctx.Err() != nil {
        slog.Debug("pending request cancelled or timed out, skipping scheduling")
        continue
    }
  3. So it continues. The expireRunner function seems to be used in routes.go, indicating this issue might be tied to how the runner-related logic is managed.

While I believe the root cause may not entirely be within Ollama's scope, it seems the system could benefit from a more graceful way of handling misconfigurations or "bad setups." This could help avoid unexpected behavior.
Thank you for your hard work and dedication!
good luck.

@MHugonKaliop commented on GitHub (Dec 22, 2024):

Hello
I haven't had enough time to look deeply into what Continue does.
It's probably related to embedding calls that are too long and/or too many in parallel.
The thing is, as a "novice" Continue user, I can't customise it to make it work "correctly".
It would be better if the ollama server could reject bad usage of the API, to protect itself from getting stuck.

@dhiltgen commented on GitHub (Apr 9, 2025):

There seems to be a race somewhere in the scheduler under heavy load, possibly related to clients closing connections prematurely. If people are still seeing models get stuck in a "Stopping..." state in the ollama ps output and the model never actually unloads, please try running the server with OLLAMA_DEBUG=1 and share the logs, including the model load and the eventual stuck state.
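
For a systemd-managed server like the one in this report, a sketch of enabling that and collecting the window being asked for (unit name and commands assume the standard Linux install):

```bash
# Sketch: turn on debug logging for the ollama service, then capture the journal
# window covering the model load through the stuck "Stopping..." state.
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama

# Once the model is stuck again:
journalctl -u ollama --since "2 hours ago" > ollama-debug.log
```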

@aaronpliu commented on GitHub (Apr 25, 2025):

I'm also experiencing the issue on the latest ollama (v0.6.6). I'm running on a Mac Studio (M2) with qwen2.5-coder:7b.

@NGC13009 commented on GitHub (Jun 24, 2025):

I've encountered the same issue. When using "Immersive Translation" (a Chrome browser extension) to translate extremely long documents, similar problems appear in the logs. I think it might be the same cause. My logs mainly contain these two entries, each appearing hundreds of times:

decode: cannot decode batches with this context (use llama_encode() instead)
time=2025-06-25T01:55:59.456+08:00 level=INFO source=server.go:899 msg="aborting embedding request due to client closing the connection"

@rick-github commented on GitHub (Jun 24, 2025):

The likely problem is that the client has a timeout, and has sent so many embedding requests that ollama can't respond before the client times out and closes the connection.
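
If you control the client, one mitigation consistent with that explanation is to batch chunks into fewer, larger /api/embed calls rather than one request per chunk, so fewer requests wait in the queue long enough to hit the client timeout. A sketch (model name and inputs are placeholders):

```bash
# Sketch: /api/embed accepts an array of inputs, so several chunks can be
# embedded in one request instead of queueing many separate requests.
curl -s http://localhost:11434/api/embed -d '{
  "model": "qwen2.5-coder:latest",
  "input": ["chunk 1 ...", "chunk 2 ...", "chunk 3 ..."]
}'
```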

@karim20010 commented on GitHub (Jul 30, 2025):

how to solve this issue?

@aaronpliu commented on GitHub (Aug 5, 2025):

What's the solution for this? It still occurs in the latest version of Ollama (v0.10.1).

Reference: github-starred/ollama#51731