[GH-ISSUE #10831] Ollama gets stuck using RAGFlow #7112

Open
opened 2026-04-12 19:05:57 -05:00 by GiteaMirror · 5 comments

Originally created by @arturo-air on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10831

What is the issue?

When I am embedding some documents (around 50 small txt files) using RAGFlow, it calls Ollama with mxbai-embed-large:latest for the embeddings, but also with gemma3:27b to summarize some parts (I guess).

After a few minutes, the progress in RAGFlow stops, and when I check Ollama, I see something like this:

arturo@thinkpad:~$ ollama ps
NAME                        ID              SIZE      PROCESSOR    UNTIL              
mxbai-embed-large:latest    468836162de7    1.2 GB    100% GPU     3 minutes from now    
gemma3:27b                  a418f5838eaf    23 GB     100% GPU     Stopping...           

The gemma model always shows the Stopping... state, and the embedding model eventually unloads. If I try to load a model manually (e.g. ollama run gemma3:4b), I cannot, because Ollama stays hung.
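
The log output below comes from the systemd journal; something like this follows it live (a sketch, assuming the default systemd service install):

journalctl -u ollama -f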

Relevant log output

May 23 08:51:16 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:16 | 200 |   50.643792ms |       10.1.0.10 | POST     "/api/embeddings"
May 23 08:51:16 ai-machine ollama[1640479]: decode: cannot decode batches with this context (use llama_encode() instead)
May 23 08:51:16 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:16 | 200 |   28.444272ms |       10.1.0.10 | POST     "/api/embeddings"
May 23 08:51:17 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:17 | 200 |      23.965µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:17 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:17 | 200 |      37.531µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:18 ai-machine ollama[1640479]: time=2025-05-23T08:51:18.021Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="3.1 GiB"
May 23 08:51:18 ai-machine ollama[1640479]: time=2025-05-23T08:51:18.021Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
May 23 08:51:18 ai-machine ollama[1640479]: time=2025-05-23T08:51:18.181Z level=INFO source=server.go:630 msg="llama runner started in 2.76 seconds"
May 23 08:51:19 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:19 | 200 |       21.21µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:19 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:19 | 200 |      35.897µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:21 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:21 | 200 |      35.357µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:21 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:21 | 200 |      39.334µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:23 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:23 | 200 |      19.648µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:23 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:23 | 200 |      19.988µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:25 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:25 | 200 |      24.617µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:25 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:25 | 200 |       21.44µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:27 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:27 | 200 |      21.411µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:27 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:27 | 200 |      18.194µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:28 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:28 | 200 |         1m55s |       10.1.0.10 | POST     "/api/chat"
May 23 08:51:29 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:29 | 200 |      59.642µs |        10.1.0.1 | GET      "/api/version"
May 23 08:51:29 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:29 | 200 |      26.029µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:29 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:29 | 200 |      30.788µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:29 ai-machine ollama[1640479]: time=2025-05-23T08:51:29.809Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-68a32fe6-1c1a-9be1-5097-98aeb59acf22 library=cuda total="23.5 GiB" available="16.3 GiB"
May 23 08:51:29 ai-machine ollama[1640479]: time=2025-05-23T08:51:29.809Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-31d8a8db-e31e-dcbd-54b4-c5768035dd06 library=cuda total="31.4 GiB" available="30.9 GiB"
May 23 08:51:29 ai-machine ollama[1640479]: time=2025-05-23T08:51:29.810Z level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 gpu=GPU-31d8a8db-e31e-dcbd-54b4-c5768035dd06 parallel=2 available=33150730240 required="24.3 GiB"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.094Z level=INFO source=server.go:135 msg="system memory" total="251.5 GiB" free="231.2 GiB" free_swap="8.0 GiB"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.096Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="24.3 GiB" memory.required.partial="24.3 GiB" memory.required.kv="3.5 GiB" memory.required.allocations="[24.3 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="2.1 GiB" memory.graph.partial="2.2 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.152Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 --ctx-size 32768 --batch-size 512 --n-gpu-layers 63 --threads 24 --parallel 2 --port 39751"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.153Z level=INFO source=sched.go:472 msg="loaded runners" count=2
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.153Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.153Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
May 23 08:51:30 ai-machine ollama[1640479]: decode: cannot decode batches with this context (use llama_encode() instead)
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.164Z level=INFO source=runner.go:836 msg="starting ollama engine"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.164Z level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:39751"
May 23 08:51:30 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:30 | 200 | 13.324016531s |       10.1.0.10 | POST     "/api/embeddings"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.219Z level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
May 23 08:51:30 ai-machine ollama[1640479]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
May 23 08:51:30 ai-machine ollama[1640479]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
May 23 08:51:30 ai-machine ollama[1640479]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
May 23 08:51:30 ai-machine ollama[1640479]: ggml_cuda_init: found 1 CUDA devices:
May 23 08:51:30 ai-machine ollama[1640479]:   Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
May 23 08:51:30 ai-machine ollama[1640479]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.289Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.386Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="16.2 GiB"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.386Z level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="1.1 GiB"
May 23 08:51:30 ai-machine ollama[1640479]: time=2025-05-23T08:51:30.404Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
May 23 08:51:31 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:31 | 200 |      24.296µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:31 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:31 | 200 |      27.361µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:32 ai-machine ollama[1640479]: time=2025-05-23T08:51:32.685Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="2.1 GiB"
May 23 08:51:32 ai-machine ollama[1640479]: time=2025-05-23T08:51:32.686Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
May 23 08:51:32 ai-machine ollama[1640479]: time=2025-05-23T08:51:32.911Z level=INFO source=server.go:630 msg="llama runner started in 2.76 seconds"
May 23 08:51:33 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:33 | 200 |      22.412µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:33 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:33 | 200 |      24.626µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:35 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:35 | 200 |       26.54µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:35 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:35 | 200 |      32.933µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:37 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:37 | 200 |      15.559µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:37 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:37 | 200 |      28.364µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:39 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:39 | 200 |      23.174µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:39 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:39 | 200 |      25.278µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:41 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:41 | 200 |      24.165µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:41 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:41 | 200 |      19.196µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:42 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:42 | 200 | 25.528164429s |       10.1.0.10 | POST     "/api/chat"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.515Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-68a32fe6-1c1a-9be1-5097-98aeb59acf22 library=cuda total="23.5 GiB" available="16.3 GiB"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.515Z level=INFO source=sched.go:537 msg="updated VRAM based on existing loaded models" gpu=GPU-31d8a8db-e31e-dcbd-54b4-c5768035dd06 library=cuda total="31.4 GiB" available="30.9 GiB"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.516Z level=INFO source=sched.go:777 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 gpu=GPU-31d8a8db-e31e-dcbd-54b4-c5768035dd06 parallel=2 available=33150730240 required="22.1 GiB"
May 23 08:51:43 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:43 | 200 |      26.409µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:43 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:43 | 200 |      18.204µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.801Z level=INFO source=server.go:135 msg="system memory" total="251.5 GiB" free="231.2 GiB" free_swap="8.0 GiB"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.803Z level=INFO source=server.go:168 msg=offload library=cuda layers.requested=-1 layers.model=63 layers.offload=63 layers.split="" memory.available="[30.9 GiB]" memory.gpu_overhead="0 B" memory.required.full="22.1 GiB" memory.required.partial="22.1 GiB" memory.required.kv="2.3 GiB" memory.required.allocations="[22.1 GiB]" memory.weights.total="15.4 GiB" memory.weights.repeating="14.3 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.6 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.859Z level=INFO source=server.go:431 msg="starting llama server" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama/models/blobs/sha256-e796792eba26c4d3b04b0ac5adb01a453dd9ec2dfd83b6c59cbf6fe5f30b0f68 --ctx-size 16384 --batch-size 512 --n-gpu-layers 63 --threads 24 --parallel 2 --port 39857"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.860Z level=INFO source=sched.go:472 msg="loaded runners" count=2
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.860Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.860Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.871Z level=INFO source=runner.go:836 msg="starting ollama engine"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.872Z level=INFO source=runner.go:899 msg="Server listening on 127.0.0.1:39857"
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.926Z level=INFO source=ggml.go:73 msg="" architecture=gemma3 file_type=Q4_K_M name="" description="" num_tensors=1247 num_key_values=37
May 23 08:51:43 ai-machine ollama[1640479]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-haswell.so
May 23 08:51:43 ai-machine ollama[1640479]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
May 23 08:51:43 ai-machine ollama[1640479]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
May 23 08:51:43 ai-machine ollama[1640479]: ggml_cuda_init: found 1 CUDA devices:
May 23 08:51:43 ai-machine ollama[1640479]:   Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
May 23 08:51:43 ai-machine ollama[1640479]: load_backend: loaded CUDA backend from /usr/local/lib/ollama/cuda_v12/libggml-cuda.so
May 23 08:51:43 ai-machine ollama[1640479]: time=2025-05-23T08:51:43.996Z level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)
May 23 08:51:44 ai-machine ollama[1640479]: time=2025-05-23T08:51:44.094Z level=INFO source=ggml.go:299 msg="model weights" buffer=CUDA0 size="16.2 GiB"
May 23 08:51:44 ai-machine ollama[1640479]: time=2025-05-23T08:51:44.094Z level=INFO source=ggml.go:299 msg="model weights" buffer=CPU size="1.1 GiB"
May 23 08:51:44 ai-machine ollama[1640479]: time=2025-05-23T08:51:44.111Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
May 23 08:51:45 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:45 | 200 |      30.507µs |       127.0.0.1 | HEAD     "/"
May 23 08:51:45 ai-machine ollama[1640479]: [GIN] 2025/05/23 - 08:51:45 | 200 |      41.548µs |       127.0.0.1 | GET      "/api/ps"
May 23 08:51:46 ai-machine ollama[1640479]: time=2025-05-23T08:51:46.366Z level=INFO source=ggml.go:556 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="1.1 GiB"
May 23 08:51:46 ai-machine ollama[1640479]: time=2025-05-23T08:51:46.366Z level=INFO source=ggml.go:556 msg="compute graph" backend=CPU buffer_type=CPU size="10.5 MiB"
May 23 08:51:46 ai-machine ollama[1640479]: time=2025-05-23T08:51:46.620Z level=INFO source=server.go:630 msg="llama runner started in 2.76 seconds"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.7.0

GiteaMirror added the bug label 2026-04-12 19:05:57 -05:00

@XENE1 commented on GitHub (May 24, 2025):

Same issue with mistral-small:22b on RAGFlow.

It takes a long time to unload the model during RAPTOR, and it gets stuck in stopping when extracting the knowledge graph.

@arturo-air commented on GitHub (May 27, 2025):

I have found a similar problem in other issues that was fixed in #10487, a race condition.

I wanted to check whether a race condition could still be present, so I set OLLAMA_NUM_PARALLEL=1 to prevent a model from serving more than one request at the same time.

Then I sent 80 files to embed through RAGFlow (a quick reminder: the embedding model itself works, but RAGFlow also uses the chat model for RAPTOR while embedding). The result: all files embedded correctly.

So my early conclusion is that the race condition is still present, or maybe there is another one...
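
For reference, on a systemd-managed install the variable can be set roughly like this (a sketch assuming the standard ollama.service unit; adjust for your setup):

sudo systemctl edit ollama
# in the drop-in override, add:
#   [Service]
#   Environment="OLLAMA_NUM_PARALLEL=1"
sudo systemctl restart ollama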

@arturo-air commented on GitHub (May 27, 2025):

I have not been able to reproduce the problem manually; it only happens when I run RAGFlow against Ollama. But here is my guess:

RAGFlow sets a really small keep_alive (around 10 s), and I am trying to embed a bunch of files at the same time (so RAGFlow is calling /api/chat many times to summarize with RAPTOR). In one of those calls the 10 s expire, and the race condition is triggered.

I see that in sched.go you do all the mutex handling, and I have been debugging it locally. I think the problem is around lines L150-L157, because needsReload returns false but then useLoadedRunner cannot use the already loaded runner. But as I said, I have not been able to reproduce the error manually.

Edit:
I used a MITM proxy to inspect the requests. RAGFlow sends "keep_alive": null and different values of "num_ctx" (8192 here), which is why I always see Stopping...: the model needs to be reloaded because of the new num_ctx. This is an example curl:

curl -X POST http://localhost:11434/api/chat \
  -H 'Host: localhost:11434' --compressed \
  -H 'Connection: keep-alive' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{"model": "gemma3:1b", "messages": [{"role": "system", "content": "You'"'"'re a helpful assistant."}, {"role": "user", "content": "Please summarize the following paragraphs..."}], "stream": false, "format": "", "options": {"num_ctx": 8192, "temperature": 0.3}, "keep_alive": null}'
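
A possible client-side mitigation (a sketch based on the observation above, not a confirmed fix, and RAGFlow may not expose these parameters) is to send an explicit keep_alive and keep num_ctx identical across requests, so the already loaded runner can be reused instead of reloaded:

curl -X POST http://localhost:11434/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"model": "gemma3:1b", "messages": [{"role": "user", "content": "Please summarize the following paragraphs..."}], "stream": false, "options": {"num_ctx": 8192, "temperature": 0.3}, "keep_alive": "10m"}'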

@akshaybabloo commented on GitHub (Jun 29, 2025):

I seem to have the same issue with gemma3n:latest. I am on version 0.9.3.

NAME              ID              SIZE      PROCESSOR    UNTIL
gemma3n:latest    e8975a94482c    5.2 GB    100% CPU     Stopping...

@SamInTheShell commented on GitHub (Aug 27, 2025):

Working on some custom code tonight and ran into this issue with qwen3:4b.

What I was specifically testing was having qwen3:4b flood the context window: I demanded a 900,000-word story and had to explain the context problem and that we're debugging the client.

While it was generating, I had a loop going to monitor the stop time.
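
Something along these lines is enough to reproduce that monitoring (a sketch; the exact command is an assumption):

while true; do ollama ps; sleep 5; done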

To be absolutely clear, each of these is 5 seconds apart and it's just ollama ps. The model is still generating text in a second terminal all the way to the point of Stopping....

NAME        ID              SIZE     PROCESSOR    UNTIL               
qwen3:4b    e55aed6fe643    35 GB    100% GPU     21 seconds from now    


NAME        ID              SIZE     PROCESSOR    UNTIL               
qwen3:4b    e55aed6fe643    35 GB    100% GPU     16 seconds from now    


NAME        ID              SIZE     PROCESSOR    UNTIL               
qwen3:4b    e55aed6fe643    35 GB    100% GPU     11 seconds from now    


NAME        ID              SIZE     PROCESSOR    UNTIL              
qwen3:4b    e55aed6fe643    35 GB    100% GPU     6 seconds from now    


NAME        ID              SIZE     PROCESSOR    UNTIL             
qwen3:4b    e55aed6fe643    35 GB    100% GPU     1 second from now    


NAME        ID              SIZE     PROCESSOR    UNTIL       
qwen3:4b    e55aed6fe643    35 GB    100% GPU     Stopping...    


NAME        ID              SIZE     PROCESSOR    UNTIL       
qwen3:4b    e55aed6fe643    35 GB    100% GPU     Stopping...    


NAME        ID              SIZE     PROCESSOR    UNTIL       
qwen3:4b    e55aed6fe643    35 GB    100% GPU     Stopping...    


NAME        ID              SIZE     PROCESSOR    UNTIL       
qwen3:4b    e55aed6fe643    35 GB    100% GPU     Stopping...    

In this debugging session, I'm just using ollama run qwen3:4b.

This is on version 0.11.7, macOS.


Edit: Additional observations.

Using the Ollama CLI, I don't end up stuck in "Stopping...": if I kill the CLI client while it's generating and see Stopping... in ollama ps, it goes back to showing 4-5 minutes remaining.

However, in my custom code, I realized I never cancel my HTTP client if the websocket to the end user gets disconnected.

It takes forever for any other request to get a response when I have a long-running generation going and the model is showing the "Stopping..." status.

Reference: github-starred/ollama#7112