[GH-ISSUE #8969] Models can't be stopped correctly when using Webui combine with Ollama. #31581

Closed
opened 2026-04-22 12:09:59 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @ILoveWhatILoss on GitHub (Feb 9, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8969

What is the issue?

Confirmed that the model stops outputting text, but Ollama keeps running with no output. It can't be stopped, so I can't switch to another model or keep using this one. It stays stuck forever unless I force quit and restart.

Relevant log output

```shell
NAME                                    ID              SIZE     PROCESSOR    UNTIL
hf.co/xwen-team/Xwen-72B-Chat:latest    ffc000f9c47a    48 GB    100% GPU     Stopping...
(base) cheziming@chezimingdeMini ~ % ollama stop hf.co/xwen-team/Xwen-72B-Chat:latest
```

It's like this: the stop command can't stop the running model, and it stays stuck forever on the first two lines unless I force quit.

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2026-04-22 12:09:59 -05:00
Author
Owner

@lynn158 commented on GitHub (Feb 12, 2025):

Same here.

```shell
NAME               ID              SIZE     PROCESSOR    UNTIL
deepseek-r1:32b    38056bbcbb2d    23 GB    100% GPU     Stopping...
```

OS
Windows

Author
Owner

@jossalgon commented on GitHub (Apr 5, 2025):

Same here with macOS — have you been able to fix it?

Author
Owner

@wnowicki commented on GitHub (Apr 8, 2025):

Same issue on a Raspberry Pi 5 with WebUI running in Docker. Additionally, the model doesn't stop itself after 5 minutes of idle; the countdown keeps restarting.

```shell
wojtek@raspi-lab:~ $ ollama stop gemma3:12b
wojtek@raspi-lab:~ $ ollama ps
NAME          ID              SIZE     PROCESSOR    UNTIL
gemma3:12b    f4031aab637d    11 GB    100% CPU     Stopping...
```
Author
Owner

@dhiltgen commented on GitHub (Apr 9, 2025):

There seems to be a race somewhere in the scheduler under heavy load, possibly related to clients closing connections prematurely. If people are still seeing models get stuck in a "Stopping..." state in the `ollama ps` output and the model never actually unloads, please try running the server with `OLLAMA_DEBUG=1` and share the logs, including the model load and the eventual stuck state.
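
For reference, a minimal way to capture such a log on macOS or Linux (assuming Ollama is started from the CLI rather than the menu-bar app; the log filename is just an example) might be:

```shell
# Quit any background Ollama instance first (macOS app users should
# quit the app), then start the server in the foreground with debug
# logging enabled. All server output — including the model load and
# any stuck "Stopping..." state — is captured to ollama-debug.log.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```

Then reproduce the hang through the WebUI while this is running and attach the resulting log file.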

