[GH-ISSUE #384] Can we stop the model response? #172

Closed
opened 2026-04-12 09:42:12 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @technoplato on GitHub (Aug 18, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/384

I must be missing this in the docs.

GiteaMirror added the question label 2026-04-12 09:42:12 -05:00
Author
Owner

@technovangelist commented on GitHub (Aug 21, 2023):

Are you asking how to stop the model responding after it has started? Pressing CTRL-C should always stop it.

Author
Owner

@technoplato commented on GitHub (Aug 21, 2023):

> Are you asking how to stop the model responding after it has started? Pressing CTRL-C should always stop it.

I guess I was expecting not to have to run Ollama again after pressing Ctrl-C.

Ctrl-C quits the whole program. I should have worded my original query better: I'm looking for a way to interrupt the model's response while keeping Ollama running.
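The behavior being asked for could be sketched like this (a hypothetical Python sketch, not Ollama's actual CLI code): catch the `KeyboardInterrupt` that Ctrl-C raises while tokens are streaming, abandon the stream, and return control to the surrounding prompt loop instead of exiting.

```python
import time

def fake_stream():
    # Stand-in for a model's token stream (hypothetical).
    for tok in ["Hello", ",", " ", "world", "!"] * 50:
        yield tok
        time.sleep(0.001)

def generate(stream):
    # Print tokens until the stream ends or the user presses Ctrl-C.
    # Catching KeyboardInterrupt here stops the response without
    # killing the surrounding REPL.
    try:
        for tok in stream:
            print(tok, end="", flush=True)
    except KeyboardInterrupt:
        print("\n[response interrupted]")
    else:
        print()
```

In this sketch, a Ctrl-C pressed at the prompt itself (outside `generate`) would still exit the program as usual, giving two distinct behaviors for the same key.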

Author
Owner

@mchiang0610 commented on GitHub (Aug 22, 2023):

@technoplato Totally understand. Sorry about that. The current workaround is for us to keep the model in memory for 5 minutes before clearing it, so if you quit and then run ollama again with the same model, it'll still be fast.

Thanks for sending this in! There is so much to improve on the CLI as we iterate on this.
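The five-minute keep-alive described above can be pictured as an idle-timeout cache (an illustrative sketch under assumed semantics, not Ollama's actual implementation): the loaded model stays resident, every use resets the idle clock, and the weights are only reloaded once the model has sat idle longer than the timeout.

```python
import time

class ModelCache:
    # Sketch of keep-alive behavior: the loaded model stays in
    # memory and is reloaded only after sitting idle longer than
    # `keep_alive` seconds.
    def __init__(self, keep_alive=300.0):  # 5 minutes by default
        self.keep_alive = keep_alive
        self.model = None
        self.last_used = 0.0

    def get(self, load):
        now = time.monotonic()
        if self.model is None or now - self.last_used > self.keep_alive:
            self.model = load()   # slow path: (re)load the weights
        self.last_used = now      # any use resets the idle clock
        return self.model
```

With this scheme, quitting the CLI and rerunning it within the keep-alive window hits the fast path, which is the effect the comment describes.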

Author
Owner

@technoplato commented on GitHub (Aug 22, 2023):

Awesome compromise. No worries at all, just wanted to make sure I wasn't missing anything. Great project, would recommend.


Author
Owner

@KingMob commented on GitHub (Sep 2, 2023):

I was also searching for this, and not expecting Ctrl+C to kill the whole program.

Maybe have Ctrl+C stop the output while a response is streaming, and kill the program only when pressed at the prompt? Or have Ctrl+D kill the output?


Reference: github-starred/ollama#172