[GH-ISSUE #5270] How to interrupt streaming output via request? #3300

Closed
opened 2026-04-12 13:51:46 -05:00 by GiteaMirror · 15 comments
Owner

Originally created by @wltime on GitHub (Jun 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5270

When using ollama for streaming dialogue in the terminal, I can stop the output using CTRL+C, how do I interrupt ollama's output if I use the request?

curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?",
"stream": true
}'

GiteaMirror added the question label 2026-04-12 13:51:46 -05:00

@pdevine commented on GitHub (Jul 8, 2024):

CTRL+C should work in that case too:

pdevine@MacBook-Pro-5 ollama % curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?",
"stream": true
}'
{"model":"llama3","created_at":"2024-07-08T23:21:07.755286Z","response":"The","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.772244Z","response":" sky","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.788902Z","response":" appears","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.805707Z","response":" blue","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.82271Z","response":" because","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.839931Z","response":" of","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.857103Z","response":" a","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.874151Z","response":" phenomenon","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.891806Z","response":" called","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.908785Z","response":" Ray","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.926127Z","response":"leigh","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.943237Z","response":" scattering","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.96033Z","response":",","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.977535Z","response":" which","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:07.994651Z","response":" is","done":false}
{"model":"llama3","created_at":"2024-07-08T23:21:08.012004Z","response":" the","done":false}
^C

This is on macOS, but it should work the same regardless of your OS. I'm going to close the issue, but feel free to keep commenting, and if there's something I missed we can reopen.


@wltime commented on GitHub (Jul 9, 2024):

I would like to know if there is an API to interrupt the output of the model.


@kevin-stableedge commented on GitHub (Jul 19, 2024):

Looking for something similar. Seeing a lot of other posts recommending just canceling the stream client-side, but I want Ollama to stop its inference.

But I can't find anything in the API docs about how to achieve this.


@wes-kay commented on GitHub (Nov 24, 2024):

> CTRL+C should work in that case too:
>
> pdevine@MacBook-Pro-5 ollama % curl http://localhost:11434/api/generate -d '{
> "model": "llama3",
> "prompt": "Why is the sky blue?",
> "stream": true
> }'
> {"model":"llama3","created_at":"2024-07-08T23:21:07.755286Z","response":"The","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.772244Z","response":" sky","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.788902Z","response":" appears","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.805707Z","response":" blue","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.82271Z","response":" because","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.839931Z","response":" of","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.857103Z","response":" a","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.874151Z","response":" phenomenon","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.891806Z","response":" called","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.908785Z","response":" Ray","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.926127Z","response":"leigh","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.943237Z","response":" scattering","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.96033Z","response":",","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.977535Z","response":" which","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:07.994651Z","response":" is","done":false}
> {"model":"llama3","created_at":"2024-07-08T23:21:08.012004Z","response":" the","done":false}
> ^C
>
> This is on macOS, but it should work the same regardless of your OS. I'm going to close the issue, but feel free to keep commenting, and if there's something I missed we can reopen.

This isn't canceling the request through the API; it's terminating the client-side process. This issue shouldn't have been closed; we're looking for a way to cancel the stream so we can start a new one.


@pdevine commented on GitHub (Jan 18, 2025):

@wes-kay hanging up on the stream will stop inference. The model stays loaded in memory of course until you unload it or it times out, but there is no more inference running once you hang up on the stream.

If you watch the GPU with something like asitop on macOS you should be able to see the GPU immediately stop processing data. LMK if I'm missing something here. Are you looking to stop a different user's API call?


@wes-kay commented on GitHub (Jan 30, 2025):

> @wes-kay hanging up on the stream will stop inference. The model stays loaded in memory of course until you unload it or it times out, but there is no more inference running once you hang up on the stream.
>
> If you watch the GPU with something like asitop on macOS you should be able to see the GPU immediately stop processing data. LMK if I'm missing something here. Are you looking to stop a different user's API call?

The entire point of this issue is to be able to interrupt through code so you can start and stop when you want. I don't think this issue should be closed, as there's still no solution.

CTRL+C sends an interrupt in the terminal that terminates the process. OP, myself, and the three other people who gave the 👍 are looking for a way to stop the stream so we can start another API generate call.


@davidmorrill commented on GitHub (Feb 6, 2025):

I think there was a bit of a disconnect between @pdevine's answer and @wes-kay's question. I stumbled across this thread because I had the exact same problem of wanting to programmatically cancel a streaming response from within my web browser-based client. I think the misunderstanding is in the interpretation of what was meant by "hanging up on the stream".

After reading it a couple of times, I realized that what he probably meant was "cancel the ReadableStream being used to transfer the Ollama responses to the receiving client". At least that's how it translates in the case of a web browser client. It took me a bit of time to sort out the details, since I'm using a Web Worker thread to handle the communication with the Ollama server, which the web worker then uses to post progress messages back to the main browser UI thread.

But once I did, I was able to add a button in my client UI which causes the Web Worker to cancel the ReadableStream it uses to funnel the individual Ollama responses back to the UI, which instantly shuts down any further messages from Ollama. I can then type in a new prompt, submit that to Ollama, and start receiving responses for the new prompt.

So, while there's no specific Ollama REST API function to cancel an existing stream, it's certainly possible to terminate the stream using other APIs (in my case, by canceling the web-standard ReadableStream used to transport the Ollama responses to the client).

I hope this helps clarify how to go about doing this without having to terminate the Ollama client.


@wes-kay commented on GitHub (Feb 6, 2025):

> After reading it a couple of times, I realized that what he probably meant was "cancel the ReadableStream being used to transfer the Ollama responses to the receiving client". At least that's how it translates in the case of a web browser client. It took me a bit of time to sort out the details, since I'm using a Web Worker thread to handle the communication with the Ollama server, which the web worker then uses to post progress messages back to the main browser UI thread.

My issue is that there's currently no way to stop the API from generating. If I need to cancel the context and restart with a new prompt, I can't, and have to wait for it to finish before I can send another request.

What is the solution in your case? Does it just restart the client?


@pdevine commented on GitHub (Feb 6, 2025):

> My issue is that there's currently no way to stop the API from generating. If I need to cancel the context and restart with a new prompt, I can't, and have to wait for it to finish before I can send another request.

The way we handle this in the Ollama client in cmd.go is:

...
        cancelCtx, cancel := context.WithCancel(cmd.Context())
        defer cancel()

        sigChan := make(chan os.Signal, 1)
        signal.Notify(sigChan, syscall.SIGINT)

        go func() {
                <-sigChan
                cancel()
        }()
...

        if err := client.Chat(cancelCtx, req, fn); err != nil {
                if errors.Is(err, context.Canceled) {
                        return nil, nil
                }
                return nil, err
        }

This just says that if you hit Ctrl-C (SIGINT), it will trigger cancel() to be called, and an error gets returned from client.Chat() that we can trap (context.Canceled). Ultimately client.Chat() (which is in api/client.go) calls c.http.Do(request); the cancel will propagate to here and hang up the stream, which will immediately stop inference on the ollama server. I know this is in Go, but it should be possible to replicate in other languages as well.

I was mulling over what you had mentioned before and was trying to determine if you were building a multi-user application where you needed to hang up on someone else's stream. That's possible right now too, but definitely trickier to implement.


@pdevine commented on GitHub (Feb 6, 2025):

I just realized we also have some examples (https://github.com/ollama/ollama-js/blob/main/examples/abort) in the ollama javascript client (https://github.com/ollama/ollama-js) which show how to abort a stream.


@wes-kay commented on GitHub (Feb 6, 2025):

> > My issue is that there's currently no way to stop the API from generating. If I need to cancel the context and restart with a new prompt, I can't, and have to wait for it to finish before I can send another request.
>
> The way we handle this in the Ollama client in cmd.go is:
>
> ...
>         cancelCtx, cancel := context.WithCancel(cmd.Context())
>         defer cancel()
>
>         sigChan := make(chan os.Signal, 1)
>         signal.Notify(sigChan, syscall.SIGINT)
>
>         go func() {
>                 <-sigChan
>                 cancel()
>         }()
> ...
>
>         if err := client.Chat(cancelCtx, req, fn); err != nil {
>                 if errors.Is(err, context.Canceled) {
>                         return nil, nil
>                 }
>                 return nil, err
>         }
>
> This just says that if you hit Ctrl-C (SIGINT), it will trigger cancel() to be called, and an error gets returned from client.Chat() that we can trap (context.Canceled). Ultimately client.Chat() (which is in api/client.go) calls c.http.Do(request); the cancel will propagate to here and hang up the stream, which will immediately stop inference on the ollama server. I know this is in Go, but it should be possible to replicate in other languages as well.
>
> I was mulling over what you had mentioned before and was trying to determine if you were building a multi-user application where you needed to hang up on someone else's stream. That's possible right now too, but definitely trickier to implement.

I'm specifically talking about the CLI API, and since OP is using curl, he most likely would be too.

In terms of your code, you're just canceling on SIGINT, which (as you mentioned) does the same thing as the interrupt; that's not what we're looking for. We don't have access to the same context.

We need the ability to cancel an inference with the API.


@wes-kay commented on GitHub (Feb 6, 2025):

> I just realized we also have some examples (https://github.com/ollama/ollama-js/blob/main/examples/abort) in the ollama javascript client (https://github.com/ollama/ollama-js) which show how to abort a stream.

Almost, but: https://github.com/ollama/ollama-js/blob/main/src/browser.ts#L51

request.abort() is a method used to cancel an ongoing HTTP request in JavaScript. It is commonly available in XMLHttpRequest (XHR) and the Fetch API (via AbortController).

Not available to the CLI.


@pdevine commented on GitHub (Feb 7, 2025):

@wes-kay I'm not sure what your operating system is, but on Linux/macOS you can use kill -SIGINT <pid> to send an interrupt signal to a process. That's the equivalent of pressing Ctrl-C.


@wes-kay commented on GitHub (Feb 7, 2025):

> @wes-kay I'm not sure what your operating system is, but on Linux/macOS you can use kill -SIGINT <pid> to send an interrupt signal to a process. That's the equivalent of pressing Ctrl-C.

But that's killing the process, which is something we're not looking to do; we just want to stop inference and start a new one with the same stream.


@pdevine commented on GitHub (Feb 7, 2025):

> But that's killing the process, which is something we're not looking to do; we just want to stop inference and start a new one with the same stream.

The API doesn't work that way; the stream only stays open for a single request. To generate a new chat completion you'll have to open a new connection and make a new request.

Reference: github-starred/ollama#3300