[GH-ISSUE #5272] keep_alive and OLLAMA_KEEP_ALIVE not effective #65339

Closed
opened 2026-05-03 20:41:32 -05:00 by GiteaMirror · 20 comments

Originally created by @peanutfs on GitHub (Jun 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5272

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

  • After Ollama starts the qwen2-72b model, if there is no interaction for about 5 minutes, the GPU memory is automatically released and the model's runner process exits.
  • I want the model to stay loaded, so I tried setting OLLAMA_KEEP_ALIVE=-1 in ollama.service, and also setting keep_alive=-1 when calling the interface (see the sketch after this list). However, it does not take effect. I also tried setting keep_alive=24h with ollama run qwen2:72b --keepalive 24h, but it didn't work either.
  • I used nvidia-smi to check, and there were no running processes.
  • The graphics cards are NVIDIA GeForce RTX 3090 24 GB * 8
  • CUDA Version: 12.5
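
For reference, the native API spells the field keep_alive (with an underscore); a minimal sketch of pinning it per request, assuming a server on the default port:

curl http://localhost:11434/api/generate -d '{
  "model": "qwen2:72b",
  "prompt": "hi",
  "keep_alive": -1
}'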

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.44

GiteaMirror added the bug label 2026-05-03 20:41:32 -05:00

@lstep commented on GitHub (Jun 25, 2024):

I tried with the command line parameter:

$ ollama run moondream:1.8b-v2-fp16 --keepalive=24h
[...]
$ ollama ps
NAME                    ID              SIZE    PROCESSOR       UNTIL             
moondream:1.8b-v2-fp16  8921b6c3990a    4.7 GB  100% GPU        24 hours from now

I get the correct behaviour.
I don't think the model itself could be the cause (qwen2-72b; not enough RAM on my system to try it). The version you're using is not the latest, but I don't think there was any modification in that part of the code.

It also works for me with the global environment variable OLLAMA_KEEP_ALIVE. On Ubuntu Linux:
I created /etc/systemd/system/ollama.service.d/override.conf:

Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_MODELS=/home/myusername/workspace/Apps/OLLAMA/models"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KEEP_ALIVE=-1"
User=myusername
Group=myusername

Then run systemctl daemon-reload and restart the Ollama server with systemctl restart ollama. Try to load a model (for example, ollama run deepseek-coder-v2:16b-lite-instruct-q8_0).
And to check that it is loaded "forever", use ollama ps, which should show UNTIL Forever:

$ ollama ps
NAME                                            ID              SIZE    PROCESSOR       UNTIL   
deepseek-coder-v2:16b-lite-instruct-q8_0        44250301ba51    19 GB   100% GPU        Forever

Maybe that's because you edited ollama.service itself and not an override, so when you upgraded, ollama.service was reset to the default one?
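
A safer way to manage the drop-in, sketched here assuming a standard systemd install, is to let systemctl create the override (note that the drop-in needs a [Service] section header, as discovered further down this thread) and then verify what the unit actually picked up:

sudo systemctl edit ollama                     # opens the override.conf drop-in in an editor
sudo systemctl daemon-reload
sudo systemctl restart ollama
systemctl show ollama --property=Environment   # confirm OLLAMA_KEEP_ALIVE is listed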


@peanutfs commented on GitHub (Jun 25, 2024):

Thank you for your reply. When I run the model on my machine, I set keep_alive. If I then call the model's interface remotely without setting keep_alive, the previously set keep_alive becomes invalid. Is this behavior normal?


@peanutfs commented on GitHub (Jun 25, 2024):

I just tested it, and I'm sorry for the earlier feedback that keep_alive doesn't work. ollama run qwen2:72b --keepalive 24h is effective, but as I said above, when I call the interface remotely, UNTIL changes from 24 hours from now to 4 minutes from now. Is this normal?


@lstep commented on GitHub (Jun 25, 2024):

I just tested it, and I'm sorry for the earlier feedback that keep_alive doesn't work. ollama run qwen2:72b --keepalive 24h is effective, but as I said above, when I call the interface remotely, UNTIL changes from 24 hours from now to 4 minutes from now. Is this normal?

What do you mean by "calling remotely" exactly? Using the ollama-webui?


@peanutfs commented on GitHub (Jun 26, 2024):

My Ollama host is 0.0.0.0, port 11434; I then use a domain name to forward requests to port 11434 of the Ollama server. The situation above occurs when calling via the domain name.
I make requests through the OpenAI API, with keep_alive not set. I tested that if I use ollama run qwen2:72b --keepalive 24h, then after such a call UNTIL becomes 4 minutes from now. When using OLLAMA_KEEP_ALIVE=-1, the call behaves normally and UNTIL stays Forever.
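
For context, this split matches the two API shapes: the OpenAI-compatible endpoint has no keep_alive field in its schema, so those requests appear to fall back to the server default (OLLAMA_KEEP_ALIVE, or ~5 minutes if unset), while the native endpoint can pin it per request. A minimal sketch, assuming the default port:

curl http://localhost:11434/v1/chat/completions -d '{
  "model": "qwen2:72b",
  "messages": [{"role": "user", "content": "hi"}]
}'

curl http://localhost:11434/api/chat -d '{
  "model": "qwen2:72b",
  "messages": [{"role": "user", "content": "hi"}],
  "keep_alive": -1
}'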


@dhiltgen commented on GitHub (Jul 2, 2024):

I think there is a bug here. The intent of this setting is that, if it is unset by the client, any prior setting should be respected, but the following shows it reset to the default ~5 minute value:

% ollama run llama3 --keepalive 1h hi
Hi! It's nice to meet you. Is there something I can help you with or would you like to chat?

% ollama ps
NAME         	ID          	SIZE  	PROCESSOR	UNTIL
llama3:latest	365c0bd3c000	6.7 GB	100% GPU 	59 minutes from now
% curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hi",
  "stream": false
}' > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   592  100   532  100    60    787     88 --:--:-- --:--:-- --:--:--   877
% ollama ps
NAME         	ID          	SIZE  	PROCESSOR	UNTIL
llama3:latest	365c0bd3c000	6.7 GB	100% GPU 	4 minutes from now

@amirvenus commented on GitHub (Oct 13, 2024):

How can I use it with serve? I currently do this:

OLLAMA_HOST=0.0.0.0 ollama serve


@dhiltgen commented on GitHub (Oct 14, 2024):

@amirvenus something like this would work:

OLLAMA_HOST=0.0.0.0 OLLAM_KEEP_ALIVE=1h ollama serve

@xgdgsc commented on GitHub (Nov 21, 2024):

[screenshot: https://github.com/user-attachments/assets/92ed891c-ba87-458a-a4d2-9add8603e1f7]

Why doesn't it keep the model in memory with this serve? From the screenshot you can see that OLLAMA_KEEP_ALIVE is set to -1. I use it from the Continue extension in VS Code.


@dhiltgen commented on GitHub (Nov 21, 2024):

@xgdgsc it works for me, so I'm not sure what's going on.

OLLAMA_KEEP_ALIVE=-1 ollama serve
% ollama run orca-mini hello
 Hello there! How can I assist you today?
% ollama ps
NAME                ID              SIZE      PROCESSOR    UNTIL
orca-mini:latest    2dbd9f439647    5.9 GB    100% GPU     Forever

My best guess is your client is setting it to something. The server setting establishes the default, but a client can still request a different value.
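
One way to see this precedence in action, as a sketch (model name arbitrary): start the server with OLLAMA_KEEP_ALIVE=-1, then send a request that pins its own shorter value:

curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini",
  "prompt": "hi",
  "keep_alive": "5m"
}'
ollama ps    # UNTIL now shows ~5 minutes from now, not Forever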


@Zeeeeta commented on GitHub (Nov 25, 2024):

Same issue here. I use curl to set a particular model's keep_alive to -1, but after a call from LlamaIndex (without setting any keep_alive), it is reset to expire after 5 minutes.

But when called from Open WebUI, the Forever is retained; I am not sure whether Open WebUI first queries the existing setting and applies it to its call, or what.

I am not using the env var OLLAMA_KEEP_ALIVE because I don't want Forever to apply to all models.
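
To debug which client is resetting it, the server exposes the same data as ollama ps over HTTP; a quick check after each client call (default port assumed; the expires_at field should show the current deadline per model):

curl http://localhost:11434/api/ps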


@ankos792 commented on GitHub (Dec 28, 2024):

I have the same problem. It used to work, but now with systemctl edit ollama and these settings:
Environment="OLLAMA_NUM_PARALLEL=6"
Environment="OLLAMA_MAX_LOADED_MODELS=3"
Environment="OLLAMA_MAX_QUEUE=256"
Environment="OLLAMA_KEEP_ALIVE=-1"

Environment="OLLAMA_HOST=0.0.0.0"

these are not effective, even though they are in the override.conf file. The log shows:

/etc/systemd/system/ollama.service.d/override.conf:1: Assignment outside of section. Ignoring.

EDIT: Solved my issue, variables started to work after adding [Service] above them.
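
For anyone hitting the same warning, the fixed drop-in would look like this (same variables as above, with the missing section header added):

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_NUM_PARALLEL=6"
Environment="OLLAMA_MAX_LOADED_MODELS=3"
Environment="OLLAMA_MAX_QUEUE=256"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_HOST=0.0.0.0"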


@MiMoHo commented on GitHub (Dec 29, 2024):

I'm facing the 5-minute timeout issue on an n8n setup serving my Supabase data ingest, with Ollama running from Docker Desktop on an Apple Silicon Mac. I create the Ollama container in Docker with the variable OLLAMA_KEEP_ALIVE set to 24h in the Environment variables.

@dhiltgen You have an unfavorable typo above (OLLAM_KEEP_ALIVE). Even when setting the correctly spelled variable OLLAMA_KEEP_ALIVE, the text-ingest task to my database is aborted when it exceeds 300 seconds. I couldn't set yet another keep_alive parameter in the run command for non-generative embedding models.

People, please mind writing keep_alive with the underscore, as it seems to be relevant according to the documentation (https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately). The documentation should also briefly cover Docker Desktop setups, since you don't use curl commands there.

Why is there actually a default unload timeout when you can stop a running model at any time? When I run out of RAM, I know I just have to quit Ollama; otherwise, I want to keep using the running model. A 5-minute timeout is unexpected and unhelpful in regular usage!

Maybe we should reopen this issue as the title applies also to other setups that are apparently still not working.


@DY-ATL commented on GitHub (Feb 8, 2025):

How do I make OLLAMA_KEEP_ALIVE effective in the official Ollama Docker image? I cannot restart Ollama via systemctl restart ollama because there is no systemctl.
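
In Docker, the equivalent of the systemd override is an environment flag at container creation; a sketch assuming the official image, GPU passthrough, and the default port (container and volume names arbitrary):

docker run -d --gpus=all -e OLLAMA_KEEP_ALIVE=-1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama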


@DY-ATL commented on GitHub (Feb 8, 2025):

I had the same problem as https://github.com/ollama/ollama/issues/5272#issuecomment-2188709424. If I run ollama run deepseek-r1:671b, then the UNTIL is as expected. However, if I use the Chrome extension Page Assist, the UNTIL is still the default 5 min.


@SuperJunier666 commented on GitHub (Feb 8, 2025):

I had the same problem as #5272 (comment). If I run ollama run deepseek-r1:671b, then the UNTIL is as expected. However, if I use the Chrome extension Page Assist, the UNTIL is still the default 5 min.

I'm having the same problem, do you have a solution?


@tzelalouzeir commented on GitHub (Mar 4, 2025):

How do I make OLLAMA_KEEP_ALIVE effective in the official Ollama Docker image? I cannot restart Ollama via systemctl restart ollama because there is no systemctl.

If you are using Docker, you need to forget about the systemd ollama service and instead use docker restart ollama_gpu. You also need to set the environment variables before building the container, because you can't reach them via systemctl. For example, an ollama_gpu setup:

docker-compose.yml:

services:
  ollama_gpu:
    image: ollama/ollama
    container_name: ollama_gpu
    environment:
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_KEEP_ALIVE=-1   # added for this thread's topic: keep models loaded indefinitely
    ports:
      - "11434:11434"
    volumes:
      - ollama_gpu:/root/.ollama
    runtime: nvidia

volumes:
  ollama_gpu:
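
To apply and check that it took effect (service and container names as in the compose file above):

docker compose up -d
docker exec ollama_gpu ollama ps    # UNTIL should read Forever once a model is loaded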

@AndregoVersailles commented on GitHub (May 22, 2025):

So, the way to work around this (the OP appears to be on Windows, while the examples above are from Mac/Linux, as stated above) is:
ollama run (model of choice) --keepalive 999999h
It will not accept more than a million hours, and on Windows it will only accept hours as a unit. This seems to be treated as the equivalent of the "-1" used above, which it will not accept as a value. When you then run ollama ps, it reports UNTIL Forever.


@sweihub commented on GitHub (Jan 30, 2026):

Environment="OLLAMA_KEEP_ALIVE=-1"

This works for me:

sudo systemctl stop ollama
sudo vim /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl start ollama

where /etc/systemd/system/ollama.service.d/override.conf contains:

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_KEEP_ALIVE=-1"

@dimensiondata commented on GitHub (Mar 20, 2026):

In ollama.service, OLLAMA_KEEP_ALIVE=-1 is not effective.

The reason is as follows:

a keep_alive sent in the request payload overrides OLLAMA_KEEP_ALIVE=-1.

When I remove this parameter, it is effective:

ollama ps
NAME                     ID              SIZE     PROCESSOR    UNTIL
qwen3-14b-0129:latest    aa27a159c061    15 GB    100% GPU     Forever

import requests
from typing import Any, Dict

payload: Dict[str, Any] = {
    "model": model,
    "messages": messages,
    "options": options,
    "stream": True,
    # Omitting keep_alive lets the server-side OLLAMA_KEEP_ALIVE=-1 apply;
    # either of these lines would override it per request:
    # "keep_alive": keep_alive,
    # "keep_alive": "-1",
}
with requests.post(url, json=payload, stream=True, timeout=timeout) as r:
    ...
