[GH-ISSUE #2490] [Question] Do not offload to CPU RAM #47965

Closed
opened 2026-04-28 06:12:29 -05:00 by GiteaMirror · 3 comments

Originally created by @freQuensy23-coder on GitHub (Feb 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2490

By default, after some period of inactivity, Ollama automatically offloads the model from GPU memory. This causes extra latency on the next request, especially for large models.


@wrapss commented on GitHub (Feb 14, 2024):

look this [PR](https://github.com/ollama/ollama/pull/2146)

<!-- gh-comment-id:1943605740 -->

@hoyyeva commented on GitHub (Mar 11, 2024):

@freQuensy23-coder is this still an issue that you are experiencing? You should be able to set the `keep_alive` to `-1` to prevent that. Let us know if you are still running into the same issue.

<!-- gh-comment-id:1989164841 -->

@Mayorc1978 commented on GitHub (Mar 16, 2024):

> look this [PR](https://github.com/ollama/ollama/pull/2146)

How do you use the `keepalive` parameter when accessing models from code?
Using Ollama for Windows installer.

<!-- gh-comment-id:2002102226 -->
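For reference, when calling Ollama from code, `keep_alive` is passed as a field in the request body to the REST API rather than as a CLI flag. A minimal sketch using only the Python standard library, assuming a local Ollama server on the default port 11434 (the model name and prompt below are placeholders):

```python
import json

# Sketch of an Ollama /api/generate request body with keep_alive set.
# keep_alive accepts a duration string (e.g. "10m") or -1 to keep the
# model loaded in memory indefinitely; "llama2" is a placeholder model.
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "keep_alive": -1,  # never unload the model after this request
}

# Sending it requires a running Ollama server, e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       for line in resp:          # streaming JSON lines
#           print(json.loads(line))

print(json.dumps(payload))
```

The same `keep_alive` field also works on `/api/chat` requests; setting it per request avoids needing to restart or reconfigure the server.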

Reference: github-starred/ollama#47965