[GH-ISSUE #1871] Switching from a high num_ctx to a model with a low num_ctx causes cuda out of memory errors #1071

Closed
opened 2026-04-12 10:49:28 -05:00 by GiteaMirror · 4 comments

Originally created by @jmorganca on GitHub (Jan 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1871

Originally assigned to: @dhiltgen on GitHub.

When switching from a large context window to a small one (a high `num_ctx` to a low `num_ctx`), Ollama fails with a CUDA out-of-memory error. It seems to incorrectly try to re-allocate the same amount of memory as before, rather than the new, smaller amount.
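For reference, a minimal request sequence matching this description might look like the following (a sketch, assuming a local Ollama server on the default port with the `mistral` model pulled; the prompt is just a placeholder):

```
# Sketch: the first request sizes the context at 16384; the follow-up asks for a
# much smaller context but can still fail with a CUDA out-of-memory error.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Hello",
  "stream": false,
  "options": { "num_ctx": 16384 }
}'

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Hello",
  "stream": false,
  "options": { "num_ctx": 1024 }
}'
```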

GiteaMirror added the bug label 2026-04-12 10:49:28 -05:00

@iplayfast commented on GitHub (Jan 10, 2024):

I wonder if that's what's causing https://github.com/jmorganca/ollama/issues/1691


@dhiltgen commented on GitHub (Feb 27, 2024):

Repro scenario

```
while true; do curl http://localhost:11434/api/generate -d "{
  \"model\": \"mistral\",
  \"prompt\": \"Summarize the following: $(tr -d '\n' < ~/shakespeare.txt)\",
  \"stream\": false, \"options\": {
    \"num_ctx\": 16384
  }
}" || break; curl http://localhost:11434/api/generate -d "{
  \"model\": \"mistral\",
  \"prompt\": \"Summarize the following: $(tr -d '\n' < ~/shakespeare.txt)\",
  \"stream\": false, \"options\": {
    \"num_ctx\": 1024
  }
}" || break; done

I see memory slowly climbing on each iteration, so there's a leak in there somewhere. I'll run under the CUDA memory analysis tools next...
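One way to watch that climb from the outside is to poll `nvidia-smi` in a second terminal while the repro loop runs (a sketch; the query flags are standard `nvidia-smi` options, not something specific to this issue):

```
# Sketch: log GPU memory use once per second so per-iteration growth is visible.
while true; do
  echo "$(date -Is) $(nvidia-smi --query-gpu=memory.used --format=csv,noheader)"
  sleep 1
done
```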


@dhiltgen commented on GitHub (Feb 27, 2024):

Unfortunately, from the CUDA perspective, the memory allocations aren't leaked (they seem to be tracked, and cleaned up on exit) so it's some higher-level logic error in llama.cpp.

(multiple iterations, and I see the `nvidia-smi` memory usage slowly climbing...)

```
compute-sanitizer --tool memcheck --leak-check full ./ollama-linux-amd64 serve
========= COMPUTE-SANITIZER
time=2024-02-27T22:41:19.218Z level=INFO source=images.go:710 msg="total blobs: 20"
time=2024-02-27T22:41:19.219Z level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-02-27T22:41:19.221Z level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.27-11-g076237b)"
time=2024-02-27T22:41:19.221Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-27T22:41:21.958Z level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v6 cpu cpu_avx cpu_avx2 cuda_v11 rocm_v5]"
time=2024-02-27T22:41:21.958Z level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-27T22:41:21.958Z level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-27T22:41:21.959Z level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.545.23.08]"
time=2024-02-27T22:41:21.987Z level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
...

{"function":"update_slots","level":"INFO","line":1680,"msg":"slot released","n_cache_tokens":820,"n_ctx":1024,"n_past":819,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140381368845888","timestamp":1709076897,"truncated":true}
[GIN] 2024/02/27 - 23:34:57 | 200 |         7m25s |       127.0.0.1 | POST     "/api/generate"
========= LEAK SUMMARY: 0 bytes leaked in 0 allocations
========= ERROR SUMMARY: 0 errors
```

@dhiltgen commented on GitHub (Mar 20, 2024):

This should be resolved by #3218
