[GH-ISSUE #7350] Ollama keeps reloading the same model repeatedly #51182

Closed
opened 2026-04-28 18:53:20 -05:00 by GiteaMirror · 4 comments

Originally created by @cray1031 on GitHub (Oct 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7350

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

docker run -d --gpus=all -v /data/ollama:/root/.ollama -p 9112:11434 -e OLLAMA_ORIGINS="*" -e OLLAMA_NUM_PARALLEL=15 -e OLLAMA_KEEP_ALIVE=2h -e OLLAMA_DEBUG=1 --name ollama_v0314 ollama/ollama:latest

releasing cuda driver library
time=2024-10-25T03:39:46.460Z level=DEBUG source=server.go:1086 msg="stopping llama server"
time=2024-10-25T03:39:46.460Z level=DEBUG source=server.go:1092 msg="waiting for llama server to exit"
time=2024-10-25T03:39:46.549Z level=DEBUG source=server.go:1096 msg="llama server stopped"
time=2024-10-25T03:39:46.549Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:46.711Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:46.940Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.132Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:47.329Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.547Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.745Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:47.948Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.119Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.273Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:48.273Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:48.458Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.606Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:48.753Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:48.900Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.087Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.235Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.417Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.565Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:49.565Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:49.758Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:49.911Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:50.058Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.204Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.403Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.565Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.711Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:50.858Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:50.858Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.861122605 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:50.858Z level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:50.858Z level=DEBUG source=sched.go:308 msg="ignoring unload event with no pending requests"
time=2024-10-25T03:39:50.858Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:51.071Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.219Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:51.414Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.591Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:51.784Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.011Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.171Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.359Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:52.360Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=7.362796717 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
time=2024-10-25T03:39:52.360Z level=DEBUG source=gpu.go:396 msg="updating system memory data" before.total="501.9 GiB" before.free="482.1 GiB" before.free_swap="0 B" now.total="501.9 GiB" now.free="482.1 GiB" now.free_swap="0 B"
CUDA driver version: 11.7
time=2024-10-25T03:39:52.566Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-556705f3-b8be-5aa8-7580-ce93ecdb297e name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:52.720Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-45d04aa6-0eed-614b-1b71-7fedc703efb6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="30.4 GiB" now.total="31.7 GiB" now.free="30.4 GiB" now.used="1.4 GiB"
time=2024-10-25T03:39:52.868Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-8a9608c9-324e-1c6b-dfd0-b1bb2ee1ad92 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.045Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-007f6b5a-bb43-c925-1206-1985428efa33 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.383Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-4b8aaa40-bd18-365a-3057-b6523495086c name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.652Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-d298f359-e8fd-12a0-5c50-d9c5b75ca8b6 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:53.923Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-b369557d-d7cf-f951-1c83-302ebb5ce7f3 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
time=2024-10-25T03:39:54.161Z level=DEBUG source=gpu.go:444 msg="updating cuda memory data" gpu=GPU-f30d65b6-22cd-76ba-2cc3-22fe7b481010 name="Tesla V100-SXM2-32GB" overhead="0 B" before.total="31.7 GiB" before.free="31.4 GiB" now.total="31.7 GiB" now.free="31.4 GiB" now.used="306.8 MiB"
releasing cuda driver library
time=2024-10-25T03:39:54.161Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=9.164325299 model=/root/.ollama/models/blobs/sha256-ced7796abcbb47ef96412198ebd31ac1eca21e8bbc831d72a31df69e4a30aad5
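
The repeated scan cycles above make the log noisy; the relevant scheduler warnings can be isolated with a quick grep. This is an illustrative sketch: the sample log file below contains two stand-in lines in the same format as the log above, while in practice you would first capture the real output with something like `docker logs ollama_v0314 > /tmp/ollama-debug.log 2>&1`.

```shell
# Write a two-line sample in the same format as the log above (illustrative).
cat > /tmp/ollama-debug.log <<'EOF'
time=2024-10-25T03:39:50.858Z level=DEBUG source=sched.go:384 msg="sending an unloaded event"
time=2024-10-25T03:39:50.858Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.861122605
EOF

# Keep only the scheduler warnings, dropping the per-GPU DEBUG scan lines.
grep 'level=WARN' /tmp/ollama-debug.log
```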

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.1.34

GiteaMirror added the needs more info and bug labels 2026-04-28 18:53:20 -05:00

@rick-github commented on GitHub (Oct 25, 2024):

Your log doesn't show that ollama is reloading the same model repeatedly. It shows that ollama is waiting for the resources given to the just-unloaded model to be returned. That is taking a long time; if you add a more complete log, the reason may become clear.
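
The behaviour described here — polling until the unloaded runner's VRAM is returned, and warning when it takes too long — can be sketched roughly in plain shell. This is illustrative only: the real loop is Go code in Ollama's sched.go, and `vram_recovered` below is a stand-in probe, not a real GPU query.

```shell
# Rough sketch of the VRAM-recovery wait seen in the log (illustrative).
timeout_s=5
start=$(date +%s)
vram_recovered() {
  # Stand-in: pretend the VRAM is returned after 2 seconds. A real check
  # would compare current free GPU memory against the runner's footprint.
  [ $(( $(date +%s) - start )) -ge 2 ]
}
elapsed=0
while ! vram_recovered; do
  elapsed=$(( $(date +%s) - start ))
  if [ "$elapsed" -ge "$timeout_s" ]; then
    echo "gpu VRAM usage didn't recover within timeout seconds=$elapsed"
    break
  fi
  sleep 1
done
echo "runner released"
```

In the log above the memory never comes back, so the timeout branch fires repeatedly with growing `seconds=` values (5.86, 7.36, 9.16).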


@dhiltgen commented on GitHub (Oct 25, 2024):

Might be a dup of #7130 if the model in question is small


@rick-github commented on GitHub (Oct 25, 2024):

The model is qwen2.5-coder:7b-instruct-q4_K_M, 4.7G. Given the size of the GPU, #7130 sounds plausible.


@rick-github commented on GitHub (Nov 17, 2024):

closing as dupe

Reference: github-starred/ollama#51182