[GH-ISSUE #9926] Ollama unable to unload model if another program uses (a little) VRAM #6499

Open
opened 2026-04-12 18:04:46 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @agademer on GitHub (Mar 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9926

What is the issue?

Context

I'm using Ollama on a dedicated VM with an L40S GPU (with passthrough).
I'm running it with Docker:

docker run -d --gpus=all --restart always -e OLLAMA_DEBUG=1 -e OLLAMA_NUM_PARALLEL=1 -e OLLAMA_MAX_LOADED_MODELS=1 -v /:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:latest

I have several users with different needs, and they load different models (from llama3.2 to deepseek-r1:70b).

I have an Open-WebUI front-end on another machine that queries Ollama through the API.

With one model loaded at a time, everything was working well.

Description of the bug:

Recently I wanted to add image generation, so I installed ComfyUI alongside Ollama (in its own container).

I started to observe that whenever ComfyUI was running, Ollama would freeze after some (random) amount of time. Only killing the ollama process or restarting the ollama container would bring it back.

At first I thought it was because ComfyUI was taking too much VRAM, but I observed the problem even when ComfyUI had only just started (600 MB of VRAM) and with small models (llama3.2 uses 3 GB of my 48 GB of available VRAM).

I was able to narrow the problem down:

  • loading the model is OK.
  • using the model is OK (several queries without any trouble)
  • unloading the model is the problem.

Each time Ollama tries to unload the model (either to load another one or because the keep-alive timeout is reached), an ollama serve process launches and enters an infinite loop (using one CPU core at 100% forever).

This behavior only occurs if ComfyUI is using some VRAM. Without ComfyUI, the unload completes normally.

That said, I think Ollama should be able to unload models even when ComfyUI is present, so I classify this as a bug.

Visual example:

1/ Before running an Ollama query (the ollama container is running but holds nothing in VRAM; ComfyUI is running but idle and uses a small chunk of VRAM)

![Image](https://github.com/user-attachments/assets/8f5a315e-1c34-433d-98ea-fadc814b1036)

2/ Running an Ollama query (Ollama loads the model and returns the answer. The model stays in VRAM.)

![Image](https://github.com/user-attachments/assets/e4176e2f-ffcf-49f5-89f1-df0352c913dd)

3/ Running a ComfyUI query (ComfyUI loads its model, returns the image, then unloads the model.)

![Image](https://github.com/user-attachments/assets/8fb2aa1f-483d-4e8a-92a9-1c7c9cc606ba)

4/ Running an Ollama query with another model (or waiting 5 minutes): Ollama tries to unload the model --> infinite 100% CPU loop

![Image](https://github.com/user-attachments/assets/35fc7212-cf9f-47a6-9ba8-8a345150173d)

Reproduce

  1. run the comfyui docker container (or any software that holds on to some VRAM)
  2. run the ollama docker container
  3. query ollama (loading a model, receiving an answer)
  4. wait for the model to unload (or provoke it; see the shell sketch below)
  5. observe the infinite loop with htop/ps/nvtop/etc.
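
A rough shell sketch of these steps, assuming Ollama is reachable on localhost:11434 and that the llama3.2 model is already pulled (the model name is just an example); sending `keep_alive: 0` is one way to provoke the unload instead of waiting for the timeout.

```shell
# 1-2. Start any process that holds a little VRAM (e.g. ComfyUI in its own
#      container), then start Ollama.
docker run -d --gpus=all -p 11434:11434 --name ollama ollama/ollama:latest

# 3. Query Ollama: load a model and receive an answer.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "hello", "stream": false}'

# 4. Provoke the unload right away instead of waiting for the ~5 minute timeout.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "keep_alive": 0}'

# 5. Watch for the ollama container pegging one CPU core at ~100%.
docker stats --no-stream ollama
```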

Relevant log output

ollama logs during the infinite loop:

time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:156 msg="max runners achieved, unloading one to make room" runner_count=1
time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:785 msg="found an idle runner to unload"
time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:284 msg="resetting model to expire immediately to make room" modelPath=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff refCount=0
time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:297 msg="waiting for pending requests to complete and unload to occur" modelPath=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:361 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-03-21T10:49:13.400Z level=DEBUG source=sched.go:376 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-03-21T10:49:13.400Z level=DEBUG source=gpu.go:391 msg="updating system memory data" before.total="94.3 GiB" before.free="92.1 GiB" before.free_swap="0 B" now.total="94.3 GiB" now.free="82.6 GiB" now.free_swap="0 B"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.550.120
dlsym: cuInit - 0x74b3a5c7cbc0
dlsym: cuDriverGetVersion - 0x74b3a5c7cbe0
dlsym: cuDeviceGetCount - 0x74b3a5c7cc20
dlsym: cuDeviceGet - 0x74b3a5c7cc00
dlsym: cuDeviceGetAttribute - 0x74b3a5c7cd00
dlsym: cuDeviceGetUuid - 0x74b3a5c7cc60
dlsym: cuDeviceGetName - 0x74b3a5c7cc40
dlsym: cuCtxCreate_v3 - 0x74b3a5c7cee0
dlsym: cuMemGetInfo_v2 - 0x74b3a5c86e20
dlsym: cuCtxDestroy - 0x74b3a5ce1850
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1

(nothing more after that, no matter how long one waits)

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.6.1

GiteaMirror added the bug label 2026-04-12 18:04:46 -05:00
Author
Owner

@agademer commented on GitHub (Mar 21, 2025):

Side note: I'm observing the same behavior when I load two models in VRAM.
(I just remembered that this was one of the reasons for my OLLAMA_MAX_LOADED_MODELS=1 setting 😮)

![Image](https://github.com/user-attachments/assets/ad071b42-4745-4970-9e58-73276d6df90b)
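
For reference, a hedged sketch of how two models can end up resident at the same time (the model names are examples, and OLLAMA_MAX_LOADED_MODELS must be 2 or more for this to happen):

```shell
# Load two different models back to back, then check what is resident in VRAM.
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "hi", "stream": false}'
curl http://localhost:11434/api/generate -d '{"model": "qwen2.5", "prompt": "hi", "stream": false}'
curl http://localhost:11434/api/ps
```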

Author
Owner

@agademer commented on GitHub (Mar 26, 2025):

I tracked the bug down to the call to cuCtxCreate_v3 in gpu_info_nvcuda.c:

https://github.com/ollama/ollama/blob/e5d84fb90b21d71f8eb816656ca0b34191425216/discover/gpu_info_nvcuda.c#L223

When ComfyUI has already generated at least one image, this function neither returns nor times out, and the timeout set in sched.go seems ineffective as well:

https://github.com/ollama/ollama/blob/e5d84fb90b21d71f8eb816656ca0b34191425216/server/sched.go#L642

If I return immediately in waitForVRAMRecovery (as is already done for CPU/Metal/Windows), the code works

(but obviously the VRAM optimization is then not used).
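
To make the idea concrete, here is a minimal, self-contained Go sketch of the workaround described above. It is NOT Ollama's actual code: the names (freeVRAM, skipRecoveryCheck, target) are hypothetical and the real function in server/sched.go differs. It also illustrates why a deadline inside the polling loop is useless when the free-memory query itself never returns.

```go
// Hypothetical illustration only -- not ollama's actual code. It mirrors the
// idea behind waitForVRAMRecovery: poll free VRAM until it recovers or a
// deadline passes, with a switch that returns immediately (the workaround).
package main

import (
	"log"
	"time"
)

// freeVRAM stands in for the real free-memory query which, per this issue,
// ends up in cuCtxCreate_v3 and can block forever while another process holds
// a CUDA context. If it blocks, the deadline check below never runs, which is
// why the sched.go timeout appears ineffective.
func freeVRAM() (uint64, error) { return 0, nil }

func waitForVRAMRecovery(target uint64, skipRecoveryCheck bool) chan struct{} {
	done := make(chan struct{})

	// Workaround: behave like the CPU/Metal/Windows paths and skip polling,
	// at the cost of less accurate VRAM accounting right after an unload.
	if skipRecoveryCheck {
		close(done)
		return done
	}

	go func() {
		defer close(done)
		deadline := time.Now().Add(5 * time.Second)
		for time.Now().Before(deadline) {
			free, err := freeVRAM() // may never return -> goroutine stuck here
			if err == nil && free >= target {
				return
			}
			time.Sleep(250 * time.Millisecond)
		}
		log.Println("gave up waiting for VRAM to recover")
	}()
	return done
}

func main() {
	<-waitForVRAMRecovery(0, true) // true = apply the workaround
	log.Println("unload can proceed")
}
```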

I was not able to find a way to detect beforehand that the call will hang, nor to make the timeout work 😢

Any help on the subject would be appreciated.

Reference: github-starred/ollama#6499