[GH-ISSUE #6271] ollama will restart processes if 2 GPU is running for the same large model when new request comes #3928

Closed
opened 2026-04-12 14:48:16 -05:00 by GiteaMirror · 8 comments

Originally created by @FreemanFeng on GitHub (Aug 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6271

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

First, Ollama starts two processes to use the capacity of both GPUs (NVIDIA RTX A5000) to load the large model qwen2:72b.
Then, after the last request has been handled, when a new request for the same model arrives, Ollama kills the current processes and restarts them to load the same model again.

It is expected that Ollama keeps the same model loaded until the keep-alive timeout expires.
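
For context, how long a model stays loaded is controlled by the keep-alive setting; below is a minimal sketch of checking and overriding it (assuming the default API endpoint on localhost:11434 and the same model name):

```
# Show loaded models and how long they will stay resident (UNTIL column).
ollama ps

# Preload the model and ask the server to keep it resident for 1 hour
# (a generate request without a prompt just loads the model).
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2:72b",
  "keep_alive": "1h"
}'
```

The same effect can be applied server-wide with the OLLAMA_KEEP_ALIVE environment variable.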

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.2

GiteaMirror added the bug label 2026-04-12 14:48:16 -05:00

@rick-github commented on GitHub (Aug 9, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) will help in debugging.


@dhiltgen commented on GitHub (Aug 9, 2024):

> ollama start two processes

If you're referring to `ollama.exe` and `ollama_llama_server.exe`, then only the second one is connecting to the GPU. If you see two `ollama_llama_server.exe` processes, that implies you have loaded 2 models.

My suspicion is that you may be changing some parameter, which is causing the model to be reloaded. If the same model with the same settings is requested, it should use the already loaded model. As Rick mentioned, logs will help us understand what's going on and whether this is expected behavior or a bug.
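
As a rough illustration of that point, identical back-to-back requests should hit the already loaded runner, while changing a load-time option between requests forces a reload (a sketch only; the `num_ctx` value here is arbitrary):

```
# Two identical requests in a row should reuse the loaded model:
curl http://localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"hi"}],"stream":false}'
curl http://localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"hi"}],"stream":false}'

# Changing a load-time option such as num_ctx forces a reload:
curl http://localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"hi"}],"stream":false,"options":{"num_ctx":8192}}'
```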


@mrmiket64 commented on GitHub (Aug 11, 2024):

Hi, I am having the same issue.

I am running Ollama 0.3.4 on Ubuntu 22.04.

Please see the attached logs.
[ollama_troubleshooting_logs.txt](https://github.com/user-attachments/files/16573733/ollama_troubleshooting_logs.txt)

Could you please help us to correct it?

Thank you

Best regards
Miguel


@rick-github commented on GitHub (Aug 11, 2024):

Model `qwen2:72b-instruct-q4_0` is loaded at Aug 11 09:20:23 and then again at 09:24:44, with no indication of a model unload in between. `OLLAMA_KEEP_ALIVE=1h0m0s`. Can you add `OLLAMA_DEBUG=1` to your server environment and add the logs after you see this happen again?
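
On an Ubuntu install managed by systemd, one way to set that variable is a drop-in override for the service (a sketch, assuming the stock `ollama.service` unit):

```
# Add the variable as a systemd drop-in:
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"

sudo systemctl daemon-reload
sudo systemctl restart ollama

# Follow the (now much more verbose) server log:
journalctl -u ollama -f
```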


@mrmiket64 commented on GitHub (Aug 14, 2024):

Sure, please see the attached file with the logs in debug mode.
[20240814_troubleshooting_logs.txt](https://github.com/user-attachments/files/16617429/20240814_troubleshooting_logs.txt)

For this test I only did the following:
Step 1: `$ ollama run qwen2:72b`
[Screenshot 1](https://github.com/user-attachments/assets/017605f4-8bd9-4b01-b951-5b3e76c0e03c)

Step 2: `>>> Hi, how are you?`
[Screenshot 2](https://github.com/user-attachments/assets/6a36881f-fc29-4489-9565-8ba1dbb51a0d)

Step 3: `>>> Thank you`
[Screenshot 3](https://github.com/user-attachments/assets/8113031f-c500-4970-a87b-af840764bfdf)

Notes:

  • With every interaction it looks like the ollama process is restarted (receiving a new PID), and the NVTOP graph shows the memory being released and reloaded (a quick command-line check is sketched after these notes).
  • I thought about taking the screenshots after the log capture, but the steps and behavior are the same.
  • I see the same behavior with llama3.1:70b.
  • This does not happen with the model llama3.1:8b-instruct-q8_0.
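
One way to confirm the restart from a terminal (a sketch; `ollama_llama_server` is the runner process name mentioned earlier in the thread):

```
# If the runner PID changes after every prompt, the model is being
# torn down and reloaded rather than reused.
watch -n1 'pgrep -af ollama_llama_server; echo; ollama ps'
```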

Thanks!


@rick-github commented on GitHub (Aug 14, 2024):

```
Aug 14 18:24:04 neuron0 ollama[35955]: time=2024-08-14T18:24:04.434Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=16 layers.split=8,8 memory.available="[7.8 GiB 7.8 GiB]" memory.required.full="49.8 GiB" memory.required.partial="15.4 GiB" memory.required.kv="5.0 GiB" memory.required.allocations="[7.7 GiB 7.7 GiB]" memory.weights.total="41.8 GiB" memory.weights.repeating="40.8 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="2.6 GiB" memory.graph.partial="2.6 GiB"
Aug 14 18:24:04 neuron0 ollama[35955]: time=2024-08-14T18:24:04.435Z level=INFO source=server.go:392 msg="starting llama server" cmd="/tmp/ollama3305466951/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 16 --verbose --parallel 1 --tensor-split 8,8 --port 35587"
Aug 14 18:24:21 neuron0 ollama[35955]: time=2024-08-14T18:24:21.254Z level=DEBUG source=sched.go:571 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb
Aug 14 18:24:21 neuron0 ollama[35955]: time=2024-08-14T18:24:21.254Z level=DEBUG source=sched.go:278 msg="resetting model to expire immediately to make room" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb refCount=0
Aug 14 18:24:23 neuron0 ollama[35955]: time=2024-08-14T18:24:23.210Z level=INFO source=server.go:392 msg="starting llama server" cmd="/tmp/ollama3305466951/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 16 --verbose --parallel 1 --tensor-split 8,8 --port 42907"
```

This looks a lot like https://github.com/ollama/ollama/issues/6148, except with two GPUs. The GPUs have 7.8G available and 7.7G is allocated for the model. The model is loaded, does inference, and when another request comes in, ollama checks whether it can re-use the current model in the function `needsReload`, which is what prints `evaluating already loaded`. For some reason `needsReload` decides that the current model doesn't match the constraints of the new request, unloads the model, and then re-loads the exact same configuration. `needsReload` is a pretty simple function and it's not clear why it's failing in your case - I've added debugging locally and can't figure out why it would fail (I haven't been able to replicate this problem of every request causing a model reload).
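
One way to pull just those scheduler decisions out of a long debug log (a sketch for a systemd install; the strings are the same ones that appear in the excerpt above):

```
# "evaluating already loaded" followed by "resetting model to expire
# immediately" and a fresh "starting llama server" is the reload cycle
# described above.
journalctl -u ollama --since "1 hour ago" | \
  grep -E 'evaluating already loaded|resetting model to expire|starting llama server'
```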

The workaround is to tell ollama not to offload so many layers. At the moment, it's offloading 16 layers, 8 per GPU. I would like to try some commands to verify the issue.

First, replicate the current problem:

```
curl localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"Hi, how are you?"}],"stream":false}'
sleep 5
curl localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"Hi, how are you?"}],"stream":false}'
```

Hopefully this will load the model, do a chat completion, wait 5 seconds, then re-load the model and do another chat completion.

Now we see if we can stop the reload:

```
curl localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"Hi, how are you?"}],"stream":false,"options":{"num_gpu":14}}'
sleep 5
curl localhost:11434/api/chat -d '{"model":"qwen2:72b","messages":[{"role":"user","content":"Hi, how are you?"}],"stream":false,"options":{"num_gpu":14}}'
```

Hopefully this time in `nvtop` you won't see the memory graph change and the PID will stay the same. You might have to adjust `14` a bit, maybe to `12`.

If the latter works, then that indicates a problem with ollama's ability to determine whether the current model needs to be reloaded or not, and that at least provides an area for investigation.
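
To check whether the `num_gpu`-limited requests keep the runner resident, something like this run between the two curl calls should show an unchanged PID and a keep-alive deadline (a sketch):

```
# UNTIL shows the keep-alive deadline, PROCESSOR shows the CPU/GPU split;
# the runner PID should not change between the two requests.
ollama ps
pgrep -af ollama_llama_server
```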


@mrmiket64 commented on GitHub (Aug 17, 2024):

Hi again

I performed the tests as you kindly indicated and the results were as you expected. Please see the details below:

Debug log of the entire test steps:
[20220816_troubleshoot_log.txt](https://github.com/user-attachments/files/16643290/20220816_troubleshoot_log.txt)

  1. Running the initial two curl commands, we can see that the issue is replicated.
    Here we can see that with the 1st curl command the PID is 52642.
    [Screenshot 2024-08-16 at 9:39:37 p.m.](https://github.com/user-attachments/assets/2181f691-cbb7-4c05-ae31-a7081348c79d)

    When the 2nd curl kicks in, we can see the memory release and reload; the PID is now 52958.
    [Screenshot 2024-08-16 at 9:40:15 p.m.](https://github.com/user-attachments/assets/4f9a9191-5ba0-4d66-8256-6deac05a7502)

  2. Running the second set of curl commands, the process reloads for the first curl command of the second set; the PID is 53460.
    [Screenshot 2024-08-16 at 9:43:42 p.m.](https://github.com/user-attachments/assets/5c1a3863-206e-4ffc-8013-50afcf5dc334)

    Then the second curl command of the second set starts and, as you predicted, there is no model reload; we still have the same PID, 53460.
    [Screenshot 2024-08-16 at 9:47:13 p.m.](https://github.com/user-attachments/assets/f8fae642-5222-4948-b1a7-622e11bb2d45)

If you want, I can give you ssh access to the server.

Please let me know if additional detail is needed.

Thank you


@mrmiket64 commented on GitHub (Aug 17, 2024):

Hi again

It seems I found a way to replicate the issue, even with a 1 GPU machine with a smaller model like "llama3.1:latest".

In order to replicate it, I am using the following program to simulate users:
[ollamaldr.txt](https://github.com/user-attachments/files/16643590/ollamaldr.txt)
To stress the server, I entered 200 users running for 60 sec.

To replicate the behavior, I set a high number of parallel request slots via the `OLLAMA_NUM_PARALLEL` variable.
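
For anyone trying to reproduce this without the attached script, here is a crude bash stand-in under the same assumptions (systemd install, llama3.1:latest already pulled, 200 clients for roughly 60 seconds):

```
# Raise the parallelism on the service, then restart it.
# In the systemd drop-in ("sudo systemctl edit ollama.service") add:
#   [Service]
#   Environment="OLLAMA_NUM_PARALLEL=20"
sudo systemctl restart ollama

# Crude load generator: 200 concurrent clients hitting the same model.
end=$((SECONDS + 60))
for i in $(seq 200); do
  ( while [ "$SECONDS" -lt "$end" ]; do
      curl -s http://localhost:11434/api/generate \
        -d '{"model":"llama3.1:latest","prompt":"Hi, how are you?","stream":false}' \
        > /dev/null
    done ) &
done
wait
```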

**TESTING**

**Test 1:** Running with `OLLAMA_NUM_PARALLEL=18` or lower.
[Screenshot 2024-08-16 at 11:35:50 p.m.](https://github.com/user-attachments/assets/f1c5c85c-537e-4311-b95d-a5ffe77ab5fc)

The system runs OK.
[Screenshot 2024-08-16 at 11:36:03 p.m.](https://github.com/user-attachments/assets/fab31945-5f04-4634-8d18-0d8376a50cef)

**Test 2:** Running with `OLLAMA_NUM_PARALLEL=20` or higher.
[Screenshot 2024-08-16 at 11:39:00 p.m.](https://github.com/user-attachments/assets/ff3bd8ea-819b-4584-8136-ef8e95205975)

Now the system is reloading the model for every request.
[Screenshot 2024-08-16 at 11:41:24 p.m.](https://github.com/user-attachments/assets/35017451-0218-4323-be06-8fdb41e47026)

I could replicate the exact same behavior on another server I have with a single 1080 Ti, just by simulating 200 users for 60 seconds and increasing the `OLLAMA_NUM_PARALLEL` variable until the behavior appeared.

Hope it helps.

All the best
Miguel

PS. Interestingly, running with the same `llama3.1:latest` model, if I set the `OLLAMA_NUM_PARALLEL` variable to a low enough value, the system only loads the model onto 1 GPU.
[Screenshot 2024-08-17 at 12:01:30 a.m.](https://github.com/user-attachments/assets/8d502a01-966e-40a3-824a-d35712bbc822)

Here we can see that only the second GPU's memory is being used.
[Screenshot 2024-08-17 at 12:01:38 a.m.](https://github.com/user-attachments/assets/99b19522-93ea-429c-b62a-b5819f66ea8a)

Reference: github-starred/ollama#3928