[GH-ISSUE #7927] Multiple ollama_llama_server process are created and then not released #67129

Closed
opened 2026-05-04 09:31:26 -05:00 by GiteaMirror · 22 comments

Originally created by @zxq9133 on GitHub (Dec 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7927

What is the issue?

After a period of use, nvidia-smi shows multiple processes using the GPU, but only one of them is actually doing work. You can confirm this by comparing against the output of the ollama ps command.
[screen_shot](https://github.com/user-attachments/assets/d5799b73-6ed0-4b0b-8c61-fcc2531e0e50)
In the screenshot above, only process 2115700 is valid; it is clear that two other processes, 1990922 and 2036868, are holding a fixed amount of GPU memory, and another process, 2117261, is still running.

OS

Linux

GPU

Nvidia

CPU

Other

Ollama version

0.3.5

GiteaMirror added the bug label 2026-05-04 09:31:26 -05:00

@zxq9133 commented on GitHub (Dec 4, 2024):

Refer to another set of data

(base) xxng$ ollama ps
NAME ID SIZE PROCESSOR UNTIL
codeqwen:chat df352abf55b1 6.8 GB 100% GPU 27 minutes from now
qwen2.5:3b 357c53fb659c 3.1 GB 100% GPU 2 minutes from now
(base) xxng$ nvidia-smi
Wed Dec 4 15:13:24 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A100-PCIE-40GB Off | 00000000:17:00.0 Off | 0 |
| N/A 31C P0 58W / 250W | 38781MiB / 40960MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1573 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 708055 C text-embeddings-router 1586MiB |
| 0 N/A N/A 1330781 C /app/api/.venv/bin/python 2812MiB |
| 0 N/A N/A 1398516 C Model: whisper-large-v3-turbo-0 2292MiB |
| 0 N/A N/A 1630783 C python 7158MiB |
| 0 N/A N/A 1990922 C ...unners/cuda_v11/ollama_llama_server 6692MiB |
| 0 N/A N/A 2036868 C ...unners/cuda_v11/ollama_llama_server 6692MiB |
| 0 N/A N/A 2117261 C ...unners/cuda_v11/ollama_llama_server 6692MiB |
| 0 N/A N/A 2130892 C ...unners/cuda_v11/ollama_llama_server 3226MiB |
| 0 N/A N/A 3573426 C text-embeddings-router 1554MiB |
+---------------------------------------------------------------------------------------+
(base) xxng$ ^C
(base) xxng$ top -b | grep ollama
1950229 ollama 20 0 9378976 699876 287400 S 23.5 0.5 4:21.30 ollama
1330180 ollama 20 0 33580 4652 2028 S 0.0 0.0 0:44.99 redis-server
1411787 ollama 20 0 2910268 4440 500 S 0.0 0.0 871:07.99 mysqld
1990922 ollama 20 0 46.2g 1.3g 396956 S 0.0 1.0 0:03.92 ollama_llama_se
2036868 ollama 20 0 46.1g 1.3g 397020 S 0.0 1.0 0:03.44 ollama_llama_se
2117261 ollama 20 0 48.2g 1.3g 397696 S 0.0 1.0 0:03.19 ollama_llama_se
2130892 ollama 20 0 43.0g 1.6g 627340 S 0.0 1.2 0:02.66 ollama_llama_se

@zxq9133 commented on GitHub (Dec 4, 2024):

Below is a two-hour log attachment. Please remove the .txt suffix; the original log is about 150 MB.
[ollama_new.7z.txt](https://github.com/user-attachments/files/18004894/ollama_new.7z.txt)

@fxmbsw7 commented on GitHub (Dec 4, 2024):

I suppose they could be simple subprocesses... I think they can share a lot, including metadata.

@YonTracks commented on GitHub (Dec 4, 2024):

I have a similar issue, on Windows. If for whatever reason Ollama silently crashes and restarts itself, there are no errors and the log resets as well, so the error / debug logs are lost, but it leaves orphaned ollama_llama_server processes behind (e.g. using llava with context / message history from llama3.1 / 3.2, or using Continue in VS Code when something goes wrong). I fixed my issue by tracking the PIDs and removing orphaned server processes: `slog.Debug("detected orphaned llama server processes", "pids", orphanedPIDs)`
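
For illustration, a minimal Go sketch of what such a cleanup could look like on Linux. This is an assumption about the general idea, not the poster's actual code; the process-name match and the PID 1 re-parenting check are simplifications.

```go
// orphan_sweep.go: hypothetical sketch, not part of ollama. It scans /proc for
// ollama_llama_server runners whose parent has exited (they get re-parented to
// PID 1 on a typical Linux setup) and sends them SIGTERM.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a process directory
		}
		// /proc/<pid>/comm holds the (truncated) executable name.
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil || !strings.HasPrefix(string(comm), "ollama_llama_se") {
			continue
		}
		// Field 4 of /proc/<pid>/stat is the parent PID; 1 means the
		// original ollama server is gone and this runner is orphaned.
		stat, err := os.ReadFile(filepath.Join("/proc", e.Name(), "stat"))
		if err != nil {
			continue
		}
		fields := strings.Fields(string(stat))
		if len(fields) > 3 && fields[3] == "1" {
			fmt.Printf("terminating orphaned runner pid=%d\n", pid)
			_ = syscall.Kill(pid, syscall.SIGTERM)
		}
	}
}
```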

@YonTracks commented on GitHub (Dec 4, 2024):

To re-create the issue: using the generate endpoint with context[] (yes, I know context is being deprecated) does it easily! First use llama3.1, then use the context / message history with llava, but there are many ways this issue can be triggered. Good luck. Here's my attempt, lol: https://github.com/YonTracks/ollama-yontracks/commit/9eb047d01c3d45acb6200f9c6dfe1358e0ae2047
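
For concreteness, a rough Go sketch of the reproduction described above: call /api/generate (the default local endpoint is http://localhost:11434) with llama3.1, then feed the returned, deprecated context array to llava. The model names and prompts are placeholders, and whether this actually leaves a stale runner behind is only what the comment suggests.

```go
// repro_context.go: hypothetical reproduction sketch based on the comment above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateRequest struct {
	Model   string `json:"model"`
	Prompt  string `json:"prompt"`
	Context []int  `json:"context,omitempty"` // deprecated field mentioned in the comment
	Stream  bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
	Context  []int  `json:"context"`
}

func generate(req generateRequest) (generateResponse, error) {
	body, _ := json.Marshal(req)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return generateResponse{}, err
	}
	defer resp.Body.Close()
	var out generateResponse
	return out, json.NewDecoder(resp.Body).Decode(&out)
}

func main() {
	// First turn with llama3.1, capturing its context tokens.
	first, err := generate(generateRequest{Model: "llama3.1", Prompt: "Hello", Stream: false})
	if err != nil {
		panic(err)
	}
	// Second turn hands llama3.1's context to llava, the cross-model reuse
	// the comment points to as a trigger.
	second, err := generate(generateRequest{Model: "llava", Prompt: "Continue the story", Context: first.Context, Stream: false})
	fmt.Println(second.Response, err)
}
```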

@zxq9133 commented on GitHub (Dec 5, 2024):

> i suppose it could be simple subprocesses .. i think they can share much , also in metadata

Thanks, but it seems to be more than that; the log shows it keeps trying to load the same model repeatedly. Ollama output a lot of logs in a short time, and I don't understand them very well.

@zxq9133 commented on GitHub (Dec 5, 2024):

> re-create the issue: Using the generate endpoint with context[] (yes, I know the context is being depreciated) does it easy! first use llama3.1 and using the context / message history with llava, but there are many ways this issue is caused. Good luck. heres my attempt lol. [YonTracks@9eb047d](https://github.com/YonTracks/ollama-yontracks/commit/9eb047d01c3d45acb6200f9c6dfe1358e0ae2047)

OK, thanks. Glad you already have a solution. From a simple analysis of the logs, ollama processes are being rebuilt all the time, and then orphans are left behind. I will also try your suggestion.

@fxmbsw7 commented on GitHub (Dec 10, 2024):

Did you have the internet going off and on? I noticed recently, in Termux on my phone, that if I let a model download, then switch away from Termux, turn the internet off and then on again, and go back to Termux, the download restarts at 0% (or the last saved state), just as when the command started...

Maybe it's somehow related.

@YonTracks commented on GitHub (Dec 17, 2024):

Windows, VS Code, Continue:
This happens on Windows with Continue and VS Code, and it also loops at times. I think it is mostly Continue and VS Code causing the issue. I have no issues or surprises using my own UI unless I purposefully trigger error-causing actions, lol. I will keep testing and learning what is happening; getting the server logs at the right moment of the issue is hard, lol.
[ollama-embedding-loop.txt](https://github.com/user-attachments/files/18162854/ollama-embedding-loop.txt)
With 0.5.3 I can capture the logs, lol. It seems to be trying to embed the whole repo, lol.
I can fix the orphaned server processes, but I reckon there are still related looping issues, so it is only a temporary fix for now (I regularly test the official ollama repo via OllamaSetup.exe). I am trying to trace the initial cause and reasons; the other loop I found was an expired runner retrying and looping until timeout or crash.

@YonTracks commented on GitHub (Dec 17, 2024):

Found how to re-create the current embed looping issue:
So far with VS Code, Windows 11, nomic-embed-text and Continue on 0.5.3, but similar with earlier releases.
ollama installed via OllamaSetup.exe.

  1. First (with ollama not running) open VS Code (the current test is the ollama fork repo, but I have had the same / similar with other projects), then start ollama via the Windows ollama icon (you will see the tray icon appear).
  2. Open the server.log via the tray icon's "view logs" and open it with VS Code.
    No issue yet.
  3. Inside VS Code (currently the ollama repo), right-click dist/OllamaSetup.exe and choose Reveal in File Explorer.
    Looping... GPU maxed; the nomic model uses about 1.9 GB of VRAM or so, but it runs at 90+% and the CPU quickly starts to climb.
    [continue-config.txt](https://github.com/user-attachments/files/18164104/continue-config.txt)

So far this is an easy way to make the issue happen, but it has happened to me without using VS Code (only loading the logs into VS Code to view them, so not always large repo embeddings), and also while using a local dev build with serve or start.

I don't think it is caused by ollama; if everything is set up correctly and ollama is used correctly, I think all is good, but if something is not, you get this issue or a similar one.

For now I just keep using my fork with the orphaned-server fix and monitor resources if I leave ollama running, or just let it crash, lol; when it restarts, the orphaned-server fix catches the orphans, and if it is looping my fans kick in, lol. Good luck.

If I find anything worthy, I will share. Cheers.

@fxmbsw7 commented on GitHub (Feb 4, 2025):

I think it's a VS Code issue.
Does it use ollama at that point? If not, it's not that.
So it may be (1) a VS Code bug, loop, or crash, due to its reading a file that updates occasionally,
or (2) not enough OLLAMA_PARALLEL set
... or of course any other bug =pp

@fxmbsw7 commented on GitHub (Feb 4, 2025):

Same bug with other software that has updating log files? Sounds like it to me.

@YonTracks commented on GitHub (Feb 4, 2025):

> i think its a vscode issue it uses at that point ollama ? else its not that so it may be 1 vscode bug loop crash , due to its reading a file that updates occasionally , or and 2 not enuff OLLAMA_PARALLEL set .. or of course any other bug =pp

Yep, I agree. Continue and VS Code were my issue, along with the orphaned processes, long embeddings, and logs not updating. It was hard to see the actual reason, as VS Code and Continue have their own logs and ollama would restart.

If an ollama server process was running while ollama crashed for some reason (e.g. due to wrong params, image, context, etc.), it would be orphaned, but I have had no problems since, and I tried to make it fail. If I use nomic-embed-text the load time is slow for big projects, but after VS Code and Continue finish the embeddings, wow. Super impressed, love ollama lots.

@YonTracks commented on GitHub (Feb 4, 2025):

I believe this can be closed? epic cheers.

@magnussp commented on GitHub (Feb 14, 2025):

What is the solution for using ollama together with VS Code and Continue without this issue occurring? I need to restart ollama several times a day since the orphaned ollama process is hogging VRAM on the GPU.

@YonTracks commented on GitHub (Feb 14, 2025):

> What is the solution to use ollama together with vscode and continue without having this issue occur? I need to restart ollama several times a day since the orphaned ollama process is hogging vram from the GPU.

It should be fixed now? Update to 0.5.10, and update Continue and VS Code as well. Maybe it can still happen if Continue is indexing and you cancel, or you quit ollama while embedding with multiple VS Code windows (but I can't get it to fail any more). Set Continue up properly and wow, very good. I have tested and tested and tested, lol; I did manage to get an orphan with 0.5.7 - 0.5.8, but everything seems very fixed and very good.

@YonTracks commented on GitHub (Feb 14, 2025):

If embeddings are too large and take too long for an embedding model running on the GPU via ollama, try

{
  "embeddingsProvider": {
    "provider": "transformers.js"
  }
}

instead. The CPU is used, freeing up the GPU; it still works well, just not as well as nomic imo.
Indexing can be paused, and there is also a Continue ignore file (like .gitignore) for what to ignore:
https://docs.continue.dev/customize/model-types/embeddings/
Good luck.

@magnussp commented on GitHub (Feb 14, 2025):

@YonTracks wow! That was a super fast answer 👍 And thanks for clarifying the solution. I will try that and hope that it resolves the issue for me as well. All the best!

@YonTracks commented on GitHub (Feb 14, 2025):

You should be very happy once it's sorted.
A celebration of, imo, the best out there? 0.5.10+, love it. Yep, most likely biased, but I do test a lot: my local ollama llama3.1 8b with Continue plus tools, indexing, and embedding, and then qwq with tools... and I can switch and really get quality. So far it keeps up with the big guys (GPT+, DeepSeek, the big models), though I can only test paid GPT-4+ and models up to 32b; I only use the paid ones for testing, true. And imo I still have a lot more I can do to get better (default models so far), but earlier tests show Modelfile params etc. matter.

@YonTracks commented on GitHub (Feb 14, 2025):

I thought I'd better test more and make sure, lol. I jinxed it, far out: yep, pretty sure I can also get multiple ollama.exe orphans with 0.5.10 - latest. I will see exactly what I did; more testing. So far all was good, but I have been installing, updating, and testing the latest ollama, so ollama and the ollama.exe processes are being reset anyway (the older ollama would not reset them). I'm currently testing the overriding params like keepAlive vs OLLAMA_KEEP_ALIVE, with their own differences, bugs like ollama ps showing nothing, and also possible orphans from that. Also ollama serve vs go run . serve vs ./ollama serve; imo it is always best to only use the app icon on Windows, and other small picky stuff.

Trial and error; when it's good, it's really good. If you're a good communicator (not like me, I am bad, sorry), share what you learn.
I will test more anyway.
Good luck

@YonTracks commented on GitHub (Feb 14, 2025):

Continue config example:

{
  "models": [
    {
      "title": "llama3.1",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ],
  "completionOptions": {
    "stream": false,
    "temperature": 0.5,
    "keepAlive": 900
  },
  "tabAutocompleteModel": {
    "title": "qwen2.5-coder",
    "provider": "ollama",
    "model": "qwen2.5-coder"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}

I can confirm that Continue will cause an issue if the config.json is bad. A good way to check is via ollama ps: nothing shows when it should, or there are orphans, or other issues...

Cheers, good luck testing, or find safe default settings; hopefully it will be sorted soon.

@YonTracks commented on GitHub (Feb 14, 2025):

I'm wondering whether ollama should prefer the actual system environment variables, i.e. OLLAMA_KEEP_ALIVE vs keepAlive, ollama should win. So far, if I set the OLLAMA_KEEP_ALIVE env variable, Continue will override it when using Continue; yes, it can be set in Continue and .continue, but if it's invalid, then ollama should handle that somehow?

So far my "check for orphaned processes" is looking like it's making a comeback, lol. Not sure; I'll test more, hopefully tomorrow.
Edit: the "check for orphaned processes" won't come back; the ollama methods are handling that in my testing, since when you restart ollama it resets. Edit: na, imo the current env variable handling is correct. If everything is set up correctly it is fine, so this seems like correct, expected behavior.
Again, cheers, good luck
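
As a small illustration of the precedence discussed above, a per-request keep_alive (what Continue sends as keepAlive) overrides the server's OLLAMA_KEEP_ALIVE default for that load. This sketch assumes the documented keep_alive field of /api/generate; the model name and local endpoint are placeholders. Passing 0 unloads the model right after the response, which you can confirm with ollama ps.

```go
// keepalive_check.go: hypothetical sketch of a per-request keep_alive overriding
// the OLLAMA_KEEP_ALIVE environment default for a single load.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// keep_alive: 0 asks the server to unload the model immediately after
	// this request, regardless of what OLLAMA_KEEP_ALIVE was set to.
	body := []byte(`{"model": "llama3.1", "prompt": "hi", "stream": false, "keep_alive": 0}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status, "- `ollama ps` should now show no loaded models")
}
```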
