Ollama Runner Fails with “Exit Status 2” and Random Non-Responsive Behavior on Windows #8587

Open
opened 2025-11-12 14:46:34 -06:00 by GiteaMirror · 23 comments
Owner

Originally created by @Anurag1940 on GitHub (Nov 4, 2025).

Ollama Runner fails intermittently on Windows when running models like llama3.2, gemma3:4b, and phi3:mini.

When executing a simple command such as:

ollama run llama3.2 "Hello"

it either terminates immediately with:

Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

or hangs indefinitely without providing any output or visible error.

This happens both in GPU mode and CPU-only mode ($env:OLLAMA_NO_GPU=1).

Expected behavior: the model should initialize and respond normally without termination or hanging.

Actual behavior:

The process stops abruptly or becomes non-responsive.

Logs indicate “entering low VRAM mode” despite having sufficient system memory (~11.7 GiB total).

Restarting the Ollama daemon and re-pulling models did not resolve the issue.

Relevant log output:

time=2025-11-01T10:28:26.946+05:30 level=INFO source=routes.go:1577 msg="Listening on 127.0.0.1:11434 (version 0.12.8)"
time=2025-11-01T10:28:29.534+05:30 level=INFO source=routes.go:1618 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

System details:

OS: Windows 11 (PowerShell environment)

Ollama version: 0.12.8

Installed models: llama3.2:latest, gemma3:4b, phi3:mini

System memory: 11.7 GiB total / 1.2 GiB available

Tested in both GPU and CPU-only configurations

Troubleshooting steps already performed:

Restarted Ollama service and system

Cleared cache and re-pulled models

Verified ports and memory allocation

Switched between GPU and CPU modes

Despite these steps, the runner process remains unstable and occasionally fails without any visible logs or output.

Requesting guidance on possible configuration adjustments, additional debug parameters, or diagnostic utilities to trace this behavior further.

[server-1.log](https://github.com/user-attachments/files/23323309/server-1.log)
[server-2.log](https://github.com/user-attachments/files/23323310/server-2.log)
[server-3.log](https://github.com/user-attachments/files/23323307/server-3.log)
[server-4.log](https://github.com/user-attachments/files/23323308/server-4.log)

OS

Windows

GPU

Intel

CPU

Intel

Ollama version

0.12.9

GiteaMirror added the
bug
label 2025-11-12 14:46:34 -06:00

@rick-github commented on GitHub (Nov 4, 2025):

`OLLAMA_NO_GPU` is not an Ollama configuration variable, so it has no effect. But the logs show Ollama never successfully detects a GPU, so CPU is always used. The logs also don't show a model load or runner crash, so there's little information to go on. If you set `OLLAMA_DEBUG=2` and post the resulting logs it will be easier to make progress.
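
A minimal PowerShell sketch of that debug-capture flow (quit the tray app first; the log file path here is arbitrary, and `Tee-Object` simply mirrors the server output to a file):

```powershell
# Window 1: start the server with verbose logging, mirroring output to a file
$env:OLLAMA_DEBUG = "2"
ollama serve 2>&1 | Tee-Object -FilePath "$env:USERPROFILE\ollama-debug.log"

# Window 2 (a separate PowerShell session): reproduce the failure
ollama run llama3.2 "Hello"
```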


@dhiltgen commented on GitHub (Nov 4, 2025):

12G of system memory, with only 2.7G available isn't going to be able to load very many models.

Intel GPUs are not officially supported yet, but Vulkan support is coming soon which will enable many Intel GPUs. However if your GPU is an iGPU, it may struggle to load models with so little available memory.
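
As a rough sanity check on the memory figures being discussed, something like this PowerShell snippet reports total and free physical memory (Win32_OperatingSystem returns KiB, converted here to GiB):

```powershell
# Total and free physical memory in GiB (Win32_OperatingSystem reports KiB)
Get-CimInstance Win32_OperatingSystem |
  Select-Object @{n='TotalGiB'; e={[math]::Round($_.TotalVisibleMemorySize / 1MB, 1)}},
                @{n='FreeGiB';  e={[math]::Round($_.FreePhysicalMemory / 1MB, 1)}}
```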


@Nantris commented on GitHub (Nov 9, 2025):

Upgraded from 0.12.3 to 0.12.10 on Windows 11 and Ollama is entirely unusable now with the same error, Error: 500 Internal Server Error: llama runner process has terminated: exit status 2

Nvidia GPU and plenty of RAM - everything worked fine on 0.12.3 but I was getting empty responses from granite4:7b-a1b-h intermittently so I upgraded and now I can't run any model.

I noticed the program is no longer ollama.exe but instead ollama app.exe and I noticed it makes an ollama app.exe folder in AppData. I wonder if any of this (the file extension in the folder name or the space in the executable name) are causing issues.


@Nantris commented on GitHub (Nov 9, 2025):

Also tried setting OLLAMA_DEBUG=1 and OLLAMA_DEBUG=2 but nothing prints besides that error, and nothing is logged to any file either.


@Nantris commented on GitHub (Nov 9, 2025):

`0.12.3` is the last version in which I can run `ollama run granite4:7b-a1b-h` without hitting this error.

When I tried running any model in 0.12.4 in the GUI I got:

400 Bad Request: registry.ollama.ai/library/granite4:7b-a1b-h does not support thinking

But in the CLI it's the message from above (Server Error: llama runner process has terminated: exit status 2)


@rick-github commented on GitHub (Nov 9, 2025):

`ollama.exe` is the CLI/server; `ollama app.exe` is the UI. You have to set `OLLAMA_DEBUG` in the [server environment](https://github.com/ollama/ollama/blob/main/docs/faq.mdx#setting-environment-variables-on-windows) for it to have any effect, and then check the [`server.log`](https://docs.ollama.com/troubleshooting) file in `%LOCALAPPDATA%\Ollama`.
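
A sketch of that workflow in PowerShell, assuming the default per-user install (the variable has to be set in the user environment so the tray app's server inherits it after a restart):

```powershell
# Set OLLAMA_DEBUG for the current user, then quit Ollama from the tray and restart it
[Environment]::SetEnvironmentVariable("OLLAMA_DEBUG", "2", "User")

# Follow the server log while reproducing the failure
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100 -Wait
```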


@Nantris commented on GitHub (Nov 9, 2025):

I didn't see `ollama.exe` running anymore when I used `ollama run` after `0.12.3`. I doubt that's the issue, but maybe something to look into. I just spent 30 minutes installing various versions, so I'm not inclined to do any more bisecting now that I'm back on `0.12.3` and it runs (albeit maybe with tool-calling bugs).


@Nantris commented on GitHub (Nov 9, 2025):

Oh, and to clarify the above in case it's unclear: I was using the CLI/server, so I can confirm that nothing gets logged whatsoever (at least in `0.12.10`; I didn't test any older version except `0.12.3`, which I know works).


@rick-github commented on GitHub (Nov 9, 2025):

Stop the ollama server by clicking on the systray icon and selecting "Quit Ollama". Open a CMD window and run the following:

```console
C:> set OLLAMA_DEBUG=2
C:> ollama serve
```

Then open a second CMD window and run:

```console
C:> ollama run granite4:7b-a1b-h
```

What's the output in the first CMD window?


@Nantris commented on GitHub (Nov 9, 2025):

Thank you for your replies @rick-github. I did do exactly that. Respectfully, I think you're underestimating my technical proficiency. The output is as previously stated:

Server Error: llama runner process has terminated: exit status 2

Unfortunately, because it logs nothing anywhere, that's all I can offer. There is nothing in the Windows Event Logs either. I installed with `OllamaSetup.exe` and exited Windows Terminal between each new install.


@rick-github commented on GitHub (Nov 9, 2025):

What command do you run that displays Server Error: llama runner process has terminated: exit status 2?


@Nantris commented on GitHub (Nov 9, 2025):

ollama run granite4:7b-a1b-h

Substitute any other model and the error is the same.


@rick-github commented on GitHub (Nov 9, 2025):

`ollama run granite4:7b-a1b-h` cannot emit that message without connecting to a server. In the first CMD window in my [advice above](https://github.com/ollama/ollama/issues/12940#issuecomment-3507418893), you should either have a failed server start or a bunch of log lines. What is the content of the first CMD window?


@Nantris commented on GitHub (Nov 9, 2025):

Thanks again for your reply and I apologize as I misread your message.

So yes, that produces logs, and it also resolves the problem. It seems that in the past it was never necessary to run `ollama serve`; using `ollama run` would open the app in the system tray automatically. But I don't see anything in the release notes for `0.12.4` that suggests that's expected.


@rick-github commented on GitHub (Nov 9, 2025):

The purpose of starting the server from the CMD window was to increase the visibility of the logs to determine the cause of `Server Error: llama runner process has terminated: exit status 2`. If this error is no longer occurring, it seems it was a transient issue. If it re-occurs, update this issue (or create a new issue) with the [server log](https://docs.ollama.com/troubleshooting).


@TigerGod commented on GitHub (Nov 9, 2025):

I hope there won't be more updates like this; the previous version was working just fine. The update has had far too much impact...


@Nantris commented on GitHub (Nov 9, 2025):

@rick-github I feel you're overlooking the change in behavior, and perhaps I was not clear enough about it.

ollama run [model] "just works" in 0.12.3. In 0.12.4 and beyond it produces the error unless you run ollama serve first. That seems to be the cause for this issue existing at all. It's definitely not transient and I didn't upload the log because the issue IS that ollama serve now needs to be run first, but when it is, it runs fine.

If this is intended, it should be documented.


@rick-github commented on GitHub (Nov 9, 2025):

The behaviour hasn't changed. There was a [bug](https://github.com/ollama/ollama/issues/12699) in the 0.12.4 to 0.12.9 range that caused model loading to stall; perhaps that's what you experienced. If `ollama run` (or `ollama list`, if you want to avoid a load stall) is run in a terminal window when the server is not running, the server will be started.

```console
C:\Users\bill>ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.12.9

C:\Users\bill>ollama list
NAME            ID              SIZE      MODIFIED
qwen2.5:0.5b    a8b0c5157701    397 MB    2 hours ago

C:\Users\bill>ollama -v
ollama version is 0.12.9
```

If, in 0.12.4 and beyond, you do not manually start the ollama server by running ollama serve in a command window, and you run ollama run granite4:7b-a1b-h and get an Error: llama runner process has terminated: exit status 2 message, then there must be a server running, either started as part of the Startup apps or by the autostart triggered by the run command. In that case, the server log will contain details about the runner crash.
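
Before concluding that the autostart path is broken, a quick PowerShell sketch to check whether a server is already answering on the default port and which Ollama processes are running (`/api/version` is the public version endpoint):

```powershell
# Is a server already answering on the default port?
Invoke-RestMethod http://127.0.0.1:11434/api/version

# Which Ollama processes exist (tray app, server, runner)?
Get-Process -Name "ollama*" -ErrorAction SilentlyContinue |
  Select-Object Id, ProcessName, StartTime
```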


@Nantris commented on GitHub (Nov 10, 2025):

I'm in 0.12.10 now and whether it was intentional or not, I can assure you the behavior changed on Windows in 0.12.4.

From your instructions, it sounds like the old behavior was unexpected but I don't know for sure. Was ollama run ever supposed to work without separately starting the server first? Because it did.

I have run `ollama run [model]` hundreds of times and it just works as stated, but as of `0.12.4` it no longer works and errors as stated. It immediately starts working if you run `ollama serve` first and separately, as you advised. If you do not, what happens instead is that `ollama app.exe` starts along with two `ollama.exe` instances, and you get `Error: llama runner process has terminated: exit status 2`.

If you use the GUI, which it starts when you run `ollama run [model]`, it errors there with `500 Internal Server Error: llama runner process has terminated: exit status 2`. This also happens if you run it from the Start menu. (The first time I installed `0.12.10` the GUI app was not starting, which may make some of my earlier reports confusing to reconcile.)

As far as I can tell, the GUI app no longer ever works, but the CLI interface works fine if you run ollama serve. That workaround doesn't work for the GUI app because it seems to end any ollama serve that's running, and trying to run it after the GUI app, sensibly, yields: Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
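
When that bind error appears, this PowerShell sketch shows which process is actually holding 127.0.0.1:11434 (the PID is whatever `Get-NetTCPConnection` reports, not a fixed value):

```powershell
# Identify the process that owns the listener on port 11434
Get-NetTCPConnection -LocalPort 11434 -State Listen |
  ForEach-Object { Get-Process -Id $_.OwningProcess } |
  Select-Object Id, ProcessName, Path
```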

Please let me know if there's any information you'd like me to try, investigate, or share to try to give you a more complete insight into this. I have it working for my development now and don't use the GUI, so it's fine with me this new way it works, but it is definitely new.

I apologize for the lengthy report but wanted to be complete.


@rick-github commented on GitHub (Nov 10, 2025):

> From your instructions, it sounds like the old behavior was unexpected but I don't know for sure. Was `ollama run` ever supposed to work without separately starting the server first? Because it did.

The code that starts the server if it's missing was committed in early 2024 and wasn't modified during 0.12.*. AFAIK there have been no other reports of this failing and I am unable to reproduce it, so it seems to be specific to your installation, in which case I propose moving the discussion of this to a [new issue](https://github.com/ollama/ollama/issues/13037).

> If you do not [manually start the server], what happens instead is that `ollama app.exe` starts along with two `ollama.exe` instances, and you get `Error: llama runner process has terminated: exit status 2`

The reason for the termination of the runner process will be in the server log. If there is no information in the log it could be another problem with your installation. What's the output of dir %LOCALAPPDATA%\Ollama?


@Apil120 commented on GitHub (Nov 11, 2025):

Have you tried following the steps mentioned by @rick-github in [this issue](https://github.com/ollama/ollama/issues/13037)? I used `taskkill` to kill the Ollama process and ran `ollama list`, which seems to have fixed the issue for me.
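
A PowerShell sketch of that reset (the commenter used `taskkill`; `Stop-Process` does the same job, and the wildcard catches both `ollama.exe` and `ollama app.exe` on a default install):

```powershell
# Kill any lingering Ollama processes (tray app, server, runners)...
Get-Process -Name "ollama*" -ErrorAction SilentlyContinue | Stop-Process -Force

# ...then let the CLI start a fresh server on demand
ollama list
```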


@YonTracks commented on GitHub (Nov 11, 2025):

Did you uninstall the old version before installing the new one? (The old Ollama was more forgiving about this.) Updating seems to be an issue on Windows (I think it's related to the app ID or something); sometimes you end up with two Ollamas, especially if the version name was modified like mine, e.g. `version 0.12.10-yontracks`. Good luck. I'm a bad communicator, but check that. Cheers.


@YonTracks commented on GitHub (Nov 11, 2025):

I modified the .iss installer script to do that (check for previous installs, remove them, and update accordingly). Cheers, good luck.

Here's the actual app error when using `ollama run` without first running Ollama via the app. `ollama serve` works fine (console load, with streaming logs in the console, etc.), and via the app it works fine, but the UI loads.

For clarity: this is with no Ollama processes running at all.

I run `ollama run llama3.1`.

There is no server.log, as the server exits rather silently; the error messages are tricky anyway, etc.

app.log:

```
time=2025-11-11T14:18:04.917+10:00 level=INFO source=app_windows.go:273 msg="starting Ollama"
time=2025-11-11T14:18:04.918+10:00 level=INFO source=app.go:237 msg="initialized tools registry" tool_count=0
time=2025-11-11T14:18:04.930+10:00 level=INFO source=app.go:252 msg="starting ollama server"
time=2025-11-11T14:18:04.930+10:00 level=INFO source=ui.go:138 msg="configuring ollama proxy" target=http://127.0.0.1:11434
time=2025-11-11T14:18:05.387+10:00 level=INFO source=app.go:281 msg="starting ui server" port=61420
time=2025-11-11T14:18:05.387+10:00 level=DEBUG source=app.go:283 msg="starting ui server on port" port=61420
time=2025-11-11T14:18:05.387+10:00 level=DEBUG source=app.go:321 msg="no URL scheme request to handle"
time=2025-11-11T14:18:05.403+10:00 level=DEBUG source=eventloop.go:45 msg="starting event handling loop"
```
It hangs here^, but the Ollama icon is showing in the tray and the event loop has fired; yet no processes show in Task Manager and it times out in the console. The errors vary based on the method/terminal, but I believe it's the same underlying issue.

I tried `ollama run` again here while the icon was still in the tray:

```
time=2025-11-11T14:28:24.974+10:00 level=INFO source=app_windows.go:273 msg="starting Ollama"
time=2025-11-11T14:28:24.974+10:00 level=INFO source=eventloop.go:336 msg="existing instance found, not focusing due to startHidden"
time=2025-11-11T14:28:24.974+10:00 level=INFO source=app_windows.go:79 msg="existing instance found, exiting"
```

If I now quit Ollama via the tray icon and start it normally using the app icon, all is well.

Sorry for the many edits. Good luck.