[GH-ISSUE #14000] Ollama does not open on Windows 10 22H2 x64, AMD 9950X / NVIDIA RTX 3060 Ti FE / 32 GB of system memory #55663

Closed
opened 2026-04-29 09:33:24 -05:00 by GiteaMirror · 2 comments

Originally created by @jekv2 on GitHub (Jan 31, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14000

Ollama does not open on Windows 10 22H2 x64, AMD 9950X / NVIDIA RTX 3060 Ti FE / 32 GB of system memory.

Logs:

Server log:

time=2026-01-31T15:21:10.009-06:00 level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\Admin\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"

time=2026-01-31T15:21:10.009-06:00 level=INFO source=images.go:473 msg="total blobs: 0"
time=2026-01-31T15:21:10.009-06:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-01-31T15:21:10.010-06:00 level=INFO source=routes.go:1684 msg="Listening on 127.0.0.1:11434 (version 0.15.2)"
time=2026-01-31T15:21:10.011-06:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-31T15:21:10.016-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56582"
time=2026-01-31T15:21:10.157-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56586"
time=2026-01-31T15:21:10.284-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56590"
time=2026-01-31T15:21:10.496-06:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-01-31T15:21:10.496-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56594"
time=2026-01-31T15:21:10.496-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56595"
time=2026-01-31T15:21:10.496-06:00 level=INFO source=server.go:429 msg="starting runner" cmd="C:\Users\Admin\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --port 56596"
time=2026-01-31T15:21:10.646-06:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-31f97312-c981-417d-08bd-00e35abe8dd1 filter_id="" library=CUDA compute=8.6 name=CUDA0 description="NVIDIA GeForce RTX 3060 Ti" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="8.0 GiB" available="7.5 GiB"
time=2026-01-31T15:21:10.646-06:00 level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
[GIN] 2026/01/31 - 15:21:10 | 200 | 0s | 127.0.0.1 | GET "/api/version"
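
The [GIN] line above shows the app's own GET /api/version probe succeeding, so the HTTP API is up. For anyone who wants to verify this independently of the tray UI, a minimal Go sketch against the default 127.0.0.1:11434 bind shown in the config log (the expected JSON body follows the public Ollama API):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// 127.0.0.1:11434 is the default bind reported in the server config log.
	resp, err := http.Get("http://127.0.0.1:11434/api/version")
	if err != nil {
		log.Fatalf("server unreachable: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// A healthy server answers 200 OK with e.g. {"version":"0.15.2"}.
	fmt.Printf("%s %s\n", resp.Status, body)
}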

App Log:

time=2026-01-31T15:21:08.973-06:00 level=INFO source=app_windows.go:270 msg="starting Ollama" app=C:\Users\Admin\AppData\Local\Programs\Ollama version=0.15.2 OS=Windows/10.0.19045

time=2026-01-31T15:21:08.973-06:00 level=INFO source=app.go:237 msg="initialized tools registry" tool_count=0
time=2026-01-31T15:21:08.976-06:00 level=INFO source=app.go:252 msg="starting ollama server"
time=2026-01-31T15:21:08.976-06:00 level=INFO source=app.go:277 msg="starting ui server" port=56482
time=2026-01-31T15:21:11.976-06:00 level=INFO source=updater.go:254 msg="beginning update checker" interval=1h0m0s
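
The app log shows the UI server coming up on an ephemeral port (56482 in this run), so if the window never appears it is worth checking whether anything is actually listening there. A minimal Go sketch; the port must be read from your own current app log, as it changes on every launch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 56482 is taken from the app log above; substitute the value
	// from your own log, since the UI server port is ephemeral.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:56482", 2*time.Second)
	if err != nil {
		fmt.Println("UI server not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("UI server is accepting connections")
}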


@rick-github commented on GitHub (Jan 31, 2026):

Looks like the server is working fine. Do you mean that the UI does not open when you click on the ollama icon?


@jekv2 commented on GitHub (Jan 31, 2026):

> Looks like the server is working fine. Do you mean that the UI does not open when you click on the ollama icon?

Yes, correct. Settings doesn't open, and Open doesn't open either.

I installed Python & Open WebUI, and I am on it now; I downloaded llama3.1 and am able to chat with it, which confirms that the server does in fact work.

Settings/About:
Open WebUI version: v0.7.2 (latest: https://github.com/open-webui/open-webui/releases/tag/v0.7.2)
Ollama version: 0.14.3

I am completely new to this.

As for Ollama and its Settings not opening, I can provide more info if you need it.
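
Since the server itself is healthy, the pulled model can also be confirmed straight from the API rather than through Open WebUI. A minimal Go sketch using the public /api/tags endpoint (the llama3.1 name is just the one this report mentions):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// tagsResponse matches the shape of the Ollama /api/tags reply.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

func main() {
	resp, err := http.Get("http://127.0.0.1:11434/api/tags")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		log.Fatal(err)
	}
	for _, m := range tags.Models {
		fmt.Println(m.Name) // e.g. llama3.1:latest
	}
}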

Reference: github-starred/ollama#55663