[GH-ISSUE #15186] OLLAMA_HOST does not work in system variables. #71782

Closed
opened 2026-05-05 02:29:25 -05:00 by GiteaMirror · 13 comments

Originally created by @RNGMARTIN on GitHub (Apr 1, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15186

Windows automatically launches Ollama after each startup, but currently only the Ollama app starts. In the app, the model in the dialog box remains stuck in a loading state. To use Ollama, I must open a command prompt and run the `ollama serve` command manually. What's causing this issue? This wasn't a problem before.

@rick-github commented on GitHub (Apr 1, 2026):

[Server and app logs](https://docs.ollama.com/troubleshooting) will aid in debugging.
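
For reference, on a Windows install both files usually live under `%LOCALAPPDATA%\Ollama`; a quick way to pull them up (paths per the linked troubleshooting guide):

```
:: Open the Ollama log directory in Explorer
explorer %LOCALAPPDATA%\Ollama
:: Or print the server log directly in the console
type "%LOCALAPPDATA%\Ollama\server.log"
```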

@RNGMARTIN commented on GitHub (Apr 1, 2026):

server.log

```
time=2026-04-01T09:44:43.924+08:00 level=INFO source=routes.go:1742 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:0 OLLAMA_DEBUG:INFO OLLAMA_DEBUG_LOG_REQUESTS:false OLLAMA_EDITOR: OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434/ OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NO_CLOUD:true OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2026-04-01T09:44:43.933+08:00 level=INFO source=routes.go:1744 msg="Ollama cloud disabled: true"
time=2026-04-01T09:44:43.943+08:00 level=INFO source=images.go:477 msg="total blobs: 55"
time=2026-04-01T09:44:43.944+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2026-04-01T09:44:43.948+08:00 level=INFO source=routes.go:1800 msg="Listening on [::]:11434 (version 0.19.0)"
time=2026-04-01T09:44:43.949+08:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-04-01T09:44:43.966+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 61926"
time=2026-04-01T09:44:44.665+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 61942"
time=2026-04-01T09:44:44.891+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 61957"
time=2026-04-01T09:44:45.165+08:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
time=2026-04-01T09:44:45.166+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 61971"
time=2026-04-01T09:44:45.166+08:00 level=INFO source=server.go:432 msg="starting runner" cmd="D:\\Ollama\\ollama.exe runner --ollama-engine --port 61972"
time=2026-04-01T09:44:45.441+08:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-f338af54-25a6-0f7c-17ed-475fde7cd07d filter_id="" library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4090 D" libdirs=ollama,cuda_v13 driver=13.1 pci_id=0000:01:00.0 type=discrete total="24.0 GiB" available="22.5 GiB"
time=2026-04-01T09:44:45.441+08:00 level=INFO source=routes.go:1850 msg="vram-based default context" total_vram="24.0 GiB" default_num_ctx=32768
```

app.log

```
time=2026-04-01T09:34:26.420+08:00 level=INFO source=app_windows.go:282 msg="starting Ollama" app=D:\Ollama version=0.19.0 OS=Windows/10.0.26200
time=2026-04-01T09:34:26.428+08:00 level=INFO source=app.go:239 msg="initialized tools registry" tool_count=0
time=2026-04-01T09:34:26.437+08:00 level=INFO source=app.go:285 msg="starting ui server" port=50462
time=2026-04-01T09:34:26.437+08:00 level=INFO source=app.go:254 msg="starting ollama server"
time=2026-04-01T09:34:29.437+08:00 level=INFO source=updater.go:296 msg="beginning update checker" interval=1h0m0s
time=2026-04-01T09:34:31.153+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/cloud http.pattern="GET /api/v1/cloud" http.status=200 http.d=0s request_id=1775007271153211800 version=0.19.0
time=2026-04-01T09:34:31.154+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=514.5µs request_id=1775007271153725700 version=0.19.0
time=2026-04-01T09:34:31.154+08:00 level=INFO source=server.go:362 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:8.9 Driver:13.1 Name:CUDA0 VRAM:24.0 GiB}"
time=2026-04-01T09:34:31.154+08:00 level=INFO source=server.go:373 msg="Matched default context length" default_num_ctx=32768
time=2026-04-01T09:34:31.154+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=514.5µs request_id=1775007271153725700 version=0.19.0
time=2026-04-01T09:34:36.442+08:00 level=WARN source=app.go:342 msg="ollama server not ready, continuing anyway" error="timeout waiting for Ollama server to be ready"
time=2026-04-01T09:34:38.184+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=518.3µs request_id=1775007278183909700 version=0.19.0
time=2026-04-01T09:34:38.184+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1775007278184428000 version=0.19.0
time=2026-04-01T09:34:39.094+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/019cf447-1d9f-71f9-851c-aee88b4f99cc http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=2.1229ms request_id=1775007279091951900 version=0.19.0
time=2026-04-01T09:34:41.145+08:00 level=WARN source=ui.go:142 msg="ollama server not ready, retrying" attempt=2
time=2026-04-01T09:34:44.976+08:00 level=INFO source=ui.go:160 msg="configuring ollama proxy" target=http://:4090
time=2026-04-01T09:34:46.074+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=1.0797355s request_id=1775007284994998800 version=0.19.0
time=2026-04-01T09:34:57.092+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/019a70f1-ab61-7624-81f0-3853429e28a8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.8117ms request_id=1775007297090952400 version=0.19.0
time=2026-04-01T09:34:57.094+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/019a710e-2d06-70cb-997c-3a48dba6534b http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=14.8691ms request_id=1775007297079579600 version=0.19.0
time=2026-04-01T09:34:57.108+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d2c2-7970-7ed1-b2ad-46e9940f05d4 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=535.2µs request_id=1775007297107834900 version=0.19.0
time=2026-04-01T09:34:57.125+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199c863-1aa1-74ac-8a6c-a0bf88e1261f http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1775007297125151000 version=0.19.0
time=2026-04-01T09:35:00.293+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/019a70f1-ab61-7624-81f0-3853429e28a8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=526.6µs request_id=1775007300292658000 version=0.19.0
time=2026-04-01T09:35:00.325+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d2c2-7970-7ed1-b2ad-46e9940f05d4 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=554.2µs request_id=1775007300325136900 version=0.19.0
time=2026-04-01T09:35:00.374+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199c863-1aa1-74ac-8a6c-a0bf88e1261f http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1775007300374961400 version=0.19.0
time=2026-04-01T09:36:09.662+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1775007369662594900 version=0.19.0
time=2026-04-01T09:36:09.663+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=514.8µs request_id=1775007369662594900 version=0.19.0
time=2026-04-01T09:36:09.663+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/cloud http.pattern="GET /api/v1/cloud" http.status=200 http.d=514.8µs request_id=1775007369662594900 version=0.19.0
time=2026-04-01T09:43:06.045+08:00 level=ERROR source=server.go:201 msg="ollama exited" err="exit status 0x40010004"
time=2026-04-01T09:44:42.574+08:00 level=INFO source=app_windows.go:282 msg="starting Ollama" app=D:\Ollama version=0.19.0 OS=Windows/10.0.26200
time=2026-04-01T09:44:42.581+08:00 level=INFO source=app.go:239 msg="initialized tools registry" tool_count=0
time=2026-04-01T09:44:42.590+08:00 level=INFO source=app.go:285 msg="starting ui server" port=61921
time=2026-04-01T09:44:42.589+08:00 level=INFO source=app.go:254 msg="starting ollama server"
time=2026-04-01T09:44:45.590+08:00 level=INFO source=updater.go:296 msg="beginning update checker" interval=1h0m0s
time=2026-04-01T09:44:52.726+08:00 level=WARN source=app.go:342 msg="ollama server not ready, continuing anyway" error="timeout waiting for Ollama server to be ready"
time=2026-04-01T09:46:10.073+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/cloud http.pattern="GET /api/v1/cloud" http.status=200 http.d=521.5µs request_id=1775007970073276900 version=0.19.0
time=2026-04-01T09:46:10.073+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=521.5µs request_id=1775007970073276900 version=0.19.0
time=2026-04-01T09:46:10.073+08:00 level=INFO source=server.go:362 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:8.9 Driver:13.1 Name:CUDA0 VRAM:24.0 GiB}"
time=2026-04-01T09:46:10.074+08:00 level=INFO source=server.go:373 msg="Matched default context length" default_num_ctx=32768
time=2026-04-01T09:46:10.074+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=1.5627ms request_id=1775007970073276900 version=0.19.0
time=2026-04-01T09:46:11.793+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1775007971793860100 version=0.19.0
time=2026-04-01T09:46:11.796+08:00 level=INFO source=ui.go:242 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=2.5868ms request_id=1775007971793860100 version=0.19.0
time=2026-04-01T09:46:20.230+08:00 level=WARN source=ui.go:142 msg="ollama server not ready, retrying" attempt=2
time=2026-04-01T09:46:31.436+08:00 level=ERROR source=ui.go:154 msg="ollama server not ready after retries" error="timeout waiting for Ollama server to be ready"
```

@RNGMARTIN commented on GitHub (Apr 1, 2026):

@rick-github I have had my Ollama port set to 4090 for the whole year I've used Ollama. The logs indicate that `ollama serve` was still running on port 11434, so the Ollama app failed to locate the server when querying port 4090 per the system variables. Running `ollama serve` manually in a CMD window does start the service on port 4090. What is the issue? I have not modified this system variable for over a year.
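
(For context, this is roughly the setup being described: a persistent user-level variable is set once and is only picked up by processes started afterwards, so anything already running keeps its old environment. The `:4090` value is the one from this report.)

```
:: Persist OLLAMA_HOST for the current user (affects new processes only)
setx OLLAMA_HOST ":4090"
:: Confirm what a fresh CMD session (and anything launched from it) will see
set | findstr OLLAMA
```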

@rick-github commented on GitHub (Apr 2, 2026):

`OLLAMA_HOST` works fine, but it looks like it hasn't been set in the server's environment, since the value is the default:

```
time=2026-04-01T09:44:43.924+08:00 level=INFO source=routes.go:1742 msg="server config" env="map[...  OLLAMA_HOST:http://0.0.0.0:11434/ ...]"
```

Open a CMD shell, run the following, and post the output:

```
set | findstr OLLAMA
```

@RNGMARTIN commented on GitHub (Apr 2, 2026):

> `OLLAMA_HOST` works fine, but it looks like it hasn't been set in the server's environment, since the value is the default:
>
> ```
> time=2026-04-01T09:44:43.924+08:00 level=INFO source=routes.go:1742 msg="server config" env="map[...  OLLAMA_HOST:http://0.0.0.0:11434/ ...]"
> ```
>
> Open a CMD shell, run the following, and post the output:
>
> ```
> set | findstr OLLAMA
> ```

```
set | findstr OLLAMA
OLLAMA_HOST=:4090
OLLAMA_KEEP_ALIVE=-1
OLLAMA_MODELS=D:\Ollama
OLLAMA_ORIGINS=*
```
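
A value of `:4090` leaves the host part empty, which Ollama fills with a local default, so if the variable were being honored the API should answer on port 4090. A quick sanity check (assuming the server is up; `curl` ships with recent Windows):

```
:: Should return a version if OLLAMA_HOST=:4090 is in effect
curl http://127.0.0.1:4090/api/version
:: The server log above shows it actually listening on the default port
curl http://127.0.0.1:11434/api/version
```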

@RNGMARTIN commented on GitHub (Apr 2, 2026):

> `OLLAMA_HOST` works fine, but it looks like it hasn't been set in the server's environment, since the value is the default:
>
> ```
> time=2026-04-01T09:44:43.924+08:00 level=INFO source=routes.go:1742 msg="server config" env="map[...  OLLAMA_HOST:http://0.0.0.0:11434/ ...]"
> ```
>
> Open a CMD shell, run the following, and post the output:
>
> ```
> set | findstr OLLAMA
> ```

I have used these settings for more than a year and it only went wrong yesterday. I did not change anything.

@rick-github commented on GitHub (Apr 2, 2026):

How was the server started? It inherited the `OLLAMA_KEEP_ALIVE` and `OLLAMA_MODELS` variables but not `OLLAMA_HOST`, and it also got `OLLAMA_NO_CLOUD=1`. So however it was started, it had a different environment.
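
A minimal CMD illustration of the inheritance point (the `DEMO_INHERITED` name is made up for the example): a child process sees the parent's variables plus whatever its launcher overrides, which is how a spawned server can end up with extras such as `OLLAMA_NO_CLOUD` while missing `OLLAMA_HOST`:

```
:: A child process inherits variables set in the parent shell
set DEMO_INHERITED=from-parent
cmd /v:on /c "echo !DEMO_INHERITED!"
:: A launcher can also override a value for its child only
cmd /v:on /c "set DEMO_INHERITED=overridden&& echo !DEMO_INHERITED!"
```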

@RNGMARTIN commented on GitHub (Apr 2, 2026):

> How was the server started? It inherited the `OLLAMA_KEEP_ALIVE` and `OLLAMA_MODELS` variables but not `OLLAMA_HOST`, and it also got `OLLAMA_NO_CLOUD=1`. So however it was started, it had a different environment.

I simply downloaded the Windows version of Ollama, which automatically launches upon system startup.

@RNGMARTIN commented on GitHub (Apr 2, 2026):

> How was the server started? It inherited the `OLLAMA_KEEP_ALIVE` and `OLLAMA_MODELS` variables but not `OLLAMA_HOST`, and it also got `OLLAMA_NO_CLOUD=1`. So however it was started, it had a different environment.

There's a small difference in `OLLAMA_MODELS`: my setting is D:\Ollama but the log shows D:\ollama. I think `OLLAMA_MODELS` didn't inherit either.
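
(One caveat on that observation: NTFS paths are case-insensitive, so `D:\Ollama` and `D:\ollama` resolve to the same directory, and the casing difference alone doesn't prove the value came from somewhere else. Easy to confirm:)

```
:: Both commands list the same directory on a default NTFS volume
dir D:\Ollama
dir D:\ollama
```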

@rick-github commented on GitHub (Apr 2, 2026):

It's inheriting from a different environment, wherever `OLLAMA_NO_CLOUD` was set.

@RNGMARTIN commented on GitHub (Apr 2, 2026):

`OLLAMA_NO_CLOUD` was set in the app:

![Image](https://github.com/user-attachments/assets/fd2e53ad-89fd-4777-b3e7-4e4a9f834e8d)
@rick-github commented on GitHub (Apr 2, 2026):

Settings made in the app override environment settings. If you want the ollama server to listen on port 4090, disable "Expose Ollama to the network". That will cause the ollama server to use the value of `OLLAMA_HOST` set in the environment variable.
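
After toggling that setting and restarting the app, one way to confirm the environment value is back in effect (a sketch, using the ports discussed above):

```
:: The server should now answer on the port from OLLAMA_HOST=:4090
curl http://127.0.0.1:4090/api/version
```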

@RNGMARTIN commented on GitHub (Apr 2, 2026):

> Settings made in the app override environment settings. If you want the ollama server to listen on port 4090, disable "Expose Ollama to the network". That will cause the ollama server to use the value of `OLLAMA_HOST` set in the environment variable.

Thanks. The problem is solved.
