[GH-ISSUE #13280] UI glitches in Windows (blank/naked app) #55288

Open · opened 2026-04-29 08:45:36 -05:00 by GiteaMirror (Owner) · 0 comments

Originally created by @Raboo on GitHub (Nov 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13280

Originally assigned to: @hoyyeva on GitHub.

What is the issue?

My Ollama app looks a bit naked (the window renders mostly blank).

Main window
[screenshot attached in original issue]

Settings screen
[screenshot attached in original issue]

Relevant log output

app.log:
time=2025-11-30T11:18:22.955+01:00 level=INFO source=app_windows.go:273 msg="starting Ollama" app=C:\Users\Raboo\AppData\Local\Programs\Ollama version=0.13.0 OS=Windows/10.0.26200
time=2025-11-30T11:18:22.957+01:00 level=INFO source=app.go:237 msg="initialized tools registry" tool_count=0
time=2025-11-30T11:18:22.967+01:00 level=INFO source=app.go:252 msg="starting ollama server"
time=2025-11-30T11:18:22.969+01:00 level=INFO source=ui.go:138 msg="configuring ollama proxy" target=http://127.0.0.1:11434
time=2025-11-30T11:18:23.176+01:00 level=INFO source=app.go:281 msg="starting ui server" port=58553
time=2025-11-30T11:18:26.176+01:00 level=INFO source=updater.go:254 msg="beginning update checker" interval=1h0m0s
time=2025-11-30T23:53:49.264+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=2.5129ms request_id=1764543229261063200 version=0.13.0
time=2025-11-30T23:53:49.271+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/health http.pattern="GET /api/v1/health" http.status=200 http.d=10.7495ms request_id=1764543229261063200 version=0.13.0
time=2025-11-30T23:53:49.276+01:00 level=INFO source=server.go:346 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:7.5 Driver:13.0 Name:CUDA0 VRAM:11.0 GiB}"
time=2025-11-30T23:53:49.277+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=16.4015ms request_id=1764543229261063200 version=0.13.0
time=2025-11-30T23:53:49.277+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=15.449ms request_id=1764543229262015700 version=0.13.0
time=2025-11-30T23:53:49.384+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=156.1911ms request_id=1764543229228530200 version=0.13.0
time=2025-11-30T23:54:15.242+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1764543255242592300 version=0.13.0
time=2025-11-30T23:54:15.243+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/health http.pattern="GET /api/v1/health" http.status=200 http.d=708.8µs request_id=1764543255243097400 version=0.13.0
time=2025-11-30T23:54:15.243+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=708.8µs request_id=1764543255243097400 version=0.13.0
time=2025-11-30T23:54:15.258+01:00 level=INFO source=server.go:346 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:7.5 Driver:13.0 Name:CUDA0 VRAM:11.0 GiB}"
time=2025-11-30T23:54:15.258+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=15.8117ms request_id=1764543255243097400 version=0.13.0
time=2025-11-30T23:54:15.319+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=120.1057ms request_id=1764543255199256600 version=0.13.0
time=2025-11-30T23:59:26.219+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1764543566219838600 version=0.13.0
time=2025-11-30T23:59:26.369+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=149.5177ms request_id=1764543566219838600 version=0.13.0
time=2025-11-30T23:59:26.491+01:00 level=INFO source=ui.go:211 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=121.0897ms request_id=1764543566370242600 version=0.13.0

server.log:
time=2025-11-30T11:18:24.229+01:00 level=INFO source=routes.go:1544 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Raboo\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:false ROCR_VISIBLE_DEVICES:]"
time=2025-11-30T11:18:24.251+01:00 level=INFO source=images.go:522 msg="total blobs: 7"
time=2025-11-30T11:18:24.252+01:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-11-30T11:18:24.255+01:00 level=INFO source=routes.go:1597 msg="Listening on 127.0.0.1:11434 (version 0.13.0)"
time=2025-11-30T11:18:24.258+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2025-11-30T11:18:24.266+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Raboo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 52364"
time=2025-11-30T11:18:26.219+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Raboo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 52374"
time=2025-11-30T11:18:27.327+01:00 level=INFO source=server.go:392 msg="starting runner" cmd="C:\\Users\\Raboo\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 52382"
time=2025-11-30T11:18:27.782+01:00 level=INFO source=runner.go:102 msg="experimental Vulkan support disabled.  To enable, set OLLAMA_VULKAN=1"
time=2025-11-30T11:18:27.783+01:00 level=INFO source=types.go:42 msg="inference compute" id=GPU-36b61623-f5ec-6036-40d9-ceb940a9e5c3 filter_id="" library=CUDA compute=7.5 name=CUDA0 description="NVIDIA GeForce RTX 2080 Ti" libdirs=ollama,cuda_v12 driver=13.0 pci_id=0000:07:00.0 type=discrete total="11.0 GiB" available="10.2 GiB"
time=2025-11-30T11:18:27.783+01:00 level=INFO source=routes.go:1638 msg="entering low vram mode" "total vram"="11.0 GiB" threshold="20.0 GiB"
[GIN] 2025/11/30 - 23:53:49 | 200 |      5.1435ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/30 - 23:53:49 | 200 |     31.3242ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:53:49 | 404 |     14.4282ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/30 - 23:54:15 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/11/30 - 23:54:15 | 200 |      2.2737ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:54:15 | 404 |       1.047ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/11/30 - 23:54:45 | 200 |       1.019ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:55:15 | 200 |      1.5424ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:55:45 | 200 |       1.019ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:56:15 | 200 |      1.5697ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:56:45 | 200 |      1.0353ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:57:15 | 200 |      1.5555ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:57:45 | 200 |      1.0213ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:58:15 | 200 |      1.0153ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:58:45 | 200 |      1.5348ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/11/30 - 23:59:15 | 200 |      1.5702ms |       127.0.0.1 | GET      "/api/tags"
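One detail that stands out in the server log above: each time the UI loads, a POST to /api/show comes back 404, while the surrounding requests succeed. A quick way to isolate those lines (a hedged example; the `server.log` path is an assumption — on Windows the logs typically live under `%LOCALAPPDATA%\Ollama`):

```shell
# Filter the failed /api/show calls out of the server log.
# The file path is an assumption; adjust it to where your server.log lives
# (typically %LOCALAPPDATA%\Ollama on Windows).
grep '404' server.log | grep '/api/show'
```

In this log that filter returns the two 23:53:49 and 23:54:15 POST "/api/show" entries, which line up with the moments the blank UI was loading.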

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.13.0
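A possible next diagnostic step (my suggestion, not part of the original report): the server config dump above shows OLLAMA_DEBUG:INFO, so raising the log level before relaunching the app should capture more detail around the blank render.

```shell
# Raise Ollama's log level to DEBUG before restarting the app/server.
# OLLAMA_DEBUG appears in the "server config" log line above with value INFO;
# setting it to 1 enables debug-level logging.
# On Windows PowerShell the equivalent is:
#   $env:OLLAMA_DEBUG = "1"
export OLLAMA_DEBUG=1
```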

GiteaMirror added the bug label 2026-04-29 08:45:36 -05:00
Reference: github-starred/ollama#55288