[GH-ISSUE #12112] Taskbar Icon won't work on Windows #54562

Closed
opened 2026-04-29 06:20:53 -05:00 by GiteaMirror · 7 comments

Originally created by @git-emil on GitHub (Aug 29, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12112

Originally assigned to: @jmorganca on GitHub.

What is the issue?

Hi!

I’ve been using Ollama on Windows 11 for quite some time. For the past two weeks the
taskbar icon has been acting oddly.

So I completely removed it and reinstalled it.

After running OllamaSetup.exe, I can launch Ollama from the command line without
issue. The installer also places a small icon next to the clock in the taskbar.
Left‑clicking the icon brings up a menu. Clicking “Settings” does nothing, and
subsequent left‑clicks on the icon no longer display the menu at all.

OS

Windows 11 Pro, 24H2

GPU

RX 7800 XT

CPU

AMD Ryzen 5 5600X 6-Core Processor

Ollama version

ollama version is 0.11.8

GiteaMirror added the bug label 2026-04-29 06:20:53 -05:00

@jmorganca commented on GitHub (Aug 29, 2025):

Thanks for reporting. Sorry about this. Looking into it


@BruceMacD commented on GitHub (Aug 29, 2025):

Hi @git-emil, sorry about this. Do you see any errors in the log file at: C:\Users\<Username>\AppData\Local\Ollama\app.log?
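
For anyone who wants to scan that file quickly: a minimal Go sketch that prints only the error lines, assuming the level=ERROR text format visible in the log excerpts later in this thread (the path is the default one above, resolved via the LOCALAPPDATA environment variable):

```go
// errgrep.go - print only the ERROR lines from Ollama's app.log.
// A rough sketch, not an official tool; adjust the path if your
// install location differs.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Default log location for the Windows tray app, as noted above.
	path := os.ExpandEnv(`${LOCALAPPDATA}\Ollama\app.log`)
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Print only the lines tagged with level=ERROR.
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.Contains(sc.Text(), "level=ERROR") {
			fmt.Println(sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```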


@grester commented on GitHub (Aug 29, 2025):

This has also been a problem for me for far longer, but I thought it was just a me-problem because I was running it on Windows Server 2019. I'll try to get logs for you.
Side note: I can access the web UI/app, but it can't list any models even though the CLI works properly. My models folder is the default one (a quick way to check the listing endpoint directly is sketched below).
Also, this seems to be a duplicate of #12050.
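
A minimal Go sketch for that check, assuming the default server address 127.0.0.1:11434 shown in the server.log later in this thread; GET /api/tags is the same model-listing call visible in the GIN log lines:

```go
// listmodels.go - query the local Ollama server's model list directly,
// bypassing the app UI. A rough sketch for troubleshooting.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The same endpoint the app hits (see the GIN lines in server.log).
	resp, err := http.Get("http://127.0.0.1:11434/api/tags")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)   // expect "200 OK" if the server is up
	fmt.Println(string(body))  // JSON list of locally installed models
}
```

If this prints the models while the UI still shows none, the problem is in the app rather than the server.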


@pdevine commented on GitHub (Aug 29, 2025):

Going to close as a dupe


@git-emil commented on GitHub (Aug 30, 2025):

Here's my app.log:
time=2025-08-30T05:31:16.708+02:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\jl\AppData\Local\Programs\Ollama version=0.11.8 OS=Windows/10.0.26100
time=2025-08-30T05:31:16.738+02:00 level=INFO source=app.go:212 msg="initialized tools registry" tool_count=0
time=2025-08-30T05:31:16.742+02:00 level=INFO source=app.go:227 msg="starting ollama server"
time=2025-08-30T05:31:16.962+02:00 level=INFO source=app.go:256 msg="starting ui server" port=49765
time=2025-08-30T05:31:19.962+02:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s


@hoyyeva commented on GitHub (Sep 2, 2025):

> Here's my app.log:
> time=2025-08-30T05:31:16.708+02:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\jl\AppData\Local\Programs\Ollama version=0.11.8 OS=Windows/10.0.26100
> time=2025-08-30T05:31:16.738+02:00 level=INFO source=app.go:212 msg="initialized tools registry" tool_count=0
> time=2025-08-30T05:31:16.742+02:00 level=INFO source=app.go:227 msg="starting ollama server"
> time=2025-08-30T05:31:16.962+02:00 level=INFO source=app.go:256 msg="starting ui server" port=49765
> time=2025-08-30T05:31:19.962+02:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s

Hi @git-emil, is this the end of the log? Were you able to see any errors in it? An error line would start with level=ERROR.


@git-emil commented on GitHub (Sep 5, 2025):

Yes that was all, nothing else in the log.

I've just upgraded to v0.11.10 and started it from the CLI (ollama run gpt-oss:latest).

This is the complete app.log:

time=2025-09-05T09:48:22.954+02:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\emil\AppData\Local\Programs\Ollama version=0.11.10 OS=Windows/10.0.26100
time=2025-09-05T09:48:23.012+02:00 level=INFO source=app.go:212 msg="initialized tools registry" tool_count=0
time=2025-09-05T09:48:23.013+02:00 level=INFO source=app.go:227 msg="starting ollama server"
time=2025-09-05T09:48:23.192+02:00 level=INFO source=app.go:256 msg="starting ui server" port=52919
time=2025-09-05T09:48:26.192+02:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s

And this is the complete server.log:

time=2025-09-05T09:48:24.477+02:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\emil\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-09-05T09:48:24.496+02:00 level=INFO source=images.go:477 msg="total blobs: 5"
time=2025-09-05T09:48:24.496+02:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-09-05T09:48:24.497+02:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.10)"
time=2025-09-05T09:48:24.497+02:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-09-05T09:48:24.497+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-09-05T09:48:24.497+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-09-05T09:48:24.984+02:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=rocm variant="" compute=gfx1101 driver=6.4 name="AMD Radeon RX 7800 XT" total="16.0 GiB" available="15.8 GiB"
time=2025-09-05T09:48:24.984+02:00 level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="16.0 GiB" threshold="20.0 GiB"
[GIN] 2025/09/05 - 09:50:29 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 09:50:29 | 200 | 1.0287ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/09/05 - 09:50:37 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/05 - 09:50:37 | 200 | 87.3583ms | 127.0.0.1 | POST "/api/show"
time=2025-09-05T09:50:38.733+02:00 level=INFO source=sched.go:192 msg="one or more GPUs detected that are unable to accurately report free memory - disabling default concurrency"
time=2025-09-05T09:50:39.533+02:00 level=INFO source=server.go:199 msg="model wants flash attention"
time=2025-09-05T09:50:39.533+02:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-09-05T09:50:39.533+02:00 level=WARN source=server.go:224 msg="kv cache type not supported by model" type=""
time=2025-09-05T09:50:39.541+02:00 level=INFO source=server.go:398 msg="starting runner" cmd="C:\Users\emil\AppData\Local\Programs\Ollama\ollama.exe runner --ollama-engine --model C:\Users\emil\.ollama\models\blobs\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 --port 52939"
time=2025-09-05T09:50:39.575+02:00 level=INFO source=runner.go:1251 msg="starting ollama engine"
time=2025-09-05T09:50:39.594+02:00 level=INFO source=runner.go:1286 msg="Server listening on 127.0.0.1:52939"
time=2025-09-05T09:50:40.085+02:00 level=INFO source=server.go:503 msg="system memory" total="79.9 GiB" free="67.6 GiB" free_swap="67.2 GiB"
time=2025-09-05T09:50:40.667+02:00 level=INFO source=memory.go:36 msg="new model will fit in available VRAM across minimum required GPUs, loading" model=C:\Users\emil\.ollama\models\blobs\sha256-b112e727c6f18875636c56a779790a590d705aec9e1c0eb5a97d51fc2a778583 library=rocm parallel=1 required="13.0 GiB" gpus=1
time=2025-09-05T09:50:41.117+02:00 level=INFO source=server.go:543 msg=offload library=rocm layers.requested=-1 layers.model=25 layers.offload=25 layers.split=[25] memory.available="[15.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="13.0 GiB" memory.required.partial="13.0 GiB" memory.required.kv="204.0 MiB" memory.required.allocations="[13.0 GiB]" memory.weights.total="11.7 GiB" memory.weights.repeating="10.7 GiB" memory.weights.nonrepeating="1.1 GiB" memory.graph.full="118.0 MiB" memory.graph.partial="118.0 MiB"
time=2025-09-05T09:50:41.127+02:00 level=INFO source=runner.go:1170 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:4096 KvCacheType: NumThreads:6 GPULayers:25[ID:0 Layers:25(0..24)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-09-05T09:50:41.225+02:00 level=INFO source=ggml.go:131 msg="" architecture=gptoss file_type=MXFP4 name="" description="" num_tensors=315 num_key_values=30
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7800 XT, gfx1101 (0x1101), VMM: no, Wave Size: 32, ID: 0
load_backend: loaded ROCm backend from C:\Users\emil\AppData\Local\Programs\Ollama\lib\ollama\ggml-hip.dll
load_backend: loaded CPU backend from C:\Users\emil\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-haswell.dll
time=2025-09-05T09:51:05.084+02:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 ROCm.0.NO_VMM=1 ROCm.0.NO_PEER_COPY=1 ROCm.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-09-05T09:51:06.019+02:00 level=INFO source=ggml.go:487 msg="offloading 24 repeating layers to GPU"
time=2025-09-05T09:51:06.019+02:00 level=INFO source=ggml.go:493 msg="offloading output layer to GPU"
time=2025-09-05T09:51:06.019+02:00 level=INFO source=ggml.go:498 msg="offloaded 25/25 layers to GPU"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:310 msg="model weights" device=ROCm0 size="11.8 GiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:315 msg="model weights" device=CPU size="1.1 GiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:321 msg="kv cache" device=ROCm0 size="204.0 MiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:332 msg="compute graph" device=ROCm0 size="117.8 MiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:337 msg="compute graph" device=CPU size="5.6 MiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=backend.go:342 msg="total memory" size="13.2 GiB"
time=2025-09-05T09:51:06.020+02:00 level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-05T09:51:06.020+02:00 level=INFO source=server.go:1250 msg="waiting for llama runner to start responding"
time=2025-09-05T09:51:06.021+02:00 level=INFO source=server.go:1284 msg="waiting for server to become available" status="llm server loading model"
time=2025-09-05T09:51:14.060+02:00 level=INFO source=server.go:1288 msg="llama runner started in 34.52 seconds"
[GIN] 2025/09/05 - 09:51:14 | 200 | 36.0745655s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/09/05 - 09:51:17 | 200 | 0s | 127.0.0.1 | GET "/api/version"

After initial startup, right-clicking the taskbar icon:

  • allows "View logs" to be used several times without any issue
  • clicking "Open Ollama" does nothing (no window opens) and disables the taskbar icon (no context menu appears anymore)
  • clicking "Settings" behaves the same as clicking "Open Ollama"
  • clicking "Quit" behaves the same as clicking "Open Ollama"

In all cases, "ollama app.exe" and "ollama.exe" stay in memory and I can still use Ollama from the CLI.

I haven't changed any hardware, and all of the above used to work with an older version several weeks ago.

Sorry for the long post. I've got no idea where to look :-)

Thanks for reading, emil
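
The pattern described above (one click is swallowed, the menu is dead afterwards, yet both processes stay alive) is what a single-threaded event loop looks like when one menu handler blocks: the loop never returns to pump further clicks. A hypothetical Go sketch of that failure mode, not Ollama's actual tray code:

```go
// trayloop.go - hypothetical single-threaded tray event loop (NOT
// Ollama's actual code). A handler that hangs holds up every click
// queued behind it, so the menu looks dead while the process itself
// keeps running - matching the behaviour reported in this issue.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	for _, click := range []string{"settings", "menu", "quit"} {
		fmt.Printf("%8s click dispatched after %v\n",
			click, time.Since(start).Round(time.Second))
		handle(click)
	}
}

// handle simulates a "Settings" action that blocks for three seconds,
// standing in for one that never returns at all.
func handle(click string) {
	if click == "settings" {
		time.Sleep(3 * time.Second)
	}
}
```

Running the sketch shows "settings" dispatched at 0s and the later clicks only dispatched three seconds later, once the stuck handler returns; a handler that never returns would leave them queued forever.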
