[GH-ISSUE #12322] [Windows] Remote/Turbo models inaccessible in Desktop v0.11.11, causing "Connection lost" error (persists in v0.12.0) #33947

Closed
opened 2026-04-22 17:07:48 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @Derkida on GitHub (Sep 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12322

Originally assigned to: @BruceMacD on GitHub.

What is the issue?

Describe the bug

After updating the Ollama Desktop application to the latest version 0.11.11, I am no longer able to use remote-only/Turbo models (e.g., gpt-oss:120b). When I select such a model and attempt to start a chat, the UI immediately shows an "Error: Connection lost" message.

The backend logs reveal that the application is incorrectly trying to find the remote model on the local server, resulting in a model '...' not found error and a 500 Internal Server Error.

This functionality was working perfectly in version 0.11.8. Downgrading to 0.11.8 completely resolves the issue, indicating this is a regression.

To Reproduce

  1. Install or update to Ollama Desktop v0.11.11.
  2. Log in to an Ollama Pro account with the Turbo feature enabled.
  3. From the model selection list, choose a remote-only model that requires the Turbo feature (e.g., gpt-oss:120b).
  4. Attempt to start a new chat.
  5. Observe the "Error: Connection lost" message in the user interface.
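The failing request can also be reproduced directly against the Desktop UI server with curl. This is a hypothetical sketch: the UI server listens on a dynamic port (see the "starting ui server" line in the logs), so the port below is taken from the v0.12.0 log in this report and will differ on your machine.

```shell
# Query the capabilities endpoint for a remote-only model directly.
# Substitute the port from your own "starting ui server" log line.
curl -si "http://127.0.0.1:61268/api/v1/model/gpt-oss:120b/capabilities" \
  || echo "request failed (UI server not running on this port)"
```

On affected versions this should return HTTP 500 with the "model not found" error in the app log, matching the UI's "Connection lost" message.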

Expected behavior

The chat should initiate successfully. The request for the remote model should be seamlessly offloaded to the Ollama cloud service for processing without trying to resolve it locally, just as it behaved in version 0.11.8.
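The expected resolution order can be sketched as follows. This is an illustrative model only, not Ollama's actual implementation; the function and structure names are hypothetical.

```python
# Hypothetical sketch of the expected model-resolution order:
# check the local store first, then fall back to the remote/Turbo
# catalog instead of failing with "not found".
def resolve_model(name: str, local_models: set, turbo_models: set) -> str:
    """Return where a request for `name` should be routed."""
    if name in local_models:
        return "local"   # model is present on disk
    if name in turbo_models:
        return "cloud"   # remote-only model: offload to the cloud service
    raise LookupError(f"model '{name}' not found")
```

The logs above suggest that v0.11.11/v0.12.0 skip the cloud fallback and raise immediately, which surfaces as the 500 error and the "Connection lost" message.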

Actual behavior

The UI displays an "Error: Connection lost" message. The application seems to be querying the local API endpoint for the remote model's details, which fails because the model does not exist locally.

Logs

Here are the relevant logs from the application, showing the "model not found" error when trying to access gpt-oss:120b:

time=2025-09-18T01:30:43.746+08:00 level=ERROR source=ui.go:1378 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-18T01:30:43.746+08:00 level=ERROR source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.5767ms request_id=1758130243745176000 version=0.11.11
time=2025-09-18T01:30:47.487+08:00 level=ERROR source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/019958b1-2c82-7a24-885c-2ec78d1e9f34 http.pattern="POST /api/v1/chat/{id}" http.status=500 http.d=825.9µs request_id=1758130247486373000 version=0.11.11

System Information
OS: Windows 11
Ollama Version: 0.11.11
Previous Working Version: 0.11.8

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.11.11

GiteaMirror added the bug label 2026-04-22 17:07:48 -05:00
Author
Owner

@pdevine commented on GitHub (Sep 17, 2025):

I'm having problems reproducing this. I think I have the same setup (Windows 11, Ollama 0.11.11, and an RTX 5090). Just wondering if there is something else about your setup which is unique?

<!-- gh-comment-id:3304383559 -->
Author
Owner

@Derkida commented on GitHub (Sep 17, 2025):

> I'm having problems reproducing this. I think I have the same setup (Windows 11, Ollama 0.11.11, and an RTX 5090). Just wondering if there is something else about your setup which is unique?

I discovered that I have several custom Ollama environment variables set on my system:

Name            Value
OLLAMA_ORIGINS  *
OLLAMA_HOST     0.0.0.0:11434
OLLAMA_MODELS   G:\AI\ollama\Modelfile
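For anyone trying to reproduce with a comparable setup, a generic way to list any OLLAMA_* overrides in the current environment (on Windows cmd the equivalent is `set OLLAMA`, and in PowerShell `Get-ChildItem Env:OLLAMA*`):

```shell
# List OLLAMA_* environment variables; print a fallback message when none are set.
env | grep '^OLLAMA' || echo "no OLLAMA variables set"
```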

<!-- gh-comment-id:3304503261 -->
Author
Owner

@YMG001 commented on GitHub (Sep 18, 2025):

I am also encountering this issue. The log shows the following error:

LOG:
time=2025-09-18T17:27:46.599+08:00 level=ERROR source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=9.7088ms request_id=1758187666589979600 version=0.11.11
time=2025-09-18T17:27:50.620+08:00 level=ERROR source=ui.go:1378 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b

<!-- gh-comment-id:3306471376 -->
Author
Owner

@Derkida commented on GitHub (Sep 19, 2025):

I can confirm that this issue persists in the latest version, 0.12.0. The behavior is identical to what was observed in v0.11.11, where creating a new chat with a proxy model results in an HTTP 500 internal server error.
Here are the complete logs from version 0.12.0 for your reference:

time=2025-09-20T04:12:31.574+08:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\Derkida\AppData\Local\Programs\Ollama version=0.12.0 OS=Windows/10.0.26100
time=2025-09-20T04:12:31.602+08:00 level=INFO source=app.go:232 msg="initialized tools registry" tool_count=0
time=2025-09-20T04:12:31.604+08:00 level=INFO source=app.go:247 msg="starting ollama server"
time=2025-09-20T04:12:32.442+08:00 level=INFO source=app.go:279 msg="starting ui server" port=61268
time=2025-09-20T04:12:33.444+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758312753444550300 version=0.12.0
time=2025-09-20T04:12:33.446+08:00 level=INFO source=server.go:343 msg=Matched "inference compute"="{Library:cuda Variant:v13 Compute:8.9 Driver:13.0 Name:NVIDIA GeForce RTX 4070 Ti VRAM:12.0 GiB}"
time=2025-09-20T04:12:33.448+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=999.4µs request_id=1758312753447051400 version=0.12.0
time=2025-09-20T04:12:33.451+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=5.9989ms request_id=1758312753445550200 version=0.12.0
time=2025-09-20T04:12:33.449+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=3.5001ms request_id=1758312753445550200 version=0.12.0
time=2025-09-20T04:12:33.544+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=1.499ms request_id=1758312753542688700 version=0.12.0
time=2025-09-20T04:12:33.544+08:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:12:33.545+08:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=2.4995ms request_id=1758312753542688700 version=0.12.0
time=2025-09-20T04:12:33.745+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=327.7145ms request_id=1758312753418050100 version=0.12.0
time=2025-09-20T04:12:33.771+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=326.307ms request_id=1758312753445550200 version=0.12.0
time=2025-09-20T04:12:34.590+08:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:12:34.590+08:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.5049ms request_id=1758312754588848700 version=0.12.0
time=2025-09-20T04:12:35.442+08:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s
time=2025-09-20T04:12:36.604+08:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:12:36.604+08:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.6933ms request_id=1758312756602755400 version=0.12.0
time=2025-09-20T04:12:38.244+08:00 level=INFO source=.:0 msg="http: superfluous response.WriteHeader call from github.com/ollama/app/ui.(*statusRecorder).WriteHeader (ui.go:60)"
time=2025-09-20T04:12:38.244+08:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/new http.pattern="POST /api/v1/chat/{id}" http.status=500 http.d=1.0211ms request_id=1758312758243268300 version=0.12.0
time=2025-09-20T04:12:38.250+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=500.6µs request_id=1758312758250315400 version=0.12.0
time=2025-09-20T04:12:38.265+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758312758265938500 version=0.12.0
time=2025-09-20T04:12:38.268+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=2.0017ms request_id=1758312758266437700 version=0.12.0
time=2025-09-20T04:12:38.591+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=325.0594ms request_id=1758312758266938900 version=0.12.0
time=2025-09-20T04:12:40.618+08:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:12:40.618+08:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=2.0242ms request_id=1758312760616671600 version=0.12.0
time=2025-09-20T04:12:42.400+08:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199639b-7be3-7418-aec9-ca9ce8bf9c0a http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.5002ms request_id=1758312762398961200 version=0.12.0

<!-- gh-comment-id:3313695547 -->
Author
Owner

@Derkida commented on GitHub (Sep 19, 2025):

To provide a clear comparison, I'm also adding the full log from v0.11.8, which is the last version where this feature worked correctly.

time=2025-09-20T04:21:09.589+08:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\Derkida\AppData\Local\Programs\Ollama version=0.11.8 OS=Windows/10.0.26100
time=2025-09-20T04:21:09.597+08:00 level=INFO source=app.go:212 msg="initialized tools registry" tool_count=0
time=2025-09-20T04:21:09.599+08:00 level=INFO source=app.go:227 msg="starting ollama server"
time=2025-09-20T04:21:13.025+08:00 level=INFO source=app.go:256 msg="starting ui server" port=57553
time=2025-09-20T04:21:13.851+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758313273851236400 version=0.11.8
time=2025-09-20T04:21:13.853+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=998.4µs request_id=1758313273852900500 version=0.11.8
time=2025-09-20T04:21:13.855+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=4.1676ms request_id=1758313273851236400 version=0.11.8
time=2025-09-20T04:21:13.879+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=688.5µs request_id=1758313273878911800 version=0.11.8
time=2025-09-20T04:21:13.881+08:00 level=ERROR source=ui.go:1215 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:21:13.881+08:00 level=ERROR source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=2.6977ms request_id=1758313273878911800 version=0.11.8
time=2025-09-20T04:21:14.165+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=339.6622ms request_id=1758313273826210500 version=0.11.8
time=2025-09-20T04:21:14.192+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=340.7996ms request_id=1758313273851236400 version=0.11.8
time=2025-09-20T04:21:14.930+08:00 level=ERROR source=ui.go:1215 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:21:14.930+08:00 level=ERROR source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.5014ms request_id=1758313274929192400 version=0.11.8
time=2025-09-20T04:21:16.026+08:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s
time=2025-09-20T04:21:16.416+08:00 level=INFO source=updater.go:127 msg="New update available at https://github.com/ollama/ollama/releases/download/v0.12.0/OllamaSetup.exe"
time=2025-09-20T04:21:16.937+08:00 level=ERROR source=ui.go:1215 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:21:16.937+08:00 level=ERROR source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.5005ms request_id=1758313276935604700 version=0.11.8
time=2025-09-20T04:21:18.905+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=999.5µs request_id=1758313278904065700 version=0.11.8
time=2025-09-20T04:21:18.920+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758313278920079800 version=0.11.8
time=2025-09-20T04:21:18.923+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=3.5024ms request_id=1758313278920079800 version=0.11.8
time=2025-09-20T04:21:19.261+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=340.6233ms request_id=1758313278920581700 version=0.11.8
time=2025-09-20T04:21:20.303+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/new http.pattern="POST /api/v1/chat/{id}" http.status=200 http.d=1.4067238s request_id=1758313278896548700 version=0.11.8
time=2025-09-20T04:21:20.303+08:00 level=INFO source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/019963a3-6db0-785f-ba26-8a6da909df07 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=2.4949ms request_id=1758313280301278300 version=0.11.8
time=2025-09-20T04:21:20.941+08:00 level=ERROR source=ui.go:1215 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-20T04:21:20.941+08:00 level=ERROR source=ui.go:167 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.5013ms request_id=1758313280940211900 version=0.11.8

<!-- gh-comment-id:3313714053 -->
Author
Owner

@BruceMacD commented on GitHub (Sep 19, 2025):

Thanks for the detailed report. I believe I've solved this for the next release (not out yet). I added some network interface checks to give feedback when a user is offline, but on Windows those checks seem to have misfired in your case.

<!-- gh-comment-id:3314166779 -->

@djipih commented on GitHub (Sep 22, 2025):

I am very disappointed to be paying for a Turbo subscription with no guarantee of quality of service.

<!-- gh-comment-id:3317401793 -->

@djipih commented on GitHub (Sep 22, 2025):

Connection error since version 0.11.11:

time=2025-09-22T09:55:05.968+02:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\Admin\AppData\Local\Programs\Ollama version=0.12.0 OS=Windows/10.0.19044
time=2025-09-22T09:55:06.199+02:00 level=INFO source=app.go:232 msg="initialized tools registry" tool_count=0
time=2025-09-22T09:55:06.200+02:00 level=INFO source=app.go:247 msg="starting ollama server"
time=2025-09-22T09:55:06.466+02:00 level=INFO source=app.go:279 msg="starting ui server" port=59457
time=2025-09-22T09:55:09.467+02:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s
time=2025-09-22T09:56:15.623+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758527775623872900 version=0.12.0
time=2025-09-22T09:56:15.628+02:00 level=INFO source=server.go:343 msg=Matched "inference compute"="{Library:cpu Variant: Compute: Driver:0.0 Name: VRAM:31.9 GiB}"
time=2025-09-22T09:56:15.629+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=1.0726ms request_id=1758527775628013100 version=0.12.0
time=2025-09-22T09:56:15.630+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=4.3896ms request_id=1758527775626335600 version=0.12.0
time=2025-09-22T09:56:15.667+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1758527775667599800 version=0.12.0
time=2025-09-22T09:56:15.671+02:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-22T09:56:15.671+02:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=2.0855ms request_id=1758527775669853400 version=0.12.0
time=2025-09-22T09:56:15.701+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=122.0921ms request_id=1758527775578983100 version=0.12.0
time=2025-09-22T09:56:15.730+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=102.7485ms request_id=1758527775628013100 version=0.12.0
time=2025-09-22T09:56:15.746+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=118.2767ms request_id=1758527775627438500 version=0.12.0
time=2025-09-22T09:56:16.693+02:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-22T09:56:16.694+02:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.6448ms request_id=1758527776692675700 version=0.12.0
time=2025-09-22T09:56:18.425+02:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/01995bad-8a88-7242-85a2-9a87347b009a http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=15.4508ms request_id=1758527778410169800 version=0.12.0
time=2025-09-22T09:56:18.710+02:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-22T09:56:18.710+02:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=1.124ms request_id=1758527778709812600 version=0.12.0
time=2025-09-22T09:56:22.729+02:00 level=ERROR source=ui.go:1426 msg="failed to show model details" error="model 'gpt-oss:120b' not found" model=gpt-oss:120b
time=2025-09-22T09:56:22.729+02:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/gpt-oss:120b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=500 http.d=2.0011ms request_id=1758527782727832200 version=0.12.0
<!-- gh-comment-id:3317438281 -->

@scofano commented on GitHub (Sep 23, 2025):

time=2025-09-23T08:08:51.778-03:00 level=INFO source=.:0 msg="http: superfluous response.WriteHeader call from github.com/ollama/app/ui.(*statusRecorder).WriteHeader (ui.go:60)"
time=2025-09-23T08:08:51.778-03:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/new http.pattern="POST /api/v1/chat/{id}" http.status=500 http.d=70.3958ms request_id=1758625731708054000 version=0.12.0
time=2025-09-23T08:08:51.816-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1758625731816064400 version=0.12.0
time=2025-09-23T08:08:51.835-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=17.6517ms request_id=1758625731817389000 version=0.12.0
time=2025-09-23T08:08:51.946-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=129.4278ms request_id=1758625731817389000 version=0.12.0
time=2025-09-23T08:08:52.212-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=452.811ms request_id=1758625731759622300 version=0.12.0
time=2025-09-23T08:12:04.735-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=27.4751ms request_id=1758625924706928500 version=0.12.0
time=2025-09-23T08:12:04.757-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/01997643-1486-794c-abed-dcb76b58473b http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=50.2437ms request_id=1758625924706928500 version=0.12.0
time=2025-09-23T08:12:04.852-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=145.8075ms request_id=1758625924706928500 version=0.12.0
time=2025-09-23T08:12:04.871-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=164.7076ms request_id=1758625924706928500 version=0.12.0
time=2025-09-23T08:12:04.942-03:00 level=INFO source=ui.go:188 msg=site.serveHTTP http.method=GET http.path=/api/v1/models/turbo http.pattern="GET /api/v1/models/turbo" http.status=200 http.d=232.9451ms request_id=1758625924709845800 version=0.12.0
time=2025-09-23T08:12:11.250-03:00 level=INFO source=.:0 msg="http: superfluous response.WriteHeader call from github.com/ollama/app/ui.(*statusRecorder).WriteHeader (ui.go:60)"
time=2025-09-23T08:12:11.250-03:00 level=ERROR source=ui.go:188 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/01997643-1486-794c-abed-dcb76b58473b http.pattern="POST /api/v1/chat/{id}" http.status=500 http.d=11.5338ms request_id=1758625931238850000 version=0.12.0

Windows 11
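
The `superfluous response.WriteHeader` message in this log usually means a handler set a status code twice: once directly, then again via an error path such as `http.Error`. A minimal sketch, assuming a logging wrapper shaped like the `statusRecorder` named in the log (a hypothetical reconstruction, not Ollama's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// statusRecorder wraps a ResponseWriter to capture the status code for
// request logging (hypothetical reconstruction of the type in the log).
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// demo runs a handler that writes a status twice and returns the status
// the client would see plus the last status the wrapper recorded.
func demo() (clientCode, lastRecorded int) {
	handler := func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK) // first status is committed here
		// A later error path writes a second status; on a real server,
		// net/http logs "superfluous response.WriteHeader call" here.
		http.Error(w, "internal error", http.StatusInternalServerError)
	}
	rec := httptest.NewRecorder()
	sr := &statusRecorder{ResponseWriter: rec}
	handler(sr, httptest.NewRequest("GET", "/", nil))
	return rec.Code, sr.status
}

func main() {
	client, recorded := demo()
	fmt.Println(client, recorded) // prints "200 500"
}
```

The second `WriteHeader` is ignored apart from the warning: the client keeps the first committed status, while the wrapper records the later one, so the `http.status=500` in the access log reflects the handler's error path, not necessarily what reached the client.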

<!-- gh-comment-id:3323540522 -->

@BruceMacD commented on GitHub (Sep 23, 2025):

Thanks to everyone for the reports, v0.12.1 is now released. Please let me know if the issue persists, it should be fixed now.

<!-- gh-comment-id:3325544378 -->

@Derkida commented on GitHub (Sep 23, 2025):

> Thanks to everyone for the reports, v0.12.1 is now released. Please let me know if the issue persists, it should be fixed now.

I can confirm that this issue has been resolved in the latest release, v0.12.1.

<!-- gh-comment-id:3325632730 -->

@djipih commented on GitHub (Sep 23, 2025):

Thanks, the issue has been resolved in release v0.12.1.
Note: it is important to delete models during the uninstallation of Ollama v0.12.

<!-- gh-comment-id:3325864279 -->
Reference: github-starred/ollama#33947