[GH-ISSUE #12594] In Ollama v0.12.5, reasoning models have slower inference speed; the deepseek-r1:8b model has issues with reasoning and output, while deepseek-r1:7b works fine. Is this related to memory allocation? #54871

Closed
opened 2026-04-29 07:44:44 -05:00 by GiteaMirror · 2 comments

Originally created by @minghua-123 on GitHub (Oct 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12594

What is the issue?

Specific details can be found at: https://github.com/ollama/ollama/issues/12570

Hardware environment: i9-13900HX, 32GB RAM, GeForce RTX 4060 (8GB VRAM).
In version 0.11.11, both the 8B and 7B models run normally (see screenshots in the previous issue), with short response times (around 20 seconds) for the same query.

However, in version 0.12.5, the 7B model still runs normally (though responses now take around 80 seconds), while the 8B model either loops indefinitely or is interrupted unexpectedly during inference and output, producing garbled or incoherent responses. Processing time for both models has also increased significantly for the same queries.

Could this be related to memory configuration? How can we fix these two issues (abnormal inference with the 8B model, and drastically reduced reasoning speed in the new version)?
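If memory is the suspect, a few standard Ollama commands and environment variables (the variables all appear in the server.log below) can narrow it down. This is a diagnostic sketch, not a confirmed fix; `set` is the Windows cmd syntax, and the server must be restarted for the env vars to take effect:

```shell
# How much of the loaded model is on GPU vs CPU? A large CPU share
# explains the slowdown on an 8 GB card.
ollama ps

# Per-request token rates, to compare 7B vs 8B directly:
ollama run deepseek-r1:8b --verbose "test prompt"

# Shrink the KV cache before retrying the 8B model. server.log shows
# OLLAMA_CONTEXT_LENGTH:16384, which inflates VRAM use considerably:
set OLLAMA_CONTEXT_LENGTH=4096
# Optionally quantize the KV cache (variable is visible in server.log):
set OLLAMA_KV_CACHE_TYPE=q8_0
```

If `ollama ps` shows the 8B model split across CPU and GPU while the 7B model fits entirely on the GPU, that would account for both the slowdown and, potentially, the unstable output.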


Relevant log output

### app.log

time=2025-10-13T17:30:07.692+08:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\wmh21\AppData\Local\Programs\Ollama version=0.12.5 OS=Windows/10.0.26100
time=2025-10-13T17:30:07.704+08:00 level=INFO source=app.go:232 msg="initialized tools registry" tool_count=0
time=2025-10-13T17:30:07.743+08:00 level=INFO source=app.go:247 msg="starting ollama server"
time=2025-10-13T17:30:08.538+08:00 level=INFO source=app.go:279 msg="starting ui server" port=50590
time=2025-10-13T17:30:11.538+08:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s
time=2025-10-13T17:45:27.148+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=530µs request_id=1760348727148390100 version=0.12.5
time=2025-10-13T17:45:27.149+08:00 level=INFO source=server.go:343 msg=Matched "inference compute"="{Library:CUDA Variant: Compute:8.9 Driver:13.0 Name:CUDA0 VRAM:8.0 GiB}"
time=2025-10-13T17:45:27.149+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/inference-compute http.pattern="GET /api/v1/inference-compute" http.status=200 http.d=574µs request_id=1760348727148920100 version=0.12.5
time=2025-10-13T17:45:27.152+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/health http.pattern="GET /api/v1/health" http.status=200 http.d=1.0799ms request_id=1760348727151095600 version=0.12.5
time=2025-10-13T17:45:27.153+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=2.1098ms request_id=1760348727151627000 version=0.12.5
time=2025-10-13T17:45:27.160+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=11.7697ms request_id=1760348727148920100 version=0.12.5
time=2025-10-13T17:45:27.189+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1760348727189058600 version=0.12.5
time=2025-10-13T17:45:27.281+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/deepseek-r1:8b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=200 http.d=91.0978ms request_id=1760348727190557000 version=0.12.5
time=2025-10-13T17:45:27.866+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=749.345ms request_id=1760348727116887100 version=0.12.5
time=2025-10-13T17:45:27.926+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=737.1443ms request_id=1760348727189058600 version=0.12.5
time=2025-10-13T17:45:31.826+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=285.1706ms request_id=1760348731541600000 version=0.12.5
time=2025-10-13T17:45:32.076+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=277.3624ms request_id=1760348731799309700 version=0.12.5
time=2025-10-13T17:45:32.255+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=281.4039ms request_id=1760348731974192800 version=0.12.5
time=2025-10-13T17:45:32.464+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=274.4614ms request_id=1760348732190187000 version=0.12.5
time=2025-10-13T17:45:32.632+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=273.4965ms request_id=1760348732358521600 version=0.12.5
time=2025-10-13T17:45:33.184+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=299.0121ms request_id=1760348732885927200 version=0.12.5
time=2025-10-13T17:45:38.749+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=9.7688ms request_id=1760348738739471400 version=0.12.5
time=2025-10-13T17:45:39.219+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.5351ms request_id=1760348739218358700 version=0.12.5
time=2025-10-13T17:45:42.688+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=454.722ms request_id=1760348742234210700 version=0.12.5
time=2025-10-13T17:45:42.729+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=252.4721ms request_id=1760348742477020500 version=0.12.5
time=2025-10-13T17:45:42.899+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=244.8661ms request_id=1760348742654894800 version=0.12.5
time=2025-10-13T17:45:43.145+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=251.6406ms request_id=1760348742893470800 version=0.12.5
time=2025-10-13T17:45:43.329+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=250.0324ms request_id=1760348743079914400 version=0.12.5
time=2025-10-13T17:45:44.037+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=247.4626ms request_id=1760348743788584200 version=0.12.5
time=2025-10-13T17:45:45.667+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760348745667077700 version=0.12.5
time=2025-10-13T17:45:45.670+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=528.9µs request_id=1760348745670266400 version=0.12.5
time=2025-10-13T17:45:45.676+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=8.8543ms request_id=1760348745667665400 version=0.12.5
time=2025-10-13T17:45:45.713+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/model/deepseek-r1:7b/capabilities http.pattern="GET /api/v1/model/{model}/capabilities" http.status=200 http.d=39.3892ms request_id=1760348745673856800 version=0.12.5
time=2025-10-13T17:45:45.997+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=324.0568ms request_id=1760348745673352800 version=0.12.5
time=2025-10-13T17:45:46.297+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d13e-7a0c-78d6-9619-768cf5fbfb27 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760348746297713900 version=0.12.5
time=2025-10-13T17:45:46.340+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760348746340238200 version=0.12.5
time=2025-10-13T17:45:46.894+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760348746894326400 version=0.12.5
time=2025-10-13T17:45:47.204+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348747204573000 version=0.12.5
time=2025-10-13T17:45:47.208+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760348747208356400 version=0.12.5
time=2025-10-13T17:45:47.209+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=527.3µs request_id=1760348747208877500 version=0.12.5
time=2025-10-13T17:45:47.214+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=10.0429ms request_id=1760348747204573000 version=0.12.5
time=2025-10-13T17:45:47.216+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348747216168200 version=0.12.5
time=2025-10-13T17:45:47.217+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348747217349900 version=0.12.5
time=2025-10-13T17:45:50.848+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348750848631800 version=0.12.5
time=2025-10-13T17:45:51.096+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=247.4702ms request_id=1760348750848631800 version=0.12.5
time=2025-10-13T17:45:51.562+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=0s request_id=1760348751562336700 version=0.12.5
time=2025-10-13T17:45:51.569+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348751569478700 version=0.12.5
time=2025-10-13T17:45:51.569+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/health http.pattern="GET /api/v1/health" http.status=200 http.d=519.8µs request_id=1760348751569478700 version=0.12.5
time=2025-10-13T17:45:51.578+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=9.0046ms request_id=1760348751569478700 version=0.12.5
time=2025-10-13T17:45:52.248+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=503.6µs request_id=1760348752247810100 version=0.12.5
time=2025-10-13T17:45:54.816+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=270.7203ms request_id=1760348754545986100 version=0.12.5
time=2025-10-13T17:45:55.012+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=245.9362ms request_id=1760348754766923200 version=0.12.5
time=2025-10-13T17:45:55.196+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=244.5866ms request_id=1760348754952220300 version=0.12.5
time=2025-10-13T17:45:55.376+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=242.033ms request_id=1760348755134720700 version=0.12.5
time=2025-10-13T17:45:56.472+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760348756472285600 version=0.12.5
time=2025-10-13T17:45:56.479+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=1.0051ms request_id=1760348756478256000 version=0.12.5
time=2025-10-13T17:45:56.486+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=8.3598ms request_id=1760348756478256000 version=0.12.5
time=2025-10-13T17:45:57.962+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=528.9µs request_id=1760348757962451400 version=0.12.5
time=2025-10-13T17:45:57.971+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760348757971325200 version=0.12.5
time=2025-10-13T17:45:57.983+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=12.2195ms request_id=1760348757971325200 version=0.12.5
time=2025-10-13T17:47:30.805+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cce7-418a-722d-95b4-f2086af3a863 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=516.3µs request_id=1760348850804530000 version=0.12.5
time=2025-10-13T17:47:30.879+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cce8-f6b2-7a9f-86ab-b36e81453da9 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760348850879458400 version=0.12.5
time=2025-10-13T17:47:50.807+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cce8-f6b2-7a9f-86ab-b36e81453da9 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.0034ms request_id=1760348870806102100 version=0.12.5
time=2025-10-13T17:47:50.969+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cd7d-6fa0-75af-852b-4f285f2fa41c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=11.3404ms request_id=1760348870958192800 version=0.12.5
time=2025-10-13T17:47:51.329+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cce1-aa17-7701-bc69-a8bc08701a20 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.0073ms request_id=1760348871328835200 version=0.12.5
time=2025-10-13T17:47:51.562+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d11d-3902-7918-88d5-a1374cd0c194 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760348871562919600 version=0.12.5
time=2025-10-13T17:47:51.672+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d13e-7a0c-78d6-9619-768cf5fbfb27 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=509.1µs request_id=1760348871672234900 version=0.12.5
time=2025-10-13T17:47:53.101+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cce8-f6b2-7a9f-86ab-b36e81453da9 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=509.2µs request_id=1760348873101106800 version=0.12.5
time=2025-10-13T17:47:53.104+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199cd7d-6fa0-75af-852b-4f285f2fa41c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=10.165ms request_id=1760348873094244400 version=0.12.5
time=2025-10-13T17:48:51.748+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/new http.pattern="POST /api/v1/chat/{id}" http.status=200 http.d=2m53.7917916s request_id=1760348757957132400 version=0.12.5
time=2025-10-13T17:48:51.751+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.0463ms request_id=1760348931750506900 version=0.12.5
time=2025-10-13T17:51:12.443+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349072443560200 version=0.12.5
time=2025-10-13T17:51:13.297+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=855.284ms request_id=1760349072442545600 version=0.12.5
time=2025-10-13T17:51:13.548+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/me http.pattern="GET /api/v1/me" http.status=200 http.d=249.3665ms request_id=1760349073298839200 version=0.12.5
time=2025-10-13T17:51:14.963+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chats http.pattern="GET /api/v1/chats" http.status=200 http.d=511.7µs request_id=1760349074962744400 version=0.12.5
time=2025-10-13T17:51:14.971+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349074971051400 version=0.12.5
time=2025-10-13T17:51:14.972+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/health http.pattern="GET /api/v1/health" http.status=200 http.d=1.0247ms request_id=1760349074971051400 version=0.12.5
time=2025-10-13T17:51:14.981+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=10.1048ms request_id=1760349074971051400 version=0.12.5
time=2025-10-13T17:51:15.236+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349075236792800 version=0.12.5
time=2025-10-13T17:51:15.324+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=354.5565ms request_id=1760349074970044100 version=0.12.5
time=2025-10-13T17:51:40.680+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d11d-3902-7918-88d5-a1374cd0c194 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.1179ms request_id=1760349100679227600 version=0.12.5
time=2025-10-13T17:51:40.700+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d13e-7a0c-78d6-9619-768cf5fbfb27 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.2571ms request_id=1760349100699153200 version=0.12.5
time=2025-10-13T17:51:40.724+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349100724807300 version=0.12.5
time=2025-10-13T17:51:40.837+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=550.9µs request_id=1760349100836780100 version=0.12.5
time=2025-10-13T17:51:41.819+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101819188400 version=0.12.5
time=2025-10-13T17:51:41.822+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=514.4µs request_id=1760349101822275000 version=0.12.5
time=2025-10-13T17:51:41.822+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349101822789400 version=0.12.5
time=2025-10-13T17:51:41.828+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=9.5864ms request_id=1760349101819188400 version=0.12.5
time=2025-10-13T17:51:41.828+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101828774800 version=0.12.5
time=2025-10-13T17:51:41.830+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101830814300 version=0.12.5
time=2025-10-13T17:51:42.145+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=308.8644ms request_id=1760349101836480800 version=0.12.5
time=2025-10-13T17:51:42.258+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349102258190800 version=0.12.5
time=2025-10-13T17:57:32.805+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="POST /api/v1/chat/{id}" http.status=200 http.d=5m44.2060946s request_id=1760349108599778100 version=0.12.5
time=2025-10-13T17:59:21.956+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=3.7882ms request_id=1760349561952545400 version=0.12.5
time=2025-10-13T17:59:22.336+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349562336686600 version=0.12.5
time=2025-10-13T17:59:22.337+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349562337192800 version=0.12.5
time=2025-10-13T17:59:22.340+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349562340514300 version=0.12.5
time=2025-10-13T17:59:22.341+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349562341540100 version=0.12.5
time=2025-10-13T17:59:23.089+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=742.7291ms request_id=1760349562346295800 version=0.12.5
time=2025-10-13T17:59:36.163+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=535.3µs request_id=1760349576162920400 version=0.12.5
time=2025-10-13T17:59:41.064+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d13e-7a0c-78d6-9619-768cf5fbfb27 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.5156ms request_id=1760349581062487800 version=0.12.5
time=2025-10-13T17:59:41.072+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349581072513100 version=0.12.5
time=2025-10-13T17:59:41.130+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=615.7µs request_id=1760349581129865600 version=0.12.5
time=2025-10-13T17:59:41.180+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.0094ms request_id=1760349581179724500 version=0.12.5
time=2025-10-13T17:59:42.891+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349582891632900 version=0.12.5
time=2025-10-13T17:59:42.893+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349582893214800 version=0.12.5
time=2025-10-13T17:59:42.893+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349582893746300 version=0.12.5
time=2025-10-13T17:59:42.902+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=1.0045ms request_id=1760349582901984400 version=0.12.5
time=2025-10-13T17:59:42.903+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349582903493100 version=0.12.5
time=2025-10-13T17:59:43.205+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=299.3917ms request_id=1760349582906399700 version=0.12.5
time=2025-10-13T17:59:43.554+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=508.4µs request_id=1760349583553634400 version=0.12.5
time=2025-10-13T17:59:59.546+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349599546871200 version=0.12.5



### server.log

time=2025-10-13T17:30:08.935+08:00 level=INFO source=routes.go:1481 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:16384 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-13T17:30:09.011+08:00 level=INFO source=images.go:522 msg="total blobs: 67"
time=2025-10-13T17:30:09.024+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-13T17:30:09.033+08:00 level=INFO source=routes.go:1534 msg="Listening on [::]:11434 (version 0.12.5)"
time=2025-10-13T17:30:09.036+08:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-13T17:30:11.966+08:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4060 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=01:00.0 type=discrete total="8.0 GiB" available="4.7 GiB"
time=2025-10-13T17:30:11.966+08:00 level=INFO source=routes.go:1575 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB"
[GIN] 2025/10/13 - 17:30:11 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/10/13 - 17:30:11 | 200 |      7.9005ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:30:25 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-10-13T17:30:27.217+08:00 level=INFO source=download.go:177 msg="downloading 96c415656d37 in 16 292 MB part(s)"
time=2025-10-13T17:41:09.495+08:00 level=INFO source=download.go:374 msg="96c415656d37 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
[GIN] 2025/10/13 - 17:44:32 | 200 |         14m6s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/10/13 - 17:44:33 | 200 |            0s |       127.0.0.1 | HEAD     "/"
time=2025-10-13T17:44:34.857+08:00 level=INFO source=download.go:177 msg="downloading 96c415656d37 in 16 292 MB part(s)"
[GIN] 2025/10/13 - 17:45:27 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/13 - 17:45:27 | 200 |     10.6676ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:27 | 200 |     89.6962ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-13T17:45:27.533+08:00 level=INFO source=download.go:177 msg="downloading f4d24e9138dd in 1 148 B part(s)"
time=2025-10-13T17:45:29.159+08:00 level=INFO source=download.go:177 msg="downloading 40fb844194b2 in 1 487 B part(s)"
[GIN] 2025/10/13 - 17:45:31 | 200 |      7.4451ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:31 | 200 |      8.8144ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:31 | 200 |      8.2771ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:32 | 200 |      9.5236ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:32 | 200 |      8.2524ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:32 | 200 |      8.6213ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:34 | 200 |          1m0s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2025/10/13 - 17:45:38 | 200 |      9.7688ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:42 | 200 |     10.1391ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:42 | 200 |     11.4119ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:42 | 200 |      8.4289ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:42 | 200 |      9.9602ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:43 | 200 |      13.602ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:43 | 200 |      7.0894ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:45 | 200 |      8.3381ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:45 | 200 |     38.8801ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/13 - 17:45:47 | 200 |     10.0429ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:51 | 200 |       519.8µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/13 - 17:45:51 | 200 |      7.9584ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:54 | 200 |      6.7861ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:54 | 200 |      9.5116ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:54 | 200 |       8.581ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:55 | 200 |      8.5321ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:56 | 200 |      8.3598ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:57 | 200 |     12.2195ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:45:58 | 200 |     47.0356ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/13 - 17:45:58 | 200 |      41.639ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-10-13T17:45:58.486+08:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-13T17:45:58.488+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\wmh21\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\.ollama\\models\\blobs\\sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --port 59732"
time=2025-10-13T17:45:58.505+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=29 requested=-1
time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-10-13T17:45:58.505+08:00 level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="16.4 GiB" free_swap="4.6 GiB"
time=2025-10-13T17:45:58.505+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="4.3 GiB" free="4.7 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-13T17:45:58.559+08:00 level=INFO source=runner.go:1316 msg="starting ollama engine"
time=2025-10-13T17:45:58.571+08:00 level=INFO source=runner.go:1352 msg="Server listening on 127.0.0.1:59732"
time=2025-10-13T17:45:58.582+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:29[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:45:58.601+08:00 level=INFO source=ggml.go:133 msg="" architecture=qwen2 file_type=Q4_K_M name="DeepSeek R1 Distill Qwen 7B" description="" num_tensors=339 num_key_values=27
load_backend: loaded CPU backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-14b161fd-5142-a0b8-22c0-13cca7537e94
load_backend: loaded CUDA backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-10-13T17:45:58.694+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-13T17:45:59.030+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:45:59.054+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:477 msg="offloading 25 repeating layers to GPU"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:488 msg="offloaded 25/29 layers to GPU"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="3.2 GiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="1.1 GiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="800.0 MiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:222 msg="kv cache" device=CPU size="96.0 MiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="155.0 MiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="132.0 MiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:238 msg="total memory" size="5.5 GiB"
time=2025-10-13T17:46:03.016+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-10-13T17:46:03.016+08:00 level=INFO source=server.go:1271 msg="waiting for llama runner to start responding"
time=2025-10-13T17:46:03.017+08:00 level=INFO source=server.go:1305 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-13T17:46:04.521+08:00 level=INFO source=server.go:1309 msg="llama runner started in 6.03 seconds"
[GIN] 2025/10/13 - 17:48:51 | 200 |         2m53s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/10/13 - 17:51:14 | 200 |            0s |       127.0.0.1 | GET      "/api/version"
[GIN] 2025/10/13 - 17:51:14 | 200 |       8.579ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:51:41 | 200 |      9.0399ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2025/10/13 - 17:51:48 | 200 |     42.7984ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2025/10/13 - 17:51:48 | 200 |     39.6073ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-13T17:51:48.781+08:00 level=INFO source=sched.go:544 msg="updated VRAM based on existing loaded models" gpu=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA total="8.0 GiB" available="0 B"
time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-10-13T17:51:48.822+08:00 level=INFO source=server.go:216 msg="enabling flash attention"
time=2025-10-13T17:51:48.831+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\wmh21\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\.ollama\\models\\blobs\\sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d --port 55097"
time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=37 requested=-1
time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="18.4 GiB" free_swap="1.7 GiB"
time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="0 B" free="0 B" minimum="457.0 MiB" overhead="0 B"
time=2025-10-13T17:51:48.899+08:00 level=INFO source=runner.go:1316 msg="starting ollama engine"
time=2025-10-13T17:51:48.912+08:00 level=INFO source=runner.go:1352 msg="Server listening on 127.0.0.1:55097"
time=2025-10-13T17:51:48.922+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:51:48.942+08:00 level=INFO source=ggml.go:133 msg="" architecture=qwen3 file_type=Q4_K_M name="DeepSeek R1 0528 Qwen3 8B" description="" num_tensors=399 num_key_values=33
load_backend: loaded CPU backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-14b161fd-5142-a0b8-22c0-13cca7537e94
load_backend: loaded CUDA backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2025-10-13T17:51:49.063+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-10-13T17:51:49.405+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="4.5 GiB"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="333.8 MiB"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="2.2 GiB"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="126.0 MiB"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:238 msg="total memory" size="7.2 GiB"
time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=37 requested=-1
time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1
time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32
time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="19.7 GiB" free_swap="7.3 GiB"
time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="6.5 GiB" free="6.9 GiB" minimum="457.0 MiB" overhead="0 B"
time=2025-10-13T17:51:50.209+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:51:50.237+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:477 msg="offloading 36 repeating layers to GPU"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:488 msg="offloaded 36/37 layers to GPU"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="4.1 GiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="820.7 MiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="2.2 GiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="126.0 MiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="8.0 MiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:238 msg="total memory" size="7.2 GiB"
time=2025-10-13T17:51:52.142+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1
time=2025-10-13T17:51:52.142+08:00 level=INFO source=server.go:1271 msg="waiting for llama runner to start responding"
time=2025-10-13T17:51:52.143+08:00 level=INFO source=server.go:1305 msg="waiting for server to become available" status="llm server loading model"
time=2025-10-13T17:51:54.399+08:00 level=INFO source=server.go:1309 msg="llama runner started in 5.57 seconds"
[GIN] 2025/10/13 - 17:57:32 | 200 |         5m44s |       127.0.0.1 | POST     "/api/chat"
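For what it's worth, the "kv cache" sizes in the log line up with a plain f16 cache at the `KvSize:16384` shown in the load requests, so the 8B model's extra ~1.4 GiB of KV cache on top of 4.1 GiB of weights goes a long way toward explaining why it no longer fits in 8 GB of VRAM. A quick arithmetic check — note the layer and KV-head counts below are the published Qwen2-7B / Qwen3-8B geometries, assumed here rather than read from this log:

```python
# Sanity-check the "kv cache" sizes reported in the log against an f16
# KV cache at the KvSize:16384 shown in the load requests.
# Layer / KV-head / head-dim values are the published model geometries
# (Qwen2-7B distill, Qwen3-8B), not read from this log.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   kv_size: int, bytes_per_elem: int = 2) -> int:
    """Keys + values: 2 tensors per layer, f16 (2 bytes) by default."""
    return 2 * n_layers * n_kv_heads * head_dim * kv_size * bytes_per_elem

GiB = 1024 ** 3
MiB = 1024 ** 2

# deepseek-r1:7b (qwen2 arch): 28 layers, 4 KV heads, head_dim 128
b7 = kv_cache_bytes(28, 4, 128, 16384)
print(f"7B KV cache: {b7 / MiB:.0f} MiB")  # 896 MiB = 800 MiB (CUDA0) + 96 MiB (CPU) in the log

# deepseek-r1:8b (qwen3 arch): 36 layers, 8 KV heads, head_dim 128
b8 = kv_cache_bytes(36, 8, 128, 16384)
print(f"8B KV cache: {b8 / GiB:.2f} GiB")  # 2.25 GiB, matching the "2.2 GiB" on CUDA0 in the log
```

If this arithmetic holds, lowering the context length (`num_ctx`) or enabling KV-cache quantization would shrink exactly the allocation that differs between the two models, which may be worth testing before assuming a regression elsewhere.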

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.5

http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349100724807300 version=0.12.5 time=2025-10-13T17:51:40.837+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=550.9µs request_id=1760349100836780100 version=0.12.5 time=2025-10-13T17:51:41.819+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101819188400 version=0.12.5 time=2025-10-13T17:51:41.822+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=514.4µs request_id=1760349101822275000 version=0.12.5 time=2025-10-13T17:51:41.822+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349101822789400 version=0.12.5 time=2025-10-13T17:51:41.828+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/models http.pattern="GET /api/v1/models" http.status=200 http.d=9.5864ms request_id=1760349101819188400 version=0.12.5 time=2025-10-13T17:51:41.828+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101828774800 version=0.12.5 time=2025-10-13T17:51:41.830+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349101830814300 version=0.12.5 time=2025-10-13T17:51:42.145+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream 
http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=308.8644ms request_id=1760349101836480800 version=0.12.5 time=2025-10-13T17:51:42.258+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349102258190800 version=0.12.5 time=2025-10-13T17:57:32.805+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="POST /api/v1/chat/{id}" http.status=200 http.d=5m44.2060946s request_id=1760349108599778100 version=0.12.5 time=2025-10-13T17:59:21.956+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=3.7882ms request_id=1760349561952545400 version=0.12.5 time=2025-10-13T17:59:22.336+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349562336686600 version=0.12.5 time=2025-10-13T17:59:22.337+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349562337192800 version=0.12.5 time=2025-10-13T17:59:22.340+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349562340514300 version=0.12.5 time=2025-10-13T17:59:22.341+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349562341540100 version=0.12.5 time=2025-10-13T17:59:23.089+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST 
http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=742.7291ms request_id=1760349562346295800 version=0.12.5 time=2025-10-13T17:59:36.163+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=535.3µs request_id=1760349576162920400 version=0.12.5 time=2025-10-13T17:59:41.064+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d13e-7a0c-78d6-9619-768cf5fbfb27 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.5156ms request_id=1760349581062487800 version=0.12.5 time=2025-10-13T17:59:41.072+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349581072513100 version=0.12.5 time=2025-10-13T17:59:41.130+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=615.7µs request_id=1760349581129865600 version=0.12.5 time=2025-10-13T17:59:41.180+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=1.0094ms request_id=1760349581179724500 version=0.12.5 time=2025-10-13T17:59:42.891+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcbb-bdb9-7a32-8aec-14ee140c61d2 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349582891632900 version=0.12.5 time=2025-10-13T17:59:42.893+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349582893214800 
version=0.12.5 time=2025-10-13T17:59:42.893+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/settings http.pattern="POST /api/v1/settings" http.status=200 http.d=0s request_id=1760349582893746300 version=0.12.5 time=2025-10-13T17:59:42.902+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=1.0045ms request_id=1760349582901984400 version=0.12.5 time=2025-10-13T17:59:42.903+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/settings http.pattern="GET /api/v1/settings" http.status=200 http.d=0s request_id=1760349582903493100 version=0.12.5 time=2025-10-13T17:59:43.205+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=POST http.path=/api/v1/model/upstream http.pattern="POST /api/v1/model/upstream" http.status=200 http.d=299.3917ms request_id=1760349582906399700 version=0.12.5 time=2025-10-13T17:59:43.554+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199dcf6-5fc5-7205-85df-f7674eb280b8 http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=508.4µs request_id=1760349583553634400 version=0.12.5 time=2025-10-13T17:59:59.546+08:00 level=INFO source=ui.go:171 msg=site.serveHTTP http.method=GET http.path=/api/v1/chat/0199d1f5-704a-7c8e-9c21-5f00ca75267c http.pattern="GET /api/v1/chat/{id}" http.status=200 http.d=0s request_id=1760349599546871200 version=0.12.5

### server.log

time=2025-10-13T17:30:08.935+08:00 level=INFO source=routes.go:1481 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:16384 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s
OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:E:\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:true OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]" time=2025-10-13T17:30:09.011+08:00 level=INFO source=images.go:522 msg="total blobs: 67" time=2025-10-13T17:30:09.024+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0" time=2025-10-13T17:30:09.033+08:00 level=INFO source=routes.go:1534 msg="Listening on [::]:11434 (version 0.12.5)" time=2025-10-13T17:30:09.036+08:00 level=INFO source=runner.go:80 msg="discovering available GPUs..." time=2025-10-13T17:30:11.966+08:00 level=INFO source=types.go:112 msg="inference compute" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA compute=8.9 name=CUDA0 description="NVIDIA GeForce RTX 4060 Laptop GPU" libdirs=ollama,cuda_v13 driver=13.0 pci_id=01:00.0 type=discrete total="8.0 GiB" available="4.7 GiB" time=2025-10-13T17:30:11.966+08:00 level=INFO source=routes.go:1575 msg="entering low vram mode" "total vram"="8.0 GiB" threshold="20.0 GiB" [GIN] 2025/10/13 - 17:30:11 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2025/10/13 - 17:30:11 | 200 | 7.9005ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:30:25 | 200 | 0s | 127.0.0.1 | HEAD "/" time=2025-10-13T17:30:27.217+08:00 level=INFO source=download.go:177 msg="downloading 96c415656d37 in 16 292 MB part(s)" time=2025-10-13T17:41:09.495+08:00 level=INFO source=download.go:374 msg="96c415656d37 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." 
[GIN] 2025/10/13 - 17:44:32 | 200 | 14m6s | 127.0.0.1 | POST "/api/pull" [GIN] 2025/10/13 - 17:44:33 | 200 | 0s | 127.0.0.1 | HEAD "/" time=2025-10-13T17:44:34.857+08:00 level=INFO source=download.go:177 msg="downloading 96c415656d37 in 16 292 MB part(s)" [GIN] 2025/10/13 - 17:45:27 | 200 | 0s | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/13 - 17:45:27 | 200 | 10.6676ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:27 | 200 | 89.6962ms | 127.0.0.1 | POST "/api/show" time=2025-10-13T17:45:27.533+08:00 level=INFO source=download.go:177 msg="downloading f4d24e9138dd in 1 148 B part(s)" time=2025-10-13T17:45:29.159+08:00 level=INFO source=download.go:177 msg="downloading 40fb844194b2 in 1 487 B part(s)" [GIN] 2025/10/13 - 17:45:31 | 200 | 7.4451ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:31 | 200 | 8.8144ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:31 | 200 | 8.2771ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:32 | 200 | 9.5236ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:32 | 200 | 8.2524ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:32 | 200 | 8.6213ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:34 | 200 | 1m0s | 127.0.0.1 | POST "/api/pull" [GIN] 2025/10/13 - 17:45:38 | 200 | 9.7688ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:42 | 200 | 10.1391ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:42 | 200 | 11.4119ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:42 | 200 | 8.4289ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:42 | 200 | 9.9602ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:43 | 200 | 13.602ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:43 | 200 | 7.0894ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:45 | 200 | 8.3381ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:45 | 200 | 38.8801ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/10/13 - 17:45:47 | 200 | 10.0429ms | 127.0.0.1 | GET "/api/tags" 
[GIN] 2025/10/13 - 17:45:51 | 200 | 519.8µs | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/13 - 17:45:51 | 200 | 7.9584ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:54 | 200 | 6.7861ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:54 | 200 | 9.5116ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:54 | 200 | 8.581ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:55 | 200 | 8.5321ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:56 | 200 | 8.3598ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:57 | 200 | 12.2195ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:45:58 | 200 | 47.0356ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/10/13 - 17:45:58 | 200 | 41.639ms | 127.0.0.1 | POST "/api/show" time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1 time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1 time=2025-10-13T17:45:58.486+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32 time=2025-10-13T17:45:58.486+08:00 level=INFO source=server.go:216 msg="enabling flash attention" time=2025-10-13T17:45:58.488+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\wmh21\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\.ollama\\models\\blobs\\sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --port 59732" time=2025-10-13T17:45:58.505+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=29 requested=-1 time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1 time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1 time=2025-10-13T17:45:58.505+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32 time=2025-10-13T17:45:58.505+08:00 
level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="16.4 GiB" free_swap="4.6 GiB" time=2025-10-13T17:45:58.505+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="4.3 GiB" free="4.7 GiB" minimum="457.0 MiB" overhead="0 B" time=2025-10-13T17:45:58.559+08:00 level=INFO source=runner.go:1316 msg="starting ollama engine" time=2025-10-13T17:45:58.571+08:00 level=INFO source=runner.go:1352 msg="Server listening on 127.0.0.1:59732" time=2025-10-13T17:45:58.582+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:29[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:29(0..28)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:45:58.601+08:00 level=INFO source=ggml.go:133 msg="" architecture=qwen2 file_type=Q4_K_M name="DeepSeek R1 Distill Qwen 7B" description="" num_tensors=339 num_key_values=27 load_backend: loaded CPU backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 load_backend: loaded CUDA backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll time=2025-10-13T17:45:58.694+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-10-13T17:45:59.030+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] 
Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:45:59.054+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:46:03.016+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:25[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:25(3..27)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:477 msg="offloading 25 repeating layers to GPU" time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU" time=2025-10-13T17:46:03.016+08:00 level=INFO source=ggml.go:488 msg="offloaded 25/29 layers to GPU" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="3.2 GiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="1.1 GiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="800.0 MiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:222 msg="kv cache" device=CPU size="96.0 MiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="155.0 MiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="132.0 MiB" time=2025-10-13T17:46:03.016+08:00 level=INFO source=device.go:238 msg="total memory" size="5.5 GiB" 
time=2025-10-13T17:46:03.016+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1 time=2025-10-13T17:46:03.016+08:00 level=INFO source=server.go:1271 msg="waiting for llama runner to start responding" time=2025-10-13T17:46:03.017+08:00 level=INFO source=server.go:1305 msg="waiting for server to become available" status="llm server loading model" time=2025-10-13T17:46:04.521+08:00 level=INFO source=server.go:1309 msg="llama runner started in 6.03 seconds" [GIN] 2025/10/13 - 17:48:51 | 200 | 2m53s | 127.0.0.1 | POST "/api/chat" [GIN] 2025/10/13 - 17:51:14 | 200 | 0s | 127.0.0.1 | GET "/api/version" [GIN] 2025/10/13 - 17:51:14 | 200 | 8.579ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:51:41 | 200 | 9.0399ms | 127.0.0.1 | GET "/api/tags" [GIN] 2025/10/13 - 17:51:48 | 200 | 42.7984ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/10/13 - 17:51:48 | 200 | 39.6073ms | 127.0.0.1 | POST "/api/show" time=2025-10-13T17:51:48.781+08:00 level=INFO source=sched.go:544 msg="updated VRAM based on existing loaded models" gpu=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA total="8.0 GiB" available="0 B" time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1 time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1 time=2025-10-13T17:51:48.822+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32 time=2025-10-13T17:51:48.822+08:00 level=INFO source=server.go:216 msg="enabling flash attention" time=2025-10-13T17:51:48.831+08:00 level=INFO source=server.go:400 msg="starting runner" cmd="C:\\Users\\wmh21\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\.ollama\\models\\blobs\\sha256-e6a7edc1a4d7d9b2de136a221a57336b76316cfe53a252aeba814496c5ae439d --port 55097" time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=37 requested=-1 
time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1 time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1 time=2025-10-13T17:51:48.846+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32 time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="18.4 GiB" free_swap="1.7 GiB" time=2025-10-13T17:51:48.846+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="0 B" free="0 B" minimum="457.0 MiB" overhead="0 B" time=2025-10-13T17:51:48.899+08:00 level=INFO source=runner.go:1316 msg="starting ollama engine" time=2025-10-13T17:51:48.912+08:00 level=INFO source=runner.go:1352 msg="Server listening on 127.0.0.1:55097" time=2025-10-13T17:51:48.922+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:37[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:37(0..36)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:51:48.942+08:00 level=INFO source=ggml.go:133 msg="" architecture=qwen3 file_type=Q4_K_M name="DeepSeek R1 0528 Qwen3 8B" description="" num_tensors=399 num_key_values=33 load_backend: loaded CPU backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-alderlake.dll ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4060 Laptop GPU, compute capability 8.9, VMM: yes, ID: GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 load_backend: loaded CUDA backend from C:\Users\wmh21\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll time=2025-10-13T17:51:49.063+08:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 
CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang) time=2025-10-13T17:51:49.405+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:close LoraPath:[] Parallel:0 BatchSize:0 FlashAttention:false KvSize:0 KvCacheType: NumThreads:0 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="4.5 GiB" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="333.8 MiB" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="2.2 GiB" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="126.0 MiB" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="8.0 MiB" time=2025-10-13T17:51:49.405+08:00 level=INFO source=device.go:238 msg="total memory" size="7.2 GiB" time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:675 msg="loading model" "model layers"=37 requested=-1 time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:139 msg=packages count=1 time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:155 msg="efficiency cores detected" maxEfficiencyClass=1 time=2025-10-13T17:51:50.207+08:00 level=INFO source=cpu_windows.go:186 msg="" package=0 cores=24 efficiency=16 threads=32 time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:681 msg="system memory" total="31.7 GiB" free="19.7 GiB" free_swap="7.3 GiB" time=2025-10-13T17:51:50.207+08:00 level=INFO source=server.go:689 msg="gpu memory" id=GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 library=CUDA available="6.5 GiB" free="6.9 GiB" minimum="457.0 MiB" overhead="0 B" 
time=2025-10-13T17:51:50.209+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:51:50.237+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:51:52.142+08:00 level=INFO source=runner.go:1189 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:true KvSize:16384 KvCacheType: NumThreads:8 GPULayers:36[ID:GPU-14b161fd-5142-a0b8-22c0-13cca7537e94 Layers:36(0..35)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}" time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:477 msg="offloading 36 repeating layers to GPU" time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:481 msg="offloading output layer to CPU" time=2025-10-13T17:51:52.142+08:00 level=INFO source=ggml.go:488 msg="offloaded 36/37 layers to GPU" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:206 msg="model weights" device=CUDA0 size="4.1 GiB" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:211 msg="model weights" device=CPU size="820.7 MiB" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:217 msg="kv cache" device=CUDA0 size="2.2 GiB" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:228 msg="compute graph" device=CUDA0 size="126.0 MiB" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:233 msg="compute graph" device=CPU size="8.0 MiB" time=2025-10-13T17:51:52.142+08:00 level=INFO source=device.go:238 msg="total memory" size="7.2 GiB" 
time=2025-10-13T17:51:52.142+08:00 level=INFO source=sched.go:481 msg="loaded runners" count=1 time=2025-10-13T17:51:52.142+08:00 level=INFO source=server.go:1271 msg="waiting for llama runner to start responding" time=2025-10-13T17:51:52.143+08:00 level=INFO source=server.go:1305 msg="waiting for server to become available" status="llm server loading model" time=2025-10-13T17:51:54.399+08:00 level=INFO source=server.go:1309 msg="llama runner started in 5.57 seconds" [GIN] 2025/10/13 - 17:57:32 | 200 | 5m44s | 127.0.0.1 | POST "/api/chat"

### OS

Windows

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.12.5
GiteaMirror added the bug label 2026-04-29 07:44:44 -05:00

@rick-github commented on GitHub (Oct 13, 2025):

I think the issue is triggered by the context buffer being too small. I ran some tests with deepseek-r1:7b and deepseek-r1:8b on versions 0.11.11 and 0.12.5. Given a large enough context (20480 tokens in my tests), both models generated coherent responses. 0.12.5 generated tokens faster than 0.11.11 for both models, with deepseek-r1:8b slowing down as the number of tokens increased.

<img width="1101" height="521" alt="Image" src="https://github.com/user-attachments/assets/08aed73d-566e-4345-90df-3ec88fe1139f" />

deepseek-r1:8b generates far more tokens than deepseek-r1:7b, which I think is what causes the garbled or incoherent responses. deepseek-r1:8b is much more likely to try to write code to answer the question "How are multi-page wiki documents implemented?", so it frequently exceeds the default context buffer size of 4096 tokens, while deepseek-r1:7b is much more likely to generate a short response that always fits within the 4096-token limit.

<img width="1109" height="522" alt="Image" src="https://github.com/user-attachments/assets/2200b442-d84f-48b0-85ac-3e7e3ca66e0b" />

Exceeding the context buffer length causes the buffer to be shifted to make room for more tokens. For some models this shift makes the model lose coherence, and it ends up generating nonsense. This is a real problem, but there are workarounds to limit the behaviour.

The simplest is to increase the context buffer so that it can hold all input and output tokens. Note that when the model is used through a UI, the UI usually keeps chat history that is sent along with new prompts, so there needs to be enough context space for the chat history, the new prompt, and the expected output tokens.
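As a concrete sketch (model name and sizes here are only illustrative): the default context length can be raised server-wide with the `OLLAMA_CONTEXT_LENGTH` environment variable (the server log above shows `OLLAMA_CONTEXT_LENGTH:16384`), or per request with the `num_ctx` option:

```shell
# Server-wide default: set before starting the server
# (POSIX shell shown; on Windows use setx or the app settings).
export OLLAMA_CONTEXT_LENGTH=16384

# Per-request override through the API:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "How are multi-page wiki documents implemented?",
  "options": { "num_ctx": 16384 }
}'
```

With the CLI, `/set parameter num_ctx 16384` inside `ollama run` has the same per-session effect.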

The model can be prevented from running off the end of the context buffer by setting `num_predict`, which stops token generation once that many tokens have been produced. For example, if the context buffer is 4096 tokens and the prompt is 500 tokens, setting `num_predict` to 3500 will prevent a buffer shift.
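The arithmetic behind that example is just budget accounting: generated tokens plus prompt tokens must stay at or under `num_ctx`. A minimal sketch (the 96-token headroom is an assumed safety margin, not anything Ollama prescribes):

```python
# Pick num_predict so prompt + generated tokens never exceed num_ctx,
# which is the condition that triggers a context-buffer shift.
num_ctx = 4096          # context buffer size
prompt_tokens = 500     # tokens already consumed by the prompt
headroom = 96           # assumed safety margin (not an Ollama setting)
num_predict = num_ctx - prompt_tokens - headroom
print(num_predict)      # 3500
```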

The shifting behaviour may be changing in 0.12.6 with https://github.com/ollama/ollama/pull/12582. This introduces new API parameters `truncate` and `shift`. If `shift` is `false`, shifting is prevented and generation stops when the context buffer limit is reached, similar to how `num_predict` works but without having to set a specific value. This requires support from the client to implement.
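If those parameters land as described, a client opting out of shifting might send something like the following. This is a speculative sketch: the field names come from the PR discussion, but their exact placement and semantics in the released API are assumptions to verify against the 0.12.6 docs.

```python
# Speculative sketch of an /api/chat request opting out of context
# shifting, per the "shift"/"truncate" parameters proposed in PR #12582.
import json

payload = {
    "model": "deepseek-r1:8b",
    "messages": [{"role": "user", "content": "hello"}],
    "shift": False,     # stop at the context limit instead of shifting
    "truncate": False,  # assumed: fail rather than silently drop tokens
}
body = json.dumps(payload)
```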


@EFonteyne commented on GitHub (Oct 17, 2025):

Maybe try this, as referenced here: https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install

"If you are upgrading from a prior version, you MUST remove the old libraries with `sudo rm -rf /usr/lib/ollama` first."

In my case it fixed my issue, where the deepseek-r1:8b model's output was chaotic.

Reference: github-starred/ollama#54871