[GH-ISSUE #15895] LLM misses out context whenever a web search happens amidst a thinking stream #72188

Open
opened 2026-05-05 03:36:32 -05:00 by GiteaMirror · 1 comment

Originally created by @RishiNandha on GitHub (Apr 30, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15895

What is the issue?

So I prompted Gemma 4 to find something on the internet. It went through a long thinking process and then hit the internet, but once it did, it forgot everything, including the chat context and my prompt, and just gave me an empty reply. Here is a screenshot:

Image

I tried reproducing the error. Here is another:

Image

I expanded the tool results and thinking blocks. The first thinking block and first tool result are okay. Once the tool result arrives, though, it breaks:

Image
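For anyone trying to reproduce this outside the desktop app, the failing multi-turn tool flow can be sketched against Ollama's `/api/chat` endpoint. This is a minimal sketch, not the exact setup from the screenshots: the model tag (`gemma4`), the `web_search` tool call, and the search result payload are placeholders, and the `thinking`/`tool_calls` message fields are my reading of the chat API, so treat them as assumptions.

```python
import json
import urllib.request

def build_followup_messages(user_prompt, thinking, tool_call, tool_result):
    """Rebuild the conversation for the second /api/chat round trip.

    The assistant turn (with its thinking and tool call) is sent back
    alongside the tool result; the bug report suggests the model loses
    this earlier context once the tool result arrives.
    """
    return [
        {"role": "user", "content": user_prompt},
        {
            "role": "assistant",
            "content": "",
            "thinking": thinking,          # assumed field name
            "tool_calls": [tool_call],     # assumed field name
        },
        {"role": "tool", "content": json.dumps(tool_result)},
    ]

def post_chat(messages, host="http://127.0.0.1:11434"):
    """POST the rebuilt conversation to a running Ollama server."""
    body = json.dumps({
        "model": "gemma4",   # placeholder model tag from the logs
        "messages": messages,
        "think": True,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        host + "/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```

Calling `post_chat(build_followup_messages(...))` against a local server and checking whether `message.content` comes back empty would confirm whether the context loss happens server-side or in the desktop client.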

Relevant log output

time=2026-04-30T16:04:16.888+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\rishi\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 47485"
time=2026-04-30T16:04:18.203+05:30 level=INFO source=cpu_windows.go:148 msg=packages count=1
time=2026-04-30T16:04:18.203+05:30 level=INFO source=cpu_windows.go:195 msg="" package=0 cores=4 efficiency=0 threads=8
time=2026-04-30T16:04:18.367+05:30 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-30T16:04:18.375+05:30 level=INFO source=server.go:259 msg="enabling flash attention"
time=2026-04-30T16:04:18.375+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\rishi\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --model E:\\Ollama\\blobs\\sha256-4c27e0f5b5adf02ac956c7322bd2ee7636fe3f45a8512c9aba5385242cb6e09a --port 47542"
time=2026-04-30T16:04:18.382+05:30 level=INFO source=sched.go:484 msg="system memory" total="15.7 GiB" free="5.6 GiB" free_swap="9.5 GiB"
time=2026-04-30T16:04:18.383+05:30 level=INFO source=sched.go:491 msg="gpu memory" id=GPU-b2081963-c53c-9f16-6da5-a3002f859161 library=CUDA available="3.2 GiB" free="3.6 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-04-30T16:04:18.383+05:30 level=INFO source=server.go:771 msg="loading model" "model layers"=43 requested=-1
time=2026-04-30T16:04:18.516+05:30 level=INFO source=runner.go:1417 msg="starting ollama engine"
time=2026-04-30T16:04:18.518+05:30 level=INFO source=runner.go:1452 msg="Server listening on 127.0.0.1:47542"
time=2026-04-30T16:04:18.527+05:30 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:4 GPULayers:43[ID:GPU-b2081963-c53c-9f16-6da5-a3002f859161 Layers:43(0..42)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-30T16:04:18.589+05:30 level=INFO source=ggml.go:136 msg="" architecture=gemma4 file_type=Q4_K_M name="" description="" num_tensors=2131 num_key_values=55
load_backend: loaded CPU backend from C:\Users\rishi\AppData\Local\Programs\Ollama\lib\ollama\ggml-cpu-icelake.dll
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA T500, compute capability 7.5, VMM: yes, ID: GPU-b2081963-c53c-9f16-6da5-a3002f859161
load_backend: loaded CUDA backend from C:\Users\rishi\AppData\Local\Programs\Ollama\lib\ollama\cuda_v13\ggml-cuda.dll
time=2026-04-30T16:04:18.682+05:30 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=750,800,860,870,890,900,1000,1030,1100,1200,1210 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2026-04-30T16:04:18.700+05:30 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-30T16:04:18.723+05:30 level=INFO source=model.go:138 msg="vision: decode" elapsed=3.1335ms bounds=(0,0)-(2048,2048)
time=2026-04-30T16:04:18.844+05:30 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=120.6111ms size="[768 768]"
time=2026-04-30T16:04:18.844+05:30 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-30T16:04:18.844+05:30 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-30T16:04:18.849+05:30 level=INFO source=model.go:156 msg="vision: encoded" elapsed=128.7587ms shape="[2560 256]"
time=2026-04-30T16:04:19.547+05:30 level=INFO source=runner.go:1290 msg=load request="{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:4 GPULayers:41[ID:GPU-b2081963-c53c-9f16-6da5-a3002f859161 Layers:41(1..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-30T16:04:19.617+05:30 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-30T16:04:19.635+05:30 level=INFO source=model.go:138 msg="vision: decode" elapsed=781.4µs bounds=(0,0)-(2048,2048)
time=2026-04-30T16:04:19.759+05:30 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=123.4924ms size="[768 768]"
time=2026-04-30T16:04:19.759+05:30 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-30T16:04:19.759+05:30 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-30T16:04:19.764+05:30 level=INFO source=model.go:156 msg="vision: encoded" elapsed=129.2499ms shape="[2560 256]"
time=2026-04-30T16:04:19.804+05:30 level=INFO source=runner.go:1290 msg=load request="{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:4 GPULayers:41[ID:GPU-b2081963-c53c-9f16-6da5-a3002f859161 Layers:41(1..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-30T16:04:20.177+05:30 level=INFO source=model.go:97 msg="gemma4: token IDs" image=255999 image_end=258882 audio=256000 audio_end=258883
time=2026-04-30T16:04:20.196+05:30 level=INFO source=model.go:138 msg="vision: decode" elapsed=1.0864ms bounds=(0,0)-(2048,2048)
time=2026-04-30T16:04:20.330+05:30 level=INFO source=model.go:145 msg="vision: preprocess" elapsed=133.2876ms size="[768 768]"
time=2026-04-30T16:04:20.333+05:30 level=INFO source=model.go:148 msg="vision: pixelValues" shape="[768 768 3]" dim0=768 dim1=768 dim2=3
time=2026-04-30T16:04:20.333+05:30 level=INFO source=model.go:152 msg="vision: patches" patchesX=48 patchesY=48 total=2304 patchSize=16
time=2026-04-30T16:04:20.334+05:30 level=INFO source=model.go:156 msg="vision: encoded" elapsed=139.1373ms shape="[2560 256]"
time=2026-04-30T16:04:20.517+05:30 level=INFO source=runner.go:1290 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:8192 KvCacheType: NumThreads:4 GPULayers:41[ID:GPU-b2081963-c53c-9f16-6da5-a3002f859161 Layers:41(1..41)] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:240 msg="model weights" device=CUDA0 size="2.7 GiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:245 msg="model weights" device=CPU size="6.7 GiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:251 msg="kv cache" device=CUDA0 size="299.0 MiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:256 msg="kv cache" device=CPU size="9.0 MiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:262 msg="compute graph" device=CUDA0 size="207.8 MiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:267 msg="compute graph" device=CPU size="21.0 MiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=device.go:272 msg="total memory" size="10.0 GiB"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=sched.go:561 msg="loaded runners" count=1
time=2026-04-30T16:04:20.517+05:30 level=INFO source=ggml.go:482 msg="offloading 41 repeating layers to GPU"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=ggml.go:486 msg="offloading output layer to CPU"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=ggml.go:494 msg="offloaded 41/43 layers to GPU"
time=2026-04-30T16:04:20.518+05:30 level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-30T16:04:20.519+05:30 level=INFO source=server.go:1398 msg="waiting for server to become available" status="llm server loading model"
time=2026-04-30T16:04:28.289+05:30 level=INFO source=server.go:1402 msg="llama runner started in 9.91 seconds"
[GIN] 2026/04/30 - 16:04:44 | 200 |      2.0005ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:05:14 | 200 |      2.4192ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:05:44 | 200 |      2.6342ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:06:14 | 200 |      1.6184ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:06:44 | 200 |      3.2794ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:07:09 | 200 |         2m52s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/04/30 - 16:07:11 | 200 |    262.2747ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/30 - 16:07:12 | 200 |    255.1287ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/04/30 - 16:07:14 | 200 |      2.6804ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:07:44 | 200 |      6.6318ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:08:08 | 200 |            0s |       127.0.0.1 | GET      "/api/status"
[GIN] 2026/04/30 - 16:08:08 | 200 |   56.3985655s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/04/30 - 16:08:14 | 200 |      2.5308ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:08:23 | 200 |   14.2939611s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/04/30 - 16:08:44 | 200 |      1.7789ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:09:14 | 200 |       1.568ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:09:44 | 200 |      1.6832ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:10:00 | 200 |      1.6013ms |       127.0.0.1 | GET      "/api/version"
[GIN] 2026/04/30 - 16:10:14 | 200 |      1.0411ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:10:44 | 200 |      5.9402ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:11:14 | 200 |      1.6301ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:11:44 | 200 |      3.1793ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:12:14 | 200 |       2.885ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:12:44 | 200 |      1.6544ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:13:14 | 200 |      1.0451ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/04/30 - 16:13:16 | 200 |    519.2165ms |       127.0.0.1 | POST     "/api/me"
[GIN] 2026/04/30 - 16:13:16 | 200 |    541.7997ms |       127.0.0.1 | POST     "/api/me"
ggml_backend_cuda_device_get_memory device GPU-b2081963-c53c-9f16-6da5-a3002f859161 utilizing NVML memory reporting free: 384000000 total: 4294967296
time=2026-04-30T16:13:24.346+05:30 level=INFO source=server.go:444 msg="starting runner" cmd="C:\\Users\\rishi\\AppData\\Local\\Programs\\Ollama\\ollama.exe runner --ollama-engine --port 51864"

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.22.0

GiteaMirror added the bug label 2026-05-05 03:36:32 -05:00

@RishiNandha commented on GitHub (May 1, 2026):

Update: This isn't an issue with Qwen3.

Image

So I think only Gemma 4 is affected.


Reference: github-starred/ollama#72188