[GH-ISSUE #14780] KV cache completely non-functional on CPU backend: every /api/chat request re-evaluates all tokens from scratch #35311

Open
opened 2026-04-22 19:43:14 -05:00 by GiteaMirror · 1 comment

Originally created by @zhener562 on GitHub (Mar 11, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14780


Title:
KV cache completely non-functional on CPU backend: every /api/chat request re-evaluates all tokens from scratch

Body:

Summary

With the current --ollama-engine (v0.17.1), /api/chat performs a full prompt re-evaluation on every turn when running on CPU. Identical requests sent consecutively take the
same time — the KV cache prefix-match is not being used at all.

This causes prompt evaluation time to grow linearly with conversation length, making multi-turn chat increasingly slow.

Environment

  • Ollama version: 0.17.1
  • OS: Ubuntu 24.04.2 LTS
  • CPU: AMD Ryzen 5 8600G (6 cores)
  • GPU: None (CPU-only mode, offloaded 0/33 layers to GPU)
  • Runner: --ollama-engine
  • Model: qwen3.5:9b

Reproduction

Send the same single-turn request 3 times:

import json, urllib.request

def chat(msgs):
    # POST a non-streaming chat request to the local Ollama server
    body = json.dumps({"model": "qwen3.5:9b", "messages": msgs, "stream": False,
                       "options": {"num_predict": 5}}).encode()
    req = urllib.request.Request("http://localhost:11434/api/chat",
                                 data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())

msgs = [{"role": "user", "content": "What is 2+2?"}]
for i in range(3):
    r = chat(msgs)
    # prompt_eval_duration is reported in nanoseconds; /1e6 converts to milliseconds
    print(f"Request {i+1}: pe_count={r['prompt_eval_count']}, "
          f"pe_ms={r['prompt_eval_duration']/1e6:.1f}ms")

Expected: 2nd and 3rd requests are nearly instant (cache hit).

Actual:

Request 1: pe_count=17, pe_ms=583.7ms                                                                                                                                                 
Request 2: pe_count=17, pe_ms=627.5ms  ← no speedup                                                                                                                                   
Request 3: pe_count=17, pe_ms=583.4ms  ← no speedup                                                                                                                                   

Multi-turn evidence: pe_count is cumulative total, not new tokens

Turn  pe_count  pe_ms   ratio_vs_turn1                    
   1        11   423ms   1.00x                                                                                                                                                        
   2        22   703ms   1.66x                                                                                                                                                        
   3        33   987ms   2.33x                                                                                                                                                        
   4        44  1256ms   2.97x                                                                                                                                                        
   5        55  1542ms   3.64x                                                                                                                                                        
  • pe_count grows as the cumulative total of all tokens — all tokens are re-evaluated every turn.
  • pe_ms / pe_count stays constant (~27ms/token), confirming the full prompt is processed each time.
  • If cache were working, pe_count would reflect only the newly added tokens (~10 per turn); a measurement sketch follows below.
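
A loop along the following lines reproduces the multi-turn pattern (a minimal sketch that reuses the chat() helper from the reproduction script; the follow-up message is illustrative, since the exact turns behind the table are not shown):

msgs = [{"role": "user", "content": "What is 2+2?"}]
for turn in range(1, 6):
    r = chat(msgs)
    pe = r["prompt_eval_count"]
    ms = r["prompt_eval_duration"] / 1e6
    print(f"Turn {turn}: pe_count={pe}, pe_ms={ms:.0f}ms, ms_per_token={ms/pe:.1f}")
    # Grow the conversation: append the assistant reply plus a short new user turn,
    # so each request only adds a handful of tokens on top of the shared prefix.
    msgs.append(r["message"])
    msgs.append({"role": "user", "content": "And plus 3?"})

With a working prefix cache, either pe_count would drop to only the new tokens or ms_per_token would fall over turns (as in the GPU comparison below); here neither happens.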

Contrast: /api/generate with context reuse works correctly

# Pseudocode: pass the returned context tokens back with each new turn
r = post("/api/generate", {"model": ..., "prompt": ..., "context": ctx})
ctx = r["context"]

Turn 1→2: pe_ms 539ms → 598ms  (+59ms for 19 new tokens = cache HIT ✓)
Turn 2→3: pe_ms 598ms → 1076ms (+478ms for 19 new tokens ≈ 25ms/tok ✓)

The /api/generate + context path correctly reuses the KV cache. Only /api/chat is broken on CPU.
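
For reference, a runnable version of that context-reuse loop (a minimal sketch against the same local server and model; the prompts are placeholders):

import json, urllib.request

def generate(prompt, ctx=None):
    # POST to /api/generate, passing the previous turn's context tokens back in
    payload = {"model": "qwen3.5:9b", "prompt": prompt, "stream": False,
               "options": {"num_predict": 5}}
    if ctx is not None:
        payload["context"] = ctx
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())

ctx = None
for i, prompt in enumerate(["What is 2+2?", "And times 3?", "Minus 1?"]):
    r = generate(prompt, ctx)
    ctx = r["context"]  # returned context tokens feed the next turn's request
    print(f"Turn {i+1}: pe_count={r['prompt_eval_count']}, "
          f"pe_ms={r['prompt_eval_duration']/1e6:.1f}ms")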

GPU comparison (ROCm backend — cache works)

After enabling ROCm (HSA_OVERRIDE_GFX_VERSION=11.0.0):

Turn  pe_count  ms/token                                  
   1        17    19.2   ← baseline                                                                                                                                                   
   2        42     8.2   ← DECREASING = cache hit, only new tokens evaluated                                                                                                          
   3        67     7.2                                                                                                                                                                
  10       252     5.2   ← still decreasing                                                                                                                                           

ms/token decreasing over turns proves the GPU backend correctly uses prefix-match KV cache.

Conclusion

Backend                           KV cache working?
CPU (--ollama-engine)             No — full re-eval every request
GPU/ROCm (--ollama-engine)        Yes — prefix-match working
/api/generate + context (CPU)     Yes — explicit context reuse works

The CPU backend of the new engine appears not to maintain the KV cache between /api/chat requests. The prefix-match logic in cache.go (findLongestCacheSlot / countCommonPrefix) does not appear to be triggered, or the cache is cleared between requests.
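
For readers less familiar with the mechanism, prefix-match slot selection amounts to something like the sketch below. This is a simplified Python illustration only, not the Go code in cache.go, and the slot structure is invented for the example:

def count_common_prefix(cached_tokens, new_tokens):
    # Length of the shared leading token sequence between a cached slot and the new prompt
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

def find_longest_cache_slot(slots, new_tokens):
    # Pick the slot whose cached tokens share the longest prefix with the new prompt;
    # only the tokens past that prefix should need re-evaluation.
    best_slot, best_len = None, 0
    for slot in slots:
        n = count_common_prefix(slot["tokens"], new_tokens)
        if n > best_len:
            best_slot, best_len = slot, n
    return best_slot, best_len

The behaviour reported above is consistent with this lookup effectively always returning a zero-length prefix on the CPU path (or with the slots being cleared between /api/chat requests), so every token is evaluated again.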

Related issues

  • #5303 — random full re-evaluation (different trigger, same symptom)
  • #12504 — new engine prompt eval much slower (closed as duplicate of #12037)

GiteaMirror added the bug label 2026-04-22 19:43:14 -05:00

@Lightspace260 commented on GitHub (Mar 12, 2026):

/attempt #14780


Reference: github-starred/ollama#35311