[GH-ISSUE #7949] panic: failed to decode batch: could not find a kv cache slot goroutine 22 [running]: main.(*Server).run(0xc0000c2120, {0x556536b63ba0, 0xc00008a0a0}) github.com/ollama/ollama/llama/runner/runner.go:344 +0x23e created by main.main in… #30849

Closed
opened 2026-04-22 10:47:46 -05:00 by GiteaMirror · 9 comments

Originally created by @watch-Ultra on GitHub (Dec 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7949


@aliuspetraska commented on GitHub (Dec 5, 2024):

We see the same behavior with the last 3 versions of Ollama:

```
Nov 27 14:55:15 ollama[40647]: [GIN] 2024/11/27 - 14:55:15 | 200 | 2.621066104s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:16 ollama[40647]: [GIN] 2024/11/27 - 14:55:16 | 200 | 4.383705565s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:18 ollama[40647]: [GIN] 2024/11/27 - 14:55:18 | 200 | 3.540287173s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:21 ollama[40647]: [GIN] 2024/11/27 - 14:55:21 | 200 | 5.519881786s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:23 ollama[40647]: [GIN] 2024/11/27 - 14:55:23 | 200 | 6.420199899s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:24 ollama[40647]: [GIN] 2024/11/27 - 14:55:24 | 200 | 5.876235203s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:28 ollama[40647]: [GIN] 2024/11/27 - 14:55:28 | 200 | 6.817514029s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:29 ollama[40647]: [GIN] 2024/11/27 - 14:55:29 | 200 | 6.614235832s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:34 ollama[40647]: [GIN] 2024/11/27 - 14:55:34 | 200 | 10.341194001s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:39 ollama[40647]: [GIN] 2024/11/27 - 14:55:39 | 200 | 11.685299016s | 127.0.0.1 | POST "/api/generate"
Nov 27 14:55:40 ollama[40647]: panic: failed to decode batch: could not find a kv cache slot
Nov 27 14:55:40 ollama[40647]: goroutine 7 [running]:
Nov 27 14:55:40 ollama[40647]: main.(*Server).run(0xc0000c2120, {0x55c4bfd27d60, 0xc0000980a0})
Nov 27 14:55:40 ollama[40647]: github.com/ollama/ollama/llama/runner/runner.go:336 +0x23e
Nov 27 14:55:40 ollama[40647]: created by main.main in goroutine 1
Nov 27 14:55:40 ollama[40647]: github.com/ollama/ollama/llama/runner/runner.go:955 +0xc52
```
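The log above shows a rapid burst of overlapping `/api/generate` requests in the seconds before the panic. A minimal load sketch along those lines (an illustration only, not a confirmed reproducer; the model name and prompts are placeholders) might look like:

```shell
# Hedged sketch: replay a burst of concurrent /api/generate calls against a
# local Ollama server, mirroring the request pattern in the log above.
# "llama3.1" is a placeholder; use whichever model the affected server has loaded.
MODEL="llama3.1"
for i in $(seq 1 16); do
  curl -s http://127.0.0.1:11434/api/generate \
    -d "{\"model\":\"$MODEL\",\"prompt\":\"Summarize request $i in detail.\",\"stream\":false}" \
    > /dev/null &
done
wait
# If the runner hits the slot-allocation failure, the journal (journalctl -u ollama)
# should end with the same "panic: failed to decode batch: could not find a kv cache slot".
```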

@jessegross commented on GitHub (Dec 6, 2024):

@aliuspetraska It sounds like this is reproducible for you. Can you give the steps to trigger it?


@StefanDimitrov95 commented on GitHub (Dec 9, 2024):

For me this happens after running a long series of requests; with larger models the issue occurs sooner.
Ollama 0.5.1, with the Qwen 2.5 Coder models and Llama 3.1.

```
Dec 09 11:47:29 ollama[5600]: [GIN] 2024/12/09 - 11:47:29 | 200 | 44.198688317s |   127.0.0.1 | POST     "/v1/chat/completions"
Dec 09 11:47:30 ollama[5600]: panic: failed to decode batch: could not find a kv cache slot
Dec 09 11:47:30 ollama[5600]: goroutine 22 [running]:
Dec 09 11:47:30 ollama[5600]: main.(*Server).run(0xc0000cc120, {0x563bb381a9a0, 0xc0000960a0})
Dec 09 11:47:30 ollama[5600]:         github.com/ollama/ollama/llama/runner/runner.go:344 +0x23e
Dec 09 11:47:30 ollama[5600]: created by main.main in goroutine 1
Dec 09 11:47:30 ollama[5600]:         github.com/ollama/ollama/llama/runner/runner.go:980 +0xd3e
Dec 09 11:47:30 ollama[5600]: [GIN] 2024/12/09 - 11:47:30 | 500 | 44.931164011s |   127.0.0.1 | POST     "/v1/chat/completions"
```
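As a rough illustration of that kind of workload (a sketch only, not taken from the reporter's setup; the model name and prompt length are assumptions), a sustained series of requests against the OpenAI-compatible endpoint could look like:

```shell
# Hedged sketch: issue a long sequential series of chat completions against
# Ollama's OpenAI-compatible endpoint, matching the request pattern described above.
MODEL="qwen2.5-coder"   # placeholder; any locally loaded model
PROMPT=$(printf 'Explain this code in depth. %.0s' $(seq 1 200))  # deliberately long prompt
for i in $(seq 1 100); do
  curl -s http://127.0.0.1:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "{\"model\":\"$MODEL\",\"messages\":[{\"role\":\"user\",\"content\":\"$PROMPT ($i)\"}]}" \
    > /dev/null
done
```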

@jessegross commented on GitHub (Dec 10, 2024):

Can you set the environment variable OLLAMA_DEBUG=1 and then report the full log after the issue occurs?
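For a systemd-managed install like the ones in the journal logs above, turning on debug logging might look like the following sketch (based on the standard Linux service setup; adjust if Ollama runs under Docker or another supervisor):

```shell
# Hedged sketch: enable debug logging for a systemd-managed Ollama service.
sudo systemctl edit ollama.service
# In the override that opens, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"
sudo systemctl restart ollama
# Then capture the full log once the panic reoccurs:
journalctl -u ollama --no-pager > ollama-debug.log
```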


@StefanDimitrov95 commented on GitHub (Dec 10, 2024):

```shell
Dec 10 07:58:44 ollama[4417]: time=2024-12-10T07:58:44.537Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=3 cache=1634 prompt=900 used=9 remaining=891
Dec 10 07:58:45 ollama[4417]: [GIN] 2024/12/10 - 07:58:45 | 200 | 13.969241081s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:45 ollama[4417]: time=2024-12-10T07:58:45.328Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:45 ollama[4417]: time=2024-12-10T07:58:45.328Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=16
Dec 10 07:58:45 ollama[4417]: time=2024-12-10T07:58:45.380Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=1 cache=943 prompt=1685 used=9 remaining=1676
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.102Z level=DEBUG source=runner.go:437 msg="defragmenting kv cache"
Dec 10 07:58:46 ollama[4417]: panic: failed to decode batch: could not find a kv cache slot
Dec 10 07:58:46 ollama[4417]: goroutine 7 [running]:
Dec 10 07:58:46 ollama[4417]: main.(*Server).run(0xc000122120, {0x562bd88649a0, 0xc0000780a0})
Dec 10 07:58:46 ollama[4417]:         github.com/ollama/ollama/llama/runner/runner.go:344 +0x23e
Dec 10 07:58:46 ollama[4417]: created by main.main in goroutine 1
Dec 10 07:58:46 ollama[4417]:         github.com/ollama/ollama/llama/runner/runner.go:980 +0xd3e
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.828268303s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.180Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.180Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=15
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.814482243s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.199Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.199Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=14
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.984322881s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=13
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.930340877s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=12
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 |  14.81576578s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.929918222s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=11
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.988404425s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=10
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.778233985s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=9
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.200Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.780533845s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=8
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.964448819s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=7
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.760055765s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=6
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.965603224s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=5
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 14.769107184s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=4
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.201Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=3
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1099 msg="stopping llama server"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1105 msg="waiting for llama server to exit"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1099 msg="stopping llama server"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1105 msg="waiting for llama server to exit"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1099 msg="stopping llama server"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.229Z level=DEBUG source=server.go:1105 msg="waiting for llama server to exit"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.250Z level=DEBUG source=server.go:1109 msg="llama server stopped"
Dec 10 07:58:46 ollama[4417]: [GIN] 2024/12/10 - 07:58:46 | 500 | 15.193519735s | 127.0.0.1 | POST     "/v1/chat/completions"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.251Z level=DEBUG source=sched.go:407 msg="context for request finished"
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.251Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=2
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.799Z level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.799Z level=DEBUG source=server.go:565 msg="server unhealthy" error="llama runner process no longer running: 2 "
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.799Z level=DEBUG source=sched.go:283 msg="resetting model to expire immediately to make room" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 refCount=2
Dec 10 07:58:46 ollama[4417]: time=2024-12-10T07:58:46.799Z level=DEBUG source=sched.go:296 msg="waiting for pending requests to complete and unload to occur" modelPath=/usr/share/ollama/.ollama/models/blobs/sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54
```

@thibautrey commented on GitHub (Dec 12, 2024):

I've got the exact same issue using llama3.3:70b:

```
time=2024-12-12T16:07:49.376Z level=WARN source=runner.go:129 msg="truncating input prompt" limit=2048 prompt=3819 keep=5 new=2048
time=2024-12-12T16:07:49.521Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=1212 prompt=2048 used=5 remaining=2043
time=2024-12-12T16:07:53.293Z level=DEBUG source=runner.go:437 msg="defragmenting kv cache"
panic: failed to decode batch: could not find a kv cache slot

goroutine 7 [running]:
main.(*Server).run(0xc0000ec120, {0x55b0d4a059a0, 0xc0000be0a0})
github.com/ollama/ollama/llama/runner/runner.go:344 +0x23e
created by main.main in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:980 +0xd3e
time=2024-12-12T16:07:53.403Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2024-12-12T16:07:53.403Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-4824460d29f2058aaf6e1118a63a7a197a09bed509f0e7d4e2efb1ee273b447d refCount=2
[GIN] 2024/12/12 - 16:07:53 | 500 | 4.090769271s | 172.17.0.1 | POST "/api/chat"
time=2024-12-12T16:07:53.470Z level=DEBUG source=server.go:1099 msg="stopping llama server"
time=2024-12-12T16:07:53.470Z level=DEBUG source=server.go:1105 msg="waiting for llama server to exit"
time=2024-12-12T16:07:53.470Z level=DEBUG source=server.go:1099 msg="stopping llama server"
time=2024-12-12T16:07:53.470Z level=DEBUG source=server.go:1105 msg="waiting for llama server to exit"
time=2024-12-12T16:07:53.487Z level=DEBUG source=server.go:1109 msg="llama server stopped"
```
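The `truncating input prompt limit=2048 prompt=3819` line indicates the runner is using the default 2048-token context and the prompt exceeds it. For reference only, and not presented as a fix for the panic, the per-request context size can be raised with the `num_ctx` option, roughly like this:

```shell
# Hedged sketch: request a larger context window (num_ctx) for a single /api/chat call.
# 8192 is an arbitrary example value; larger contexts use more KV-cache memory.
curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.3:70b",
  "messages": [{"role": "user", "content": "..."}],
  "options": {"num_ctx": 8192}
}'
```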


@jessegross commented on GitHub (Dec 12, 2024):

Can you please post the full log?


@MishaAnikutin commented on GitHub (Dec 16, 2024):

I faced the same problem. I downgraded Ollama to version 0.3.14 and everything worked.
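For anyone wanting to try the same downgrade, pinning a specific release with the official install script is sketched below (it relies on the install script's `OLLAMA_VERSION` variable; verify against the current install docs before using):

```shell
# Hedged sketch: install a specific Ollama release on Linux using the official
# install script's OLLAMA_VERSION variable, then confirm the running version.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.3.14 sh
ollama --version
```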


@jersam commented on GitHub (Jan 4, 2025):

@MishaAnikutin Any idea why? I am having the same issue, and everything works on 0.3.14.

I am using Paperless GPT

Reference: github-starred/ollama#30849