[GH-ISSUE #10555] Ollama 0.6.7 Illegal Memory Access #6946

Closed
opened 2026-04-12 18:50:16 -05:00 by GiteaMirror · 21 comments
Owner

Originally created by @Jumpkan on GitHub (May 4, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10555

What is the issue?

I'm running a processing workload on qwen3:14b-q8_0.
I'm running on 4x NVIDIA L4 GPUs, with torch 2.7.0+cu128.
The model loads fine and works for a while, but then a 'CUDA error: an illegal memory access was encountered' occurs.
Extended logs including model initialisation: https://gist.github.com/Jumpkan/58297c1af00c4cdd6ce7d72dbeb1f5c8
I redacted the prompt, apologies if that's an issue.
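
A rough sketch of the shape of the workload (not my exact prompts; the dictionary file is just a convenient oversized prompt source, and a default install on localhost:11434 is assumed): several concurrent chat requests whose prompts exceed the 16384-token context, so the runner truncates the input and later applies a K-shift.

# Hypothetical repro sketch: concurrent oversized prompts against /api/chat.
# jq -sR JSON-encodes the raw dictionary text as a single string.
for i in $(seq 1 8); do
  curl -s localhost:11434/api/chat -d '{
    "model": "qwen3:14b-q8_0",
    "messages": [{"role": "user", "content": '"$(head -8000 /usr/share/dict/words | jq -sR)"'}],
    "stream": false
  }' > /dev/null &
done
wait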

Relevant log output

time=2025-05-04T03:00:30.587Z level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-05-04T03:00:30.587Z level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b refCount=15
time=2025-05-04T03:00:30.605Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-04T03:00:30.606Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T03:00:30.606Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T03:00:30.652Z level=WARN source=runner.go:131 msg="truncating input prompt" limit=16384 prompt=34223 keep=4 new=16384
time=2025-05-04T03:00:30.776Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=1 cache=445 prompt=16384 used=4 remaining=16380
time=2025-05-04T03:00:52.388Z level=DEBUG source=cache.go:240 msg="context limit hit - shifting" id=1 limit=16384 input=16384 keep=4 discard=8190
kv_self_update: applying K-shift
CUDA error: an illegal memory access was encountered
  current device: 3, in function ggml_backend_cuda_synchronize at //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:2472
  cudaStreamSynchronize(cuda_ctx->stream())
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
SIGSEGV: segmentation violation
PC=0x7f667f8a1c97 m=11 sigcode=1 addr=0x21a203f8c
signal arrived during cgo execution

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.6.7

GiteaMirror added the bug label 2026-04-12 18:50:16 -05:00

@MassEast commented on GitHub (May 5, 2025):

Same here for me on ollama 0.6.8 with llama4:scout. Works with llava:7b.


@rick-github commented on GitHub (May 5, 2025):

What's in the bit that says `---- Works for awhile ----`?


@Jumpkan commented on GitHub (May 5, 2025):

Here's a sample. It basically repeats like this for about 120 requests:

time=2025-05-04T02:42:59.360Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.360Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.360Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.361Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.362Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.362Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.362Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.362Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.362Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.362Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.363Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.363Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.363Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.363Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.363Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.364Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.364Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.365Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.364Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.365Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.365Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.365Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.365Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.365Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.366Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.366Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.366Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.366Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=1768 used=0 remaining=1768
time=2025-05-04T02:42:59.367Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:42:59.367Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:42:59.367Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=1 cache=0 prompt=822 used=0 remaining=822
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=2 cache=0 prompt=1063 used=0 remaining=1063
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=3 cache=0 prompt=2746 used=0 remaining=2746
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=4 cache=0 prompt=2665 used=0 remaining=2665
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=5 cache=0 prompt=2086 used=0 remaining=2086
time=2025-05-04T02:43:00.024Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=6 cache=0 prompt=2264 used=0 remaining=2264
time=2025-05-04T02:43:00.025Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=7 cache=0 prompt=2522 used=0 remaining=2522
[GIN] 2025/05/04 - 02:43:45 | 200 |         2m55s |       127.0.0.1 | POST     "/api/chat"
time=2025-05-04T02:43:45.840Z level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-05-04T02:43:45.840Z level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b refCount=15
time=2025-05-04T02:43:45.856Z level=WARN source=ggml.go:152 msg="key not found" key=general.alignment default=32
time=2025-05-04T02:43:45.857Z level=DEBUG source=sched.go:578 msg="evaluating already loaded" model=/home/ec2-user/.ollama/models/blobs/sha256-6335adf2028978aee1cd610abcb7047e9b882ad2ebb8214ceee799fd3ddf423b
time=2025-05-04T02:43:45.857Z level=DEBUG source=routes.go:1525 msg="chat request" images=0 prompt=<REDACTED>
time=2025-05-04T02:43:46.057Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=1 cache=1171 prompt=2110 used=134 remaining=1976
[GIN] 2025/05/04 - 02:43:46 | 200 |         2m56s |       127.0.0.1 | POST     "/api/chat"

@rick-github commented on GitHub (May 5, 2025):

So the crash is the only time these lines show up?

time=2025-05-04T03:00:52.388Z level=DEBUG source=cache.go:240 msg="context limit hit - shifting" id=1 limit=16384 input=16384 keep=4 discard=8190
kv_self_update: applying K-shift

@Jumpkan commented on GitHub (May 5, 2025):

Yes, seems like it. But I have had a similar truncation message before with no issues:
time=2025-05-05T10:03:17.198Z level=WARN source=runner.go:131 msg="truncating input prompt" limit=16384 prompt=223621 keep=4 new=16384
I can confirm the model works fine when running with OLLAMA_NUM_PARALLEL=1 on a single GPU.
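
For reference, the workaround looks something like this (a sketch assuming a manually started server; CUDA_VISIBLE_DEVICES is the standard NVIDIA variable for pinning a GPU, and OLLAMA_NUM_PARALLEL caps the number of parallel request slots):

# Workaround sketch: one GPU, one parallel request slot
CUDA_VISIBLE_DEVICES=0 OLLAMA_NUM_PARALLEL=1 ollama serve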


@jessegross commented on GitHub (May 5, 2025):

@MassEast Can you please also post your log?


@Acters commented on GitHub (May 6, 2025):

server.log: https://github.com/user-attachments/files/20053139/server.log

Ollama version

0.6.8

I thought I was the only one to get this error:

CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_backend_cuda_synchronize at C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:2449
  cudaStreamSynchronize(cuda_ctx->stream())
C:\a\ollama\ollama\ml\backend\ggml\ggml\src\ggml-cuda\ggml-cuda.cu:75: CUDA error

It seems to happen when I am using the code interpreter from OpenWebUI. Otherwise it usually acts normally and loading is fine, but the OpenWebUI code interpreter seems to cause it to error out consistently. I don't know why.


@MassEast commented on GitHub (May 6, 2025):

I am happy to elaborate. So, again, I get the following errors with llama4:scout on ollama 0.6.8, both on a single GPU (H100 80GB) and on a multi-GPU setup (8x A100 40GB). Changing to, e.g., llava:7b works.

time=2025-05-05T09:51:22.576Z level=WARN source=ggml.go:152 msg="key not found" key=llama4.attention.floor_scale default=8192
time=2025-05-05T09:51:22.676Z level=INFO source=ggml.go:553 msg="compute graph" backend=CUDA0 buffer_type=CUDA0 size="692.0 MiB"
time=2025-05-05T09:51:22.676Z level=INFO source=ggml.go:553 msg="compute graph" backend=CPU buffer_type=CPU size="10.0 MiB"
time=2025-05-05T09:51:22.727Z level=INFO source=server.go:628 msg="llama runner started in 61.46 seconds"
CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145
  cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
SIGSEGV: segmentation violation
PC=0x7942ee6a1c97 m=90 sigcode=1 addr=0x206003e30
signal arrived during cgo execution

goroutine 109 gp=0xc0004c6c40 m=90 mp=0xc002f00008 [syscall]:
runtime.cgocall(0x5ff7c756cc70, 0xc0001a7af8)
	runtime/cgocall.go:167 +0x4b fp=0xc0001a7ad0 sp=0xc0001a7a98 pc=0x5ff7c670a44b
github.com/ollama/ollama/ml/backend/ggml._Cfunc_ggml_backend_sched_graph_compute_async(0x7932a800c270, 0x7931381d25c0)
	_cgo_gotypes.go:516 +0x4a fp=0xc0001a7af8 sp=0xc0001a7ad0 pc=0x5ff7c6b0a64a
github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute.func1(...)
	github.com/ollama/ollama/ml/backend/ggml/ggml.go:526
github.com/ollama/ollama/ml/backend/ggml.(*Context).Compute(0xc002431500, {0xc00263cef0, 0x1, 0x0?})
	github.com/ollama/ollama/ml/backend/ggml/ggml.go:526 +0x96 fp=0xc0001a7b88 sp=0xc0001a7af8 pc=0x5ff7c6b13b96
github.com/ollama/ollama/model.Forward({0x5ff7c7a34cf0, 0xc002431500}, {0x5ff7c7a2b7f0, 0xc00045e0e0}, {0xc00249e800, 0x15e, 0x200}, {{0x5ff7c7a3dae8, 0xc001f34f18}, {0x0, ...}, ...})
	github.com/ollama/ollama/model/model.go:313 +0x2b8 fp=0xc0001a7c70 sp=0xc0001a7b88 pc=0x5ff7c6b42158
github.com/ollama/ollama/runner/ollamarunner.

@rick-github commented on GitHub (May 6, 2025):

Some attempts at replicating:

@MassEast

  • Loading an image in llama4:scout with 0.6.8 fails. Works in 0.6.7.
$ ollama -v
ollama version is 0.6.8
$ ollama run llama4:scout describe this image: ./puppy.jpg 
Added image './puppy.jpg'
Error: POST predict: Post "http://127.0.0.1:35209/completion": EOF
$ ollama run llama4:scout describe this image: ./no-puppy.jpg 
I don't see an image. As a text-based model, I don't have the capability to access or view images. I can only respond based on text input. If you'd like to describe the image to me, I'd be 
happy to chat with you about it!
time=2025-05-06T14:15:14.094Z level=DEBUG source=sched.go:464 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-9d507a36062c2845dd3bb3e93364e9abc1607118acd8650727a700f72fb126e5
time=2025-05-06T14:15:14.095Z level=DEBUG source=routes.go:298 msg="generate request" images=1 prompt="<|header_start|>system<|header_end|>\n\nYou are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, \"it's unethical to\", \"it's worth noting…\", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.<|eot|><|header_start|>user<|header_end|>\n\n[img-0]\n\ndescribe this image:<|eot|><|header_start|>assistant<|header_end|>\n\n"
time=2025-05-06T14:15:14.240Z level=DEBUG source=cache.go:136 msg="loading cache slot" id=0 cache=0 prompt=1076 used=0 remaining=1076
CUDA error: an illegal memory access was encountered
  current device: 1, in function ggml_cuda_mul_mat_q at //ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu:145
  cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
//ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:75: CUDA error
SIGSEGV: segmentation violation
PC=0x7f5b3d5c2f77 m=17 sigcode=1 addr=0x206203f40
signal arrived during cgo execution
$ ollama -v
ollama version is 0.6.7
$ ollama run llama4:scout describe this image: ./puppy.jpg 
Added image './puppy.jpg'
The image depicts a small, fluffy white puppy sitting on a stone surface, likely a step or a bench. 

The puppy appears to be a young Samoyed or similar breed, characterized by its thick white coat and black nose. It is wearing a red collar with a small gold bell attached. The puppy's fur 
is short and dense, and its ears are slightly folded back. 

The background of the image is out of focus, but it seems to be a dark-colored wall or building, which contrasts with the light-colored stone surface where the puppy is sitting. The overall 
atmosphere of the image is one of cuteness and innocence, as the puppy appears to be a playful and curious creature.

@Acters

  • Using non-library models, unable to replicate. Can you provide the model source?

@Jumpkan

  • Unable to replicate so far
curl localhost:11434/api/chat -d '{"model":"qwen3:14b-q8_0","messages":[{"role":"user","content":'"$((echo write a long story using the following words: ; head -8000 /usr/share/dict/american-english) | jq -sR)"'}],"stream":false}'
time=2025-05-06T14:03:55.464Z level=WARN source=runner.go:131 msg="truncating input prompt" limit=16384 prompt=32144 keep=4 new=16384
time=2025-05-06T14:03:55.469Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=5510 prompt=16384 used=4 remaining=16380
time=2025-05-06T14:04:05.512Z level=DEBUG source=cache.go:240 msg="context limit hit - shifting" id=0 limit=16384 input=16384 keep=4 discard=8190
kv_self_update: applying K-shift
[GIN] 2025/05/06 - 14:04:17 | 200 | 21.715166224s |      172.18.0.1 | POST     "/api/chat"

@ccebelenski commented on GitHub (May 6, 2025):

Just adding my data point here too - happens with multiple models, both in OpenWebUI and Silly Tavern.

CUDA error an illegal memory access.txt: https://github.com/user-attachments/files/20070773/CUDA.error.an.illegal.memory.access.txt


@rick-github commented on GitHub (May 6, 2025):

> Happens with multiple models

Specifically?


@ccebelenski commented on GitHub (May 6, 2025):

Just the latest: (Current Ollama release tag, model is specifically: hf.co/DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF:Q8_0, client is Silly Tavern)


@ccebelenski commented on GitHub (May 6, 2025):

Reverted to 0.6.5 and the issue doesn't occur, so it was introduced sometime between 0.6.5 and 0.6.8.
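
If anyone else wants to help narrow it down, the Linux install script accepts a version override, so individual releases between 0.6.5 and 0.6.8 can be tested directly (a sketch; adjust the version as needed):

# Pin a specific Ollama release to bisect the regression
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.6.6 sh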


@Acters commented on GitHub (May 6, 2025):

> Just adding my data point here too - happens with multiple models, both in OpenWebUI and Silly Tavern.
> [REDACTED]

@ccebelenski sir, it is hard to read this chat when you have to scroll through long af log outputs on GitHub. The maintainers may end up ignoring this issue because of you. Please consider taking your long af text and putting it in a file upload, similar to how I did it.

On the other hand, reverting to 0.6.5 does not cause this memory error, but then the new Qwen models are unsupported.

EDIT:

One of the kind people over on the Discord surmised I might need a newer CUDA toolkit.

Even though I was on CUDA 12.8 on my system, it seems that you need the latest 12.9 version for ollama to work properly.

Can one of the maintainers make sure to set this as a requirement for newer ollama versions?
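
For anyone comparing setups, note that the driver and the toolkit report separate CUDA versions; a quick sanity check (nvcc is only present if the toolkit itself is installed):

nvidia-smi | grep "CUDA Version"   # highest CUDA runtime version the driver supports
nvcc --version | grep release      # installed CUDA toolkit version, if any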


@ccebelenski commented on GitHub (May 6, 2025):

(Thanks - I was in a hurry - I cut down my post size)

> Even though I was on CUDA 12.8 on my system, it seems that you need the latest 12.9 version for ollama to work properly.
>
> Can one of the maintainers make sure to set this as a requirement for newer ollama versions?

Wait, why would this be required? It was a minor update... AND not even available yet from most package installers.
Since it wasn't documented, I'm thinking this was an unintentional dependency and a bug still...


@Acters commented on GitHub (May 6, 2025):

> (Thanks - I was in a hurry - I cut down my post size)
>
> > Even though I was on CUDA 12.8 on my system, it seems that you need the latest 12.9 version for ollama to work properly.
> > Can one of the maintainers make sure to set this as a requirement for newer ollama versions?
>
> Wait, why would this be required? It was a minor update... AND not even available yet from most package installers. Since it wasn't documented, I'm thinking this was an unintentional dependency and a bug still...

Hmm, that might be true. However, ever since I installed the latest CUDA 12.9 version, the errors have all gone away. For anyone having similar issues, I hope the maintainers set a "recommended" version of CUDA, as supporting older versions is something they should decide on.


@rick-github commented on GitHub (May 6, 2025):

Upgrading the driver is not currently a requirement; I'm using 0.6.8 and a variety of models with CUDA 12.4 (driver 550.90.07). However, image processing with llama4:scout and 0.6.8 appears to be a problem area at the moment.


@MassEast commented on GitHub (May 7, 2025):

Thanks a lot, @rick-github. I can confirm that it works on 0.6.7. Hurray!


@ccebelenski commented on GitHub (May 7, 2025):

I can confirm that 0.6.7 is working (limited testing), so I think this problem was introduced with 0.6.8.


@apellaman commented on GitHub (May 11, 2025):

Same here, after the last update. Windows Defender started to spam "memory access blocked".


@rick-github commented on GitHub (Jun 3, 2025):

This appears to be fixed, or at least I can't repro in 0.9.0. Closing, but feel free to comment if you are still having problems after updating ollama.
