[GH-ISSUE #12142] llama3.2 responses are gibberish - ggggggggggggg #70130

Closed
opened 2026-05-04 20:25:57 -05:00 by GiteaMirror · 13 comments

Originally created by @R1U2 on GitHub (Sep 1, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12142

What is the issue?

Deepseek r1 gives normal responses. I read on Reddit that Qwen 2.5 used to answer with ggggggg; the fix was to enable flash attention in the llama.cpp file. But now, with the new version where it is enabled by default, my llama3.2:latest and b models respond only with GGGGGGGGGGGGGGGGGGG. I will roll back my Ollama instance in Docker and see if this persists.
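
For reference, flash attention in recent Ollama builds can reportedly be toggled without editing llama.cpp. A minimal sketch for a Docker setup, assuming the OLLAMA_FLASH_ATTENTION environment variable still controls the feature in the affected release (container name and volume are illustrative):

```shell
# Hedged sketch: run Ollama in Docker with flash attention explicitly disabled.
# Assumes OLLAMA_FLASH_ATTENTION=0 turns the feature off in this release;
# container name "ollama" and the named volume are illustrative.
docker run -d --name ollama \
  -e OLLAMA_FLASH_ATTENTION=0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  ollama/ollama
```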

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-05-04 20:25:57 -05:00

@rick-github commented on GitHub (Sep 1, 2025):

Server logs (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

```console
$ ollama -v
ollama version is 0.11.8
$ ollama run llama3.2:latest hello
Hello! How can I assist you today?
```
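
Since the reporter runs Ollama in Docker, the server log can be captured with something like the following (the container name `ollama` is an assumption; substitute the name shown by `docker ps`):

```shell
# Hedged sketch: tail the Ollama server log from a Docker container and
# save a copy for attaching to the issue. "ollama" is an assumed name.
docker logs -f ollama 2>&1 | tee ollama-server.log
```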

@R1U2 commented on GitHub (Sep 1, 2025):

Hi Rick. I went back to ver 0.11.7 with the same results; see below.

```console
root@52b14c20edeb:/# ollama -v
ollama version is 0.11.7
root@52b14c20edeb:/# ollama run llama3.2:latest hello
pulling manifest
pulling dde5aa3fc5ff: 100% ▕█████████████████████████████████████████████▏ 2.0 GB
pulling 966de95ca8a6: 100% ▕█████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da: 100% ▕█████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9: 100% ▕█████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5: 100% ▕█████████████████████████████████████████████▏ 96 B
pulling 34bb5ab01051: 100% ▕█████████████████████████████████████████████▏ 561 B
verifying sha256 digest
writing manifest
success
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
```


@R1U2 commented on GitHub (Sep 1, 2025):

Logs:

```
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 3
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 8192
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: model type = 3B
print_info: model params = 3.21 B
print_info: general.name = Llama 3.2 3B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 128001 '<|end_of_text|>'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors: CPU_Mapped model buffer size = 308.23 MiB
load_tensors: CUDA0 model buffer size = 1918.35 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: kv_unified = false
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context: CUDA_Host output buffer size = 0.50 MiB
llama_kv_cache_unified: CUDA0 KV buffer size = 448.00 MiB
llama_kv_cache_unified: size = 448.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_context: CUDA0 compute buffer size = 256.50 MiB
llama_context: CUDA_Host compute buffer size = 18.01 MiB
llama_context: graph nodes = 986
llama_context: graph splits = 2
time=2025-09-01T10:40:31.753Z level=INFO source=server.go:1269 msg="llama runner started in 2.69 seconds"
time=2025-09-01T10:40:31.753Z level=INFO source=sched.go:473 msg="loaded runners" count=1
time=2025-09-01T10:40:31.754Z level=INFO source=server.go:1231 msg="waiting for llama runner to start responding"
time=2025-09-01T10:40:31.754Z level=INFO source=server.go:1269 msg="llama runner started in 2.69 seconds"
[GIN] 2025/09/01 - 10:40:33 | 200 | 5.534739311s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:40:35 | 200 | 1.832525445s | 192.168.20.42 | POST "/api/chat"
time=2025-09-01T10:40:36.081Z level=WARN source=runner.go:127 msg="truncating input prompt" limit=4096 prompt=117694 keep=5 new=4096
[GIN] 2025/09/01 - 10:40:46 | 200 | 11.170074164s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:40:53 | 200 | 2.740209922s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:40:55 | 200 | 1.595009048s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:01 | 200 | 4.271081005s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:07 | 200 | 2.952587715s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:10 | 200 | 2.553112842s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:13 | 200 | 2.378499046s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:19 | 200 | 2.527511817s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:35 | 200 | 934.138µs | 192.168.20.42 | GET "/api/tags"
[GIN] 2025/09/01 - 10:41:35 | 200 | 71.33µs | 192.168.20.42 | GET "/api/ps"
[GIN] 2025/09/01 - 10:41:48 | 200 | 2.135968434s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:50 | 200 | 2.37979372s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:53 | 200 | 2.571309747s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:55 | 200 | 2.340410806s | 192.168.20.42 | POST "/api/chat"
[GIN] 2025/09/01 - 10:41:59 | 200 | 974.043µs | 192.168.20.42 | GET "/api/tags"
[GIN] 2025/09/01 - 10:41:59 | 200 | 98.499µs | 192.168.20.42 | GET "/api/ps"
[GIN] 2025/09/01 - 10:42:00 | 200 | 852.439µs | 192.168.20.42 | GET "/api/tags"
[GIN] 2025/09/01 - 10:42:00 | 200 | 72.994µs | 192.168.20.42 | GET "/api/ps"
[GIN] 2025/09/01 - 10:43:22 | 200 | 77.73µs | 127.0.0.1 | GET "/api/version"
[GIN] 2025/09/01 - 10:43:45 | 200 | 58.882µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/01 - 10:43:45 | 404 | 572.784µs | 127.0.0.1 | POST "/api/show"
[GIN] 2025/09/01 - 10:43:46 | 200 | 874.040041ms | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/09/01 - 10:43:46 | 200 | 189.311072ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/09/01 - 10:43:48 | 200 | 2.03981665s | 127.0.0.1 | POST "/api/generate"
time=2025-09-01T10:48:53.520Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.137233099 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=63 runner.model=/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-09-01T10:48:53.769Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.386321548 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=63 runner.model=/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2025-09-01T10:48:54.020Z level=WARN source=sched.go:652 msg="gpu VRAM usage didn't recover within timeout" seconds=5.636993337 runner.size="3.1 GiB" runner.vram="3.1 GiB" runner.parallel=1 runner.pid=63 runner.model=/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
```


@monnster commented on GitHub (Sep 1, 2025):

Faced the same issue with another model; the problem was that the model files were corrupted.
On Linux, try `sha256sum /usr/share/ollama/.ollama/models/blobs/*`; if you notice that a sha256 sum doesn't match its file name, you have the same problem. In my case it seems the SSD is damaged.
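
A minimal sketch of that check, comparing each blob's actual digest against the digest embedded in its filename (the path assumes the default Linux install; the Docker logs above suggest `/ollama/blobs` for the reporter's setup):

```shell
# Hedged sketch: flag any Ollama blob whose content hash no longer matches
# its filename. Adjust the path for Docker volumes if needed.
for f in /usr/share/ollama/.ollama/models/blobs/sha256-*; do
  want="${f##*sha256-}"                    # expected digest, from the filename
  got="$(sha256sum "$f" | cut -d' ' -f1)"  # actual digest of the file contents
  [ "$want" = "$got" ] || echo "CORRUPT: $f"
done
```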


@R1U2 commented on GitHub (Sep 2, 2025):

@monnster Thanks for that. I deleted everything related to Ollama, then redid the docker compose file and pulled llama3.2b:latest. Tested it in OpenWebUI and the response was normal.
Will now pull the other models I need and test them one by one.

Thanks for the info.


@R1U2 commented on GitHub (Sep 3, 2025):

Ok, when testing it, I get normal responses the first few times, then it bombs out again with GGGGGGGGGGGGGGGGGGGGGGGG.

I then restart my Ollama docker instance, and after two to three rounds of questions, BOOM, more GGGGGGGGGGGGGGGGG's.

I have completely removed all Ollama docker containers and reinstalled them, and upped the power on my Jetson Orin Nano to supermax, but I still get GGGGGGGGGGGGGGGGGGGGGGGGGGGG's.

Swap file usage is normal and the temperature is about 50 degrees, so I am stumped as to why I am suddenly getting these errors from Ollama.

Is there anyone out there who has a solution for this, please?


@R1U2 commented on GitHub (Sep 3, 2025):

Well, that proves it: the issue is with the newer versions of Ollama. I rolled my Ollama docker back to version 0.10.1 and am having a conversation with my llama3.2:latest. No issues and very stable.

Now to download deepseek and qwen and test them.
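
For anyone reproducing this rollback, a minimal sketch that pins the Ollama Docker image to a specific release tag (container name, volume, and GPU flags are assumptions; Ollama publishes versioned image tags, and Jetson setups typically use `--runtime nvidia` rather than `--gpus=all`):

```shell
# Hedged sketch: replace the running container with one pinned to a
# known-good release. Names, volume, and flags are illustrative.
docker stop ollama && docker rm ollama
docker run -d --name ollama --runtime nvidia \
  -v ollama:/root/.ollama -p 11434:11434 \
  ollama/ollama:0.10.1
```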


@R1U2 commented on GitHub (Sep 4, 2025):

A day later and Ollama is still stable.


@CodeBradley commented on GitHub (Sep 8, 2025):

I ran a test query with my fresh llama3.1 instance and still get this. I never knew of this error until I just searched for it and came across this thread.

Could you start by asking me about instances when I procrastinate the most and then give me some suggestions to overcome it?

llama3.1:8b
GGGGGGGGGGGGGGGGGGGGGGGGGGGG


@R1U2 commented on GitHub (Sep 9, 2025):

@CodeBradley Brilliant answer. Now stop playing and start working. LOL.

I don't think it is the models; it is Ollama itself. Roll back your Ollama to ver 0.10.0, then ask llama3.1 that question again and it will give a proper answer.


@rick-github commented on GitHub (Sep 9, 2025):

It's a combination of ollama and the environment it's running in. For example, 0.11.10 with an Nvidia 4070:

```console
$ ollama run llama3.1:8b
>>> Could you start by asking me about instances when I procrastinate the most and then give me some suggestions to overcome it?
I'd love to help you understand your procrastination patterns.

To get started, can you think of a specific situation or task that tends to trigger your procrastination? For example:
...
```

0.11.10 and AMD 8060S

```console
$ ollama run llama3.1:8b
>>> Could you start by asking me about instances when I procrastinate the most and then give me some suggestions to overcome it?
Before we dive into strategies, can you tell me:

**When do you tend to procrastinate the most?**

Is it:
...
```

This is why there's no fix yet: something in your particular configuration is causing this, but it's not reproducible. If you would like to help with the investigation, R1U2 opened a new ticket at #12209; add your details and server log (https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).


@thunderfm commented on GitHub (Sep 9, 2025):

I'm getting this too on every version above 0.11.4.

Hardware is a Jetson Orin Nano Super, and Ollama is installed on the base system; OpenWebUI is also installed in a Docker container. It doesn't happen with the first prompt, only after 1-2 responses. I've tried adjusting the size of the context window, but it makes no difference.
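
For reference, a minimal sketch of two common ways to adjust the context window (the 8192 value is illustrative; the environment variable is supported in recent Ollama releases):

```shell
# Hedged sketch: two common ways to change the context window (8192 is illustrative).
# 1. Server-wide, via environment variable (recent Ollama releases):
OLLAMA_CONTEXT_LENGTH=8192 ollama serve
# 2. Per-session, inside the interactive REPL:
#      ollama run llama3.2:latest
#      >>> /set parameter num_ctx 8192
```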


@rick-github commented on GitHub (Sep 9, 2025):

This ticket is closed; add comments to #12209.

Reference: github-starred/ollama#70130