[GH-ISSUE #8235] Requests begin to all fail after several independent prompts #31019

Closed
opened 2026-04-22 11:06:44 -05:00 by GiteaMirror · 5 comments

Originally created by @steveseguin on GitHub (Dec 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8235

What is the issue?

I've been having an issue with Ollama where the output is either gibberish or just a series of @@@@@ characters. I don't recall it being this way a few weeks ago, but I've found a workaround. (The gibberish seems to happen mostly with stream: true, and the @@@ mostly with stream: false.)

I started having this issue with Llama3.1 lorablated some weeks ago, and now that I'm trying to use qwen2.5-coder:32b I'm running into it there as well.

An example of the issue: after several successful requests, one suddenly fails, and then every request after it fails too.

![image](https://github.com/user-attachments/assets/1fca8c4d-ec17-4ae1-b85c-a349feffb48e)
![image](https://github.com/user-attachments/assets/d2d836e0-e07e-463f-8aec-3a3f1b158f99)

**The solution I discovered has been to just set keep_alive to 0.** I suspect there's some sort of context caching going on and I'm hitting a memory limit. My Titan RTX (24GB) just squeaks by with these models. Windows 11, 96GB RAM, most recent Ollama.

My requests are pretty short; just a couple of sentences in most cases.

Works:

```
const response = await fetch(`${endpoint}/api/generate`, {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        model: model,
        prompt: prompt,
        stream: false,
        keep_alive: 0  // <===== note the keep_alive: 0
    })
});
```

Fails after a few prompts:

```
const response = await fetch(`${endpoint}/api/generate`, {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        model: model,
        prompt: prompt,
        stream: false
    })
});
```

If this is a caching issue, it would be pretty nice to have the option of whether to cache the conversation or not, and, if I am caching it, perhaps a way to refer to that specific conversation. More fine-grained control via the API that way.
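As I understand the API, the model can also be unloaded on demand by sending a generate request with no prompt and keep_alive: 0, rather than putting keep_alive: 0 on every request. A minimal sketch, reusing the `endpoint` and `model` variables from the snippets above:

```js
// Sketch: explicitly unload the model from memory on demand.
// Reuses the `endpoint` and `model` variables from the snippets above.
async function unloadModel(endpoint, model) {
    const res = await fetch(`${endpoint}/api/generate`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        // No prompt: the request only sets keep_alive, which asks Ollama to
        // drop the model (and any cached state) from VRAM right away.
        body: JSON.stringify({ model: model, keep_alive: 0 })
    });
    return res.json();
}
```

That keeps normal requests on the default keep_alive, and the unload only happens when called explicitly.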

![image](https://github.com/user-attachments/assets/6f12762c-fa08-4b09-af6e-84a3fc255527)

[server log - Not working](https://github.com/user-attachments/files/18241593/not_working_default.log)

[server log - working](https://github.com/user-attachments/files/18241545/working_timeout_0.log)

![image](https://github.com/user-attachments/assets/16427aa1-22ad-49e5-8a4c-5f6736d3d25c)

![image](https://github.com/user-attachments/assets/e6e925e2-f508-4674-8c67-8d5290ef0fb9)

This might not be a bug, but just me being stupid. Still, keep_alive = 0 works, however crude and slow it is.

Merry Christmas, all. 🎄

OS

Windows

GPU

Nvidia

CPU

Intel, AMD

Ollama version

0.5.4

GiteaMirror added the bug label 2026-04-22 11:06:44 -05:00

@rick-github commented on GitHub (Dec 26, 2024):

I replayed the queries on a Linux system with 16G VRAM and was unable to replicate. I will try to get an environment more like yours and try again.


@steveseguin commented on GitHub (Dec 26, 2024):

Thank you kindly.

I felt obliged to report the issue. If you can't easily replicate it, or if it's a niche issue, I'm content to just use my current workaround.

If I can provide more info though, please let me know.

This is a wonderful project.


@steveseguin commented on GitHub (Jan 27, 2025):

Just an FYI, this issue still persists, and in some ways it's made Ollama unusable for me.

Regardless of the model I'm using, after some number of tokens have been processed the output fails. It only resolves itself when I either restart Ollama or clear the model from memory and reload it.

32B deepseek:

![Image](https://github.com/user-attachments/assets/8674bafb-214b-49bc-b815-859147fb44bd)

I've fully uninstalled Ollama, deleted every trace of it from the cache and environment variables, and removed all models. I've also updated the Nvidia drivers to the latest stable release. Nothing has worked.

Suspecting that this is an issue with my GPU, an Nvidia Titan RTX, I've run a memory integrity test and it does not pass:

![Image](https://github.com/user-attachments/assets/e2c4a31b-652a-4bdc-bbf2-8f886d56fe7b)

I'll get a new GPU and will report back if it resolves my issues, which I assume it will.

I should note that I found it a bit challenging to find a program online that can run a GPU memory integrity test. Given that Ollama could detect issues with the output response (e.g. repeated @@@@), perhaps including a memory integrity test option, and mentioning to the user that they can run one, would be helpful.
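Until something like that exists, a rough client-side check is possible: flag responses that are dominated by a single repeated character and treat that as the cue to unload and reload the model. A minimal sketch; the `looksCorrupted` helper and the 0.5 threshold are purely illustrative assumptions, not anything Ollama provides:

```js
// Illustrative heuristic (not an Ollama feature): flag degenerate output
// such as "@@@@@..." where one character dominates the whole response.
function looksCorrupted(text) {
    if (!text) return false;
    const counts = {};
    for (const ch of text) {
        counts[ch] = (counts[ch] || 0) + 1;
    }
    const mostCommon = Math.max(...Object.values(counts));
    // Treat the response as corrupted if a single character makes up more
    // than half of it; 0.5 is an arbitrary threshold for illustration.
    return mostCommon / text.length > 0.5;
}
```

If it trips, the client could call something like the `unloadModel` sketch above and retry the request once.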

I love the project. Cheers
-steve


@rick-github commented on GitHub (Jan 27, 2025):

Thanks Steve. Just to verify, the GPU tester you used was this one: https://www.programming4beginners.com/gpumemtest ?


@steveseguin commented on GitHub (Jan 27, 2025):

Yes, specifically:

https://www.programming4beginners.com/files/gpumemtest/install-GpuMemTest-1.2.exe
https://www.virustotal.com/gui/file/e1c7db05dec5f2854e5811362a781adb69bd96182b4de770dc21e946ccf5030d

Of the few I tried, this one actually ran and was low-effort to get working. Zero idea if it's accurate though.


Reference: github-starred/ollama#31019