[GH-ISSUE #10802] Auto-update Toggle #69153

Closed
opened 2026-05-04 17:17:51 -05:00 by GiteaMirror · 13 comments
Owner

Originally created by @novaexe on GitHub (May 21, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10802

In light of the broken https://github.com/ollama/ollama/releases/tag/v0.7.0 release's effect on VLMs, please reconsider #6024, #4498, and #3459.

GiteaMirror added the feature request label 2026-05-04 17:17:51 -05:00

@jmorganca commented on GitHub (May 21, 2025):

Thanks for the issue, @novaexe. May I ask what broke in 0.7.0 with VLMs? We'll make sure to take a look at that.

@rick-github commented on GitHub (May 21, 2025):

llama3.2-vision quality degraded (#10731). I haven't seen any problems with other VLMs.

@novaexe commented on GitHub (May 21, 2025):

@jmorganca see the most recent CPU issues pertaining to models with image-processing capabilities. In my own testing it will softlock my entire system on 0.7; this issue does not exist in 0.6.8.

@novaexe commented on GitHub (May 21, 2025):

I would assume all except one are part of the same issue.

![Image](https://github.com/user-attachments/assets/a9a251db-1f3c-44e5-94fe-ae665d0d688d)

@novaexe commented on GitHub (May 21, 2025):

But either way, mine is a feature request for pause/rollback functionality on Windows specifically, because now it will ping me for an update, take up 1 GB on my system, and I assume double-update once the issue is resolved.

@rick-github commented on GitHub (May 21, 2025):

- #10801 is a VRAM size issue
- #10800 looks like a context buffer size issue
- #10799 is a grammar buffer issue
- #10798 is a VRAM size issue
- #10797 is a VRAM size issue

None of these will cause a system to softlock.

@novaexe commented on GitHub (May 21, 2025):

@rick-github All I know is that on 3 separate front-ends, chat works as far as I can tell, but the moment I upload an image of any size and ask gemma3 or qwen to describe it, it pushes my CPU past what it's capable of, softlocking my system because of CPU over-utilization.
And again, this does not happen in 0.6.8.
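
For anyone trying to reproduce this, a minimal request of that shape can be sent straight to the local server's `/api/generate` endpoint with a base64-encoded image. The sketch below assumes a stock install listening on `localhost:11434`; the model name (`gemma3`) and image path (`test.jpg`) are placeholders, not anything from this thread.

```go
// repro.go: one image-description request against a local Ollama server.
// Model name and image path are placeholders; adjust them to your setup.
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Read and base64-encode the test image, as expected by the "images" field.
	raw, err := os.ReadFile("test.jpg")
	if err != nil {
		panic(err)
	}
	img := base64.StdEncoding.EncodeToString(raw)

	// Build the /api/generate request body.
	body, err := json.Marshal(map[string]any{
		"model":  "gemma3",
		"prompt": "Describe this image.",
		"images": []string{img},
		"stream": false,
	})
	if err != nil {
		panic(err)
	}

	// Send the request and print the raw JSON response.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```

Watching CPU usage while this runs on 0.7.0 and then on 0.6.8 should make the regression easy to compare.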

@rick-github commented on GitHub (May 21, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in diagnosis.

The VRAM issue is being attended to in #10773; that's the only thing I can think of that would remotely cause any issues.

@novaexe commented on GitHub (May 21, 2025):

Hopefully that does fix the issue and I'm wrong about the CPU over-utilization (hard to tell what's actually going on when your rig is softlocked).

But my feature request still stands; many have requested it, and it makes perfect sense to do so, since until that issue is resolved:

> now it will ping me for an update, take up 1 GB on my system, and I assume double-update once the issue is resolved

@rick-github commented on GitHub (May 21, 2025):

Sure. If you want your softlock issue investigated, open a new ticket and add logs.

@novaexe commented on GitHub (May 21, 2025):

Will do if that PR doesn't fix the issue, but I just spent the better part of 2 hours figuring out there's no feasible way to stop Ollama from auto-updating, hence my feature request xD

@YonTracks commented on GitHub (May 22, 2025):

This is very interesting (the bit about the ping and 1 GB). I have disabled the update for myself, but maybe it's now silently adding the memory; do you mean the notification adds the 1 GB? Anyway, for me, in lifecycle/lifecycle.go I just commented out the updater check:

```go
// StartBackgroundUpdaterChecker(ctx, t.UpdateAvailable)
```
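
If you'd rather not carry a deleted line in a local build, a slightly gentler variant is to gate the call behind an environment variable instead of commenting it out. This is only a sketch against the code above: `OLLAMA_NO_UPDATE` is a made-up variable name, not a setting Ollama actually reads, and `ctx`/`t` come from the existing lifecycle code.

```go
// Hypothetical gate in lifecycle.go (requires "os" in the file's imports).
// OLLAMA_NO_UPDATE is an invented name, not an existing Ollama setting.
if os.Getenv("OLLAMA_NO_UPDATE") == "" {
	StartBackgroundUpdaterChecker(ctx, t.UpdateAvailable)
}
```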

> But either way, mine is a feature request for pause/rollback functionality on Windows specifically, because now it will ping me for an update, take up 1 GB on my system, and I assume double-update once the issue is resolved.

@YonTracks commented on GitHub (May 22, 2025):

Ahh, I see, it will download and have it ready. OK, cheers.
