[PR #10540] ollamarunner: Re-enable worst case graph preallocation. #13270

Closed
opened 2026-04-13 00:22:33 -05:00 by GiteaMirror · 0 comments
Owner

Original Pull Request: https://github.com/ollama/ollama/pull/10540

State: closed
Merged: Yes


Worst case graph preallocation was disabled by commit a27462b ("ollamarunner: Temporarily disable worst case graph preallocation") because it caused crashes with large batches when running without a GPU.

This backports upstream llama.cpp commit f057808 ("ggml: Don't assert fail when tensor data changes (#13222)"), which fixes the underlying bug and allows reverting the previous workaround.

GiteaMirror added the pull-request label 2026-04-13 00:22:33 -05:00

Reference: github-starred/ollama#13270