[GH-ISSUE #9959] Ollama's new engine. #6520

Open
opened 2026-04-12 18:07:37 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @amritahs-ibm on GitHub (Mar 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9959

@mchiang0610 replied to my PR (https://github.com/ollama/ollama/pull/9538) that:

We are no longer using llama.cpp for Ollama's new engine. For backwards CPU compatibility, we will continue to support GGML.

I saw this commit has the corresponding changes: https://github.com/ollama/ollama/commit/1fdb351c37a445fb2e8fdad19fff88f6d85b2912

I see that llama.cpp has been replaced by textProcessor. I would like to know more about this textProcessor and how it is a better replacement for llama.cpp, but I could not find any documentation or information about it.

Can anyone please help?
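For readers trying to picture what "replaced by textProcessor" could mean in practice: a minimal, hypothetical sketch of the kind of Go-native tokenizer interface a new engine might expose in place of llama.cpp's C++ tokenization. All names here (`TextProcessor`, `Encode`, `Decode`, `wordTokenizer`) are illustrative assumptions for this sketch, not Ollama's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// TextProcessor is a hypothetical interface sketch: encode text to
// token IDs and decode IDs back to text, all in pure Go.
type TextProcessor interface {
	Encode(s string) []int32
	Decode(ids []int32) string
}

// wordTokenizer is a toy implementation mapping whitespace-separated
// words to IDs from a fixed vocabulary (real engines use BPE or
// similar subword schemes).
type wordTokenizer struct {
	vocab map[string]int32
	words []string
}

func newWordTokenizer(words []string) *wordTokenizer {
	t := &wordTokenizer{vocab: make(map[string]int32), words: words}
	for i, w := range words {
		t.vocab[w] = int32(i)
	}
	return t
}

func (t *wordTokenizer) Encode(s string) []int32 {
	var ids []int32
	for _, w := range strings.Fields(s) {
		if id, ok := t.vocab[w]; ok {
			ids = append(ids, id)
		}
	}
	return ids
}

func (t *wordTokenizer) Decode(ids []int32) string {
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		out = append(out, t.words[id])
	}
	return strings.Join(out, " ")
}

func main() {
	var tp TextProcessor = newWordTokenizer([]string{"hello", "world"})
	ids := tp.Encode("hello world")
	fmt.Println(ids)            // [0 1]
	fmt.Println(tp.Decode(ids)) // hello world
}
```

The design point of such an interface is that model-specific tokenization can live in Go alongside the rest of the engine, rather than calling into llama.cpp.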

GiteaMirror added the question label 2026-04-12 18:07:37 -05:00

@triple-threat-dan commented on GitHub (Mar 26, 2025):

Adding a comment because I also would like to know the answer to this... Is there any documentation anywhere regarding this new engine??

@amritahs-ibm has anyone messaged you or answered your question in Discord by any chance? This seems like a big change to just... not use llama.cpp anymore.


@amritahs-ibm commented on GitHub (Mar 28, 2025):

@triple-threat-dan No, I have not received any reply yet.


@amritahs-ibm commented on GitHub (Mar 28, 2025):

@mchiang0610 Can you please provide an answer to our question?


@dnck commented on GitHub (Mar 28, 2025):

Will the textgen t/s still be slow on an A100gb?!


@ALutz273 commented on GitHub (Feb 11, 2026):

@mchiang0610 @jmorganca @dhiltgen
Can any of you comment on this?

Reference: github-starred/ollama#6520