[GH-ISSUE #3697] No llama.cpp acknowledgement #48790

Closed
opened 2026-04-28 09:16:20 -05:00 by GiteaMirror · 3 comments

Originally created by @survirtual on GitHub (Apr 17, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3697

What is the issue?

This project is heavily dependent on [llama.cpp](https://github.com/ggerganov/llama.cpp), as seen [in this search](https://github.com/search?q=repo%3Aollama%2Follama%20llama.cpp&type=code), but there is no mention of that in the readme. This creates unnecessary conflict and distaste toward the project that I keep seeing in developer communities. It also prevents people from easily understanding how the project works at a lower level.

What did you expect to see?

Add an acknowledgement of llama.cpp. An example of what this could look like can be found in libraries such as Rust's [axum](https://github.com/tokio-rs/axum), which is built on top of [hyper](https://github.com/hyperium/hyper) and mentions hyper several times:

> ## Performance
>
> `axum` is a relatively thin layer on top of [`hyper`] and adds very little
> overhead. So `axum`'s performance is comparable to [`hyper`]. You can find
> benchmarks [here](https://github.com/programatik29/rust-web-benchmarks) and
> [here](https://web-frameworks-benchmark.netlify.app/result?l=rust).

And at the bottom, there are acknowledgement links:

> ...
> [`tower`]: https://crates.io/crates/tower
> [`hyper`]: https://crates.io/crates/hyper
> [`tower-http`]: https://crates.io/crates/tower-http
> ...

Something similar could be done for Ollama; it would completely alleviate the perception issue while also helping users and developers gain a deeper understanding of the package.
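For example, a minimal sketch of what such a section could look like, using the same reference-link style as axum (the heading, wording, and placement here are purely illustrative suggestions, not the project's actual readme text):

```markdown
## Acknowledgements

Ollama runs models using [llama.cpp], so much of the low-level model
loading and inference behaviour is documented in that project.

[llama.cpp]: https://github.com/ggerganov/llama.cpp
```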

Steps to reproduce

Go to https://github.com/ollama/ollama, search the readme for llama.cpp, and feel a tiny sting of disappointment that such a good project doesn't have any acknowledgements :(

Are there any recent changes that introduced the issue?

No response

OS

No response

Architecture

No response

Platform

No response

Ollama version

No response

GPU

No response

GPU info

No response

CPU

No response

Other software

No response

GiteaMirror added the bug label 2026-04-28 09:16:20 -05:00

@Zambito1 commented on GitHub (Apr 17, 2024):

This is not a matter of taste. The llama.cpp license (like that of any software not released under a public-domain-equivalent license) requires attribution. How is ollama complying with this?


@survirtual commented on GitHub (Apr 17, 2024):

I made a [pull request](https://github.com/ollama/ollama/pull/3700) with simple edits to the readme which may resolve the issue. Feel free to modify it as desired, but I would recommend a front-and-center, subtle acknowledgement (as illustrated) as a minimum.

Including acknowledgements may not seem like a big deal, but it really goes a long way in fostering a healthier developer community.


@mchiang0610 commented on GitHub (Apr 17, 2024):

Thank you! We definitely should do a better job. I added it to the readme.

Thank you for the amazing work.

[9755cf9173](https://github.com/ollama/ollama/commit/9755cf9173152047030b6d080c29c829bb050a15)


Reference: github-starred/ollama#48790