[GH-ISSUE #8293] Ollama - Gentoo Linux support #5308

Closed
opened 2026-04-12 16:29:53 -05:00 by GiteaMirror · 7 comments

Originally created by @jaypeche on GitHub (Jan 3, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8293

Hello, world!

Gentoo Portage overlay for Ollama, binary release.
Works perfectly on an RTX 4060 / i7!

https://ftp.pingwho.org/pub/gentoo/ftp/overlay/pingwho-overlay/sci-ml/ollama-bin/

Enjoy!

GiteaMirror added the feature request label 2026-04-12 16:29:53 -05:00

@jaypeche commented on GitHub (Jan 3, 2025):

Also testing on a Raspberry Pi 4 (8 GB RAM, SATA SSD); it works correctly with light models like Phi-3 (3B).


@ProjectMoon commented on GitHub (Jan 4, 2025):

Would it make sense for this to be in GURU?


@pdevine commented on GitHub (Jan 8, 2025):

@jaypeche Thanks for the update! Is there anything we need to do on our end?


@jlpoolen commented on GitHub (Feb 21, 2025):

I come here as a long-term Gentoo enthusiast. I'm faced with the task of ingesting 139 PDFs representing cases, essentially opinions of an administrative court before the National Transportation Safety Board, involving low-flying aircraft. I have an occurrence where a Learjet buzzed downtown Salem, OR, at 175 feet, and I want to get some feel for the metrics of what is considered unsafe: 100'? 300'?

I run my Gentoo servers on Xen (Gentoo dom0, the VMs share the same kernel) and I've been looking at the https://docs.unstructured.io/welcome project, which seems pretty straightforward about creating JSON from the PDFs. The problem is that I do not know where to go with my extracted JSON or what to do with it. I've just been using ChatGPT extensively through a web interface. ChatGPT brought to my attention this project: https://github.com/imartinez/privateGPT, which has a dependency on an LLM, so I thought to search "Gentoo Ollama" and here I am at this ticket. I have a Ryzen 7950 with 128 GB RAM, but no GPU. I have a Debian VM and an Ubuntu VM which I use when I just want something up and running, or to test something new and exciting, and don't want to have to learn about it or fiddle with stuff to get it to work in Gentoo.

Your download is a binary, which gives me great pause. Would you have an ebuild? That would be preferable to me.

With the above background, do you see assembling unstructured.io to create JSON from my 139 PDFs and then using Ollama in conjunction with the privateGPT project? It looks like these three components would make up the entire toolchain. Speed is not that critical; I will probably have 50 or fewer questions, and I'm willing to wait a bit for responses. I do not want to have to grapple with a GPU on a Xen-based system in Gentoo... too many variables, and I'm wary of pass-through paradigms that really have not been used by a lot of people. I may end up using a commercial service, but I don't want to run into hidden thresholds and then be held hostage for upgrade fees.
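For the Ollama end of such a pipeline: Ollama exposes a local HTTP API, so once the text is extracted from the PDFs, queries can be sent in a few lines of Python. A minimal sketch, assuming an Ollama daemon on the default port 11434; the model name and helper names are illustrative, not part of any project mentioned in this thread:

```python
import json
import urllib.request


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(prompt: str, model: str = "phi3",
               host: str = "http://localhost:11434") -> str:
    """Send one prompt to a local Ollama server and return its answer."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Crude but workable for ~50 questions: prepend the extracted document
# text to each question. privateGPT automates the retrieval part instead.
# answer = ask_ollama("Context:\n" + extracted_text + "\n\nQuestion: ...")
```

On CPU only, a 128 GB Ryzen box can run small models like Phi-3 at usable speeds for this volume of questions.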

(Btw, @jaypeche, I was very much following the GenPi64 project building an ironclad Gentoo which I eventually deployed on the Pi Zero 2 W; I was a guinea pig for builds... one took about 8 days and nights?)


@jaypeche commented on GitHub (Oct 12, 2025):

Hi,

Thank you very much for your feedback!

Fixed ebuild link: https://ftp.pingwho.org/pub/gentoo/ftp/overlay/pingwho-overlay/sci-ml/ollama-bin
Official repository: https://github.com/gentoo-mirror/pingwho-overlay/tree/master/sci-ml/ollama-bin

~amd64 && ~arm64 keywords
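For Gentoo users wanting to try the package above, a minimal install sketch, assuming app-eselect/eselect-repository is installed and the overlay is registered in Gentoo's repository index under the name shown in the gentoo-mirror listing (verify the name before use):

```shell
# Enable and sync the overlay (name assumed from the gentoo-mirror listing)
eselect repository enable pingwho-overlay
emerge --sync pingwho-overlay

# The ebuild carries ~amd64/~arm64 keywords, so accept the testing keyword
echo "sci-ml/ollama-bin ~amd64" >> /etc/portage/package.accept_keywords/ollama
emerge --ask sci-ml/ollama-bin
```

On arm64 (e.g. the Raspberry Pi 4 mentioned earlier), substitute `~arm64` for `~amd64` in the keyword line.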


@pdevine commented on GitHub (Oct 28, 2025):

Hey @jaypeche, is it OK to close the issue as completed? Not sure if there is anything for us to do on our end.


@jaypeche commented on GitHub (Oct 28, 2025):

Hi @pdevine, you should close this issue. The latest ebuild is hosted on the pingwho overlay. Thx

Reference: github-starred/ollama#5308