[GH-ISSUE #1437] Update Script and Documentation for non-systemd Linux systems #26529

Closed
opened 2026-04-22 02:51:46 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @NikeshKhatiwada on GitHub (Dec 8, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1437

Originally assigned to: @dhiltgen on GitHub.

I tried the default installation script on Alpine Linux (WSL) and, though it apparently installed, I couldn't use the `ollama` command. Also, the manual install guide needs alternative steps for non-systemd systems.

Author
Owner

@jart commented on GitHub (Dec 10, 2023):

The ollama binary needs glibc. This makes me sad, because I wanted to try it!

Author
Owner

@NikeshKhatiwada commented on GitHub (Dec 12, 2023):

I was able to use the `ollama serve` command, which seemingly works, after manually installing Ollama following the beginning of the manual install guide, performing basic configuration on the Alpine system, and then installing the gcompat compatibility layer. When trying `ollama run {model}`, it is able to download models but unable to run them, showing the following error:
`Error: llama runner process has terminated`

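For reference, the Alpine setup described in the comment above can be sketched roughly as follows. This is a hedged reconstruction, not an official procedure: the package names come from the standard Alpine repositories, and the download URL follows the pattern used by the Linux manual install guide of that era and is an assumption here.

```shell
# On Alpine Linux (run as root, or prefix with doas/sudo).
# gcompat provides a glibc compatibility layer so the
# glibc-linked ollama binary can find a usable loader.
apk add gcompat curl

# Manual install per the Linux guide: fetch the binary and make
# it executable (URL assumed from the manual install docs).
curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
chmod +x /usr/bin/ollama

# Alpine has no systemd, so start the server in the foreground
# (or wrap this in an OpenRC service script instead).
ollama serve
```

As the follow-up comments note, this only gets `ollama serve` running; the llama runner subprocess may still fail on musl-based systems.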
Author
Owner

@dhiltgen commented on GitHub (Jan 27, 2024):

@NikeshKhatiwada unfortunately I think you're going to be fighting an uphill battle to try to get all our dependencies working without glibc. I would encourage you to try to build the source tree from main, since we've revamped a lot of the build process since you tried. Maybe you can get it to build for CPU only mode.

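A minimal sketch of the build-from-source route suggested above, assuming the build process documented around that time (cmake plus `go generate` and `go build`); exact dependencies and steps may have changed since, so treat this as illustrative only:

```shell
# Build dependencies on Alpine (assumption: cmake, a C/C++
# toolchain, and a sufficiently recent Go as required by the tree).
apk add git cmake go gcc g++ musl-dev

git clone https://github.com/ollama/ollama.git
cd ollama

# Generate the embedded llama.cpp artifacts; with no CUDA/ROCm
# toolchains installed, this should yield a CPU-only build.
go generate ./...

# Compile the ollama binary itself against musl.
go build .
```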
Author
Owner

@dhiltgen commented on GitHub (Mar 12, 2024):

Our pre-compiled binary won't be viable on a non-glibc system. I think we'd be open to community PRs against the build scripts to enable building from source on a non-glibc system, if they're not complicated, but I'm going to close this ticket for now as I don't think it's something we plan to work on at this time.

Reference: github-starred/ollama#26529