[GH-ISSUE #5950] Help to install the ollama files in my /home/myUser folder (install files, not the models). #65753

Closed
opened 2026-05-03 22:32:58 -05:00 by GiteaMirror · 8 comments

Originally created by @andrerclaudio on GitHub (Jul 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5950

Has anyone modified the install script to put Ollama somewhere other than the root-owned system folders? I want to change the Linux install script so it installs only for my user. Can anyone help?

I changed it as follows (asimov is the username):

```sh
...

BINDIR="/home/asimov/.local/bin"
mkdir -p "$BINDIR"
status "Installing ollama to $BINDIR..."
install -o "$(id -u asimov)" -g "$(id -g asimov)" -m 755 "$TEMP_DIR/ollama" "$BINDIR/ollama"

status 'The Ollama API is now available at 127.0.0.1:11434.'
status 'Install complete. Run "ollama" from the command line.'

configure_systemd() {
    if ! id ollama >/dev/null 2>&1; then
        status "Creating ollama user..."
        sudo useradd -r -s /bin/false -U -m -d /home/asimov ollama
    fi
    if getent group render >/dev/null 2>&1; then
        status "Adding ollama user to render group..."
        sudo usermod -a -G render ollama
    fi
    if getent group video >/dev/null 2>&1; then
        status "Adding ollama user to video group..."
        sudo usermod -a -G video ollama
    fi
    status "Adding current user to ollama group..."
    sudo usermod -a -G ollama "$(whoami)"
    status "Creating ollama systemd service..."
    sudo tee /etc/systemd/system/ollama.service > /dev/null <<EOF
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=$BINDIR/ollama serve
User=asimov
Group=asimov
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=default.target
EOF

...
```

Do you see anything else worth changing?
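One possible simplification, since the goal is a per-user install: a systemd *user* unit avoids sudo and the dedicated ollama user entirely. A minimal sketch, assuming systemd user sessions are available and the binary was installed to `~/.local/bin` as above:

```
# ~/.config/systemd/user/ollama.service  (user unit; no sudo required)
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
# %h expands to the user's home directory
ExecStart=%h/.local/bin/ollama serve
Restart=always
RestartSec=3

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user daemon-reload && systemctl --user enable --now ollama`; add `loginctl enable-linger "$USER"` if the service should keep running while you are logged out.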


@rick-github commented on GitHub (Jul 25, 2024):

You can just get the binary:

```sh
$ curl -o ollama -L https://ollama.com/download/ollama-linux-amd64
$ chmod +x ollama
```

Start the server:

```sh
$ nohup ./ollama serve > ollama.log 2>&1 &
```

Interact with it:

```
$ ./ollama run llama3 "hello"
pulling manifest
pulling 6a0746a1ec1a... 100% ▕██████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕██████████████████▏  12 KB
pulling 8ab4849b038c... 100% ▕██████████████████▏  254 B
pulling 577073ffcc6c... 100% ▕██████████████████▏  110 B
pulling 3f8eb4da87fa... 100% ▕██████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
```

Models are stored in `~/.ollama`; you can change that by setting `OLLAMA_MODELS` and restarting the server.
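A minimal sketch of that (the model directory below is just an illustrative path):

```sh
export OLLAMA_MODELS="$HOME/my-models"    # illustrative path; any writable dir works
pkill -f 'ollama serve'                   # stop the running server
nohup ./ollama serve > ollama.log 2>&1 &  # restart so the new location takes effect
```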


@andrerclaudio commented on GitHub (Jul 25, 2024):

Hi, @rick-github.
Thanks for your reply.
It is a very good idea.

Right now I have the `OLLAMA_MODELS` variable set in the ollama.service file, so the models are saved in the folder I want.

But assuming this is the first time I install it, using the method you mentioned, and I no longer have an ollama.service file, should the `OLLAMA_MODELS` variable be set in my `.bashrc` file?


@rick-github commented on GitHub (Jul 25, 2024):

Either in your `.bashrc`, or you can create a script to start the ollama server:

```sh
#!/bin/bash

export OLLAMA_MODELS=/some/random/dir
nohup ./ollama serve > ollama.log 2>&1 &
```
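If that is saved as, say, `start-ollama.sh` (hypothetical name), make it executable and run it from the directory containing the binary:

```sh
chmod +x start-ollama.sh   # hypothetical script name
./start-ollama.sh
```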

@andrerclaudio commented on GitHub (Jul 25, 2024):

Got it.
Thanks @rick-github for your help.


@APMonitor commented on GitHub (Feb 2, 2025):

The binary is now available from:

```bash
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
tar -xzf ollama-linux-amd64.tgz
```

I created an ollama folder. The tarball extracts to create `bin` and `lib` directories, with the `ollama` binary in `bin`. After `chmod +x ./bin/ollama` and creating a `models` folder in your `/home/{usr}/ollama` directory, start the ollama server:

```bash
#!/bin/bash

export OLLAMA_MODELS=/home/{usr}/ollama/models
nohup ./bin/ollama serve > ollama.log 2>&1 &
```

Run models with:

```bash
./bin/ollama run phi4
```
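A quick sanity check that the extracted binary runs (assumes you are in the directory where the tarball was unpacked):

```bash
./bin/ollama -v   # prints the version if the binary works
```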

Here are the [manual installation instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md).


@RaphaelMouravieff commented on GitHub (Mar 6, 2025):

This works for me:

```shell
nohup ./bin/ollama serve > ollama.log 2>&1 &
./bin/ollama run llama3.2
```

Prompts are very fast in the terminal (about 2 seconds).

But when using Python:

```python
from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3.2")
result = model.invoke(input="hello world")
print(result)
```

the code works, but it is not using the GPU (very slow, around 1 minute).

Can someone help with this?


@rick-github commented on GitHub (Mar 6, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging.
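Since the server in this thread was started with its output redirected to `ollama.log`, the quickest way to watch it while reproducing the slow request:

```bash
tail -f ollama.log   # watch the server log live
```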


@RaphaelMouravieff commented on GitHub (Mar 6, 2025):

> This works for me:
>
> ```shell
> nohup ./bin/ollama serve > ollama.log 2>&1 &
> ./bin/ollama run llama3.2
> ```
>
> Prompts are very fast in the terminal (about 2 seconds).
>
> But when using Python:
>
> ```python
> from langchain_ollama import OllamaLLM
>
> model = OllamaLLM(model="llama3.2")
> result = model.invoke(input="hello world")
> print(result)
> ```
>
> the code works, but it is not using the GPU (very slow, around 1 minute).
>
> Can someone help with this?

Disabling the proxy resolved the connectivity issues with Ollama: the proxy was intercepting local API requests (`127.0.0.1:11434`).
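For anyone hitting the same thing: rather than disabling the proxy entirely, most proxy-aware HTTP clients honor the standard `NO_PROXY` variable, so excluding the loopback address may be enough (a sketch; adjust to however your proxy is configured):

```bash
export NO_PROXY=localhost,127.0.0.1
export no_proxy=localhost,127.0.0.1   # some tools only read the lowercase form
```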
