[GH-ISSUE #1783] Model not found when Ollama runs on a different device in the same network #1019

Closed
opened 2026-04-12 10:44:33 -05:00 by GiteaMirror · 8 comments

Originally created by @AdvancedAssistiveTech on GitHub (Jan 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1783

I'm hosting Ollama on an Ubuntu server and then trying to connect to the instance via chatbox on another (Arch) device.

I've run both `ollama run llama2` and `ollama pull llama2`. I then ran `OLLAMA_HOST=0.0.0.0:8070 ollama serve` in a separate shell as described in the setup procedure, but I'm unable to chat via chatbox. In the SSH session where I run `ollama run llama2`, the chat works perfectly: I'm able to have a fluent conversation. When running chatbox, however, I get an `API Error: Status Code 404, {"error":"model 'llama2' not found, try pulling it first"}`. If I use curl for the generate or show endpoints on the Arch device, I get the same error. If I run `ollama list` on the Ubuntu machine, however, the llama2 entry is listed. Is there a network configuration step or something similar that I missed? Any help is appreciated.
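Roughly, the setup looks like this (the Ubuntu server's address is a placeholder):

```bash
# On the Ubuntu server (over SSH): model is pulled and chat works locally
ollama pull llama2
ollama run llama2

# Second shell on the server: expose the API on the LAN, port 8070
OLLAMA_HOST=0.0.0.0:8070 ollama serve

# On the Arch client: the server is reachable, but the API returns the 404 above
curl http://<ubuntu-ip>:8070/api/generate -d '{"model": "llama2", "prompt": "hello"}'
```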


@a496046961 commented on GitHub (Jan 4, 2024):

View this code.

```go
func RunServer(cmd *cobra.Command, _ []string) error {
	host, port, err := net.SplitHostPort(os.Getenv("OLLAMA_HOST"))
	if err != nil {
		host, port = "127.0.0.1", "11434"
		if ip := net.ParseIP(strings.Trim(os.Getenv("OLLAMA_HOST"), "[]")); ip != nil {
			host = ip.String()
		}
	}

	if err := initializeKeypair(); err != nil {
		return err
	}

	ln, err := net.Listen("tcp", net.JoinHostPort(host, port))
	if err != nil {
		return err
	}

	return server.Serve(ln)
}
```


@a496046961 commented on GitHub (Jan 4, 2024):

It can be achieved with an nginx reverse proxy.
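A minimal sketch of what that could look like, assuming nginx runs on the Ubuntu box and Ollama listens on its default 127.0.0.1:11434 (the port and config path are illustrative):

```bash
# Illustrative nginx reverse proxy: expose the local Ollama API on port 8070
sudo tee /etc/nginx/conf.d/ollama.conf > /dev/null <<'EOF'
server {
    listen 8070;
    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host localhost:11434;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
```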


@AdvancedAssistiveTech commented on GitHub (Jan 4, 2024):

> View this code.
>
> ```go
> func RunServer(cmd *cobra.Command, _ []string) error {
> 	host, port, err := net.SplitHostPort(os.Getenv("OLLAMA_HOST"))
> 	if err != nil {
> 		host, port = "127.0.0.1", "11434"
> 		if ip := net.ParseIP(strings.Trim(os.Getenv("OLLAMA_HOST"), "[]")); ip != nil {
> 			host = ip.String()
> 		}
> 	}
>
> 	if err := initializeKeypair(); err != nil {
> 		return err
> 	}
>
> 	ln, err := net.Listen("tcp", net.JoinHostPort(host, port))
> 	if err != nil {
> 		return err
> 	}
>
> 	return server.Serve(ln)
> }
> ```

What am I to do with this code? Also, configuring a reverse proxy feels like overkill when both devices are on the same network. I'm able to access `http://<Ubuntu ip>:8070` on the Arch machine and see that Ollama is running, so it's not as if the Arch device is unable to reach the Ubuntu server.


@technovangelist commented on GitHub (Jan 4, 2024):

Hi @AdvancedAssistiveTech, I think the main problem comes from running `ollama serve` as two different users. When you are on the Ubuntu box, you are probably using the service that runs as the `ollama` user; the models for that user are all under `/usr/share/ollama/.ollama`. You have then started a separate server as your own user, and that user's models live under `~/.ollama`. To run the service, refer to the FAQ, which goes over how to set environment variables: https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-server-environment-variables-on-linux
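For the standard Linux install, the steps in that FAQ boil down to roughly the following sketch (the override text is an example; see the FAQ for the current wording):

```bash
# Add an override for the ollama systemd service
sudo systemctl edit ollama.service
# In the editor, under [Service], add for example:
#   Environment="OLLAMA_HOST=0.0.0.0"
# Then apply the change
sudo systemctl daemon-reload
sudo systemctl restart ollama
```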

Let us know if anything isn't clear and we can help further.

You referred to some setup docs that directed you to run `ollama serve` as your user. Can you point that out to me so I can correct it?

Thanks so much for being a great part of this awesome community


@AdvancedAssistiveTech commented on GitHub (Jan 4, 2024):

Hi @technovangelist
Thanks for your support. Configuring the environment worked perfectly. The setup doc I was referring to was the "Running local builds" heading in README.md, though that may just have been me misunderstanding when those instructions were appropriate to follow. Can I close this thread?


@technovangelist commented on GitHub (Jan 4, 2024):

That’s great to hear that everything worked. I'll go ahead and close this. Thanks so much!


@asmith26 commented on GitHub (Jan 18, 2024):

In case it's helpful, I fixed this by running, for example (following https://github.com/jmorganca/ollama/issues/1783#issuecomment-1877276553):

```bash
curl http://<IP ADDRESS>:11434/api/pull -d '{
  "name": "mistral"
}'
```
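To confirm the model is then visible to that instance, listing the models over the same API works as well (again with the server address as a placeholder):

```bash
# Should include "mistral" once the pull has finished
curl http://<IP ADDRESS>:11434/api/tags
```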

@amaigo commented on GitHub (May 10, 2024):

> Hi @AdvancedAssistiveTech I think the main problem comes from running ollama serve as two different users. When you are on the ubuntu box, you are probably using the service that is running as the ollama user. The models for that user are all under /usr/share/ollama/.ollama Then you have run a separate service as your user. Those models will be under ~/.ollama. To run the service, refer to the FAQ that goes over how to set environment variables https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-server-environment-variables-on-linux
>
> Let us know if anything isn't clear and we can help further.
>
> You referred to some setup docs that directed you to run ollama serve as your user. Can you point that out to me so I can correct it?
>
> Thanks so much for being a great part of this awesome community

Yes. `ollama serve` runs as your user, but the `.ollama/models` directory belongs to the `ollama` user by default, so your `ollama serve` instance can't access any of the models in `.ollama/models`. The solution looks like this (the address, port, paths, and model name are examples):

```bash
OLLAMA_MODELS=/home/user/happy_models OLLAMA_HOST=1.1.1.1:11111 ollama serve
curl 1.1.1.1:11111/api/pull -d '{"model": "goodmodel"}'
```

(quote from ckite2 on CSDN)
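A quick way to see which user owns each of the default model directories mentioned above before overriding `OLLAMA_MODELS`:

```bash
# Check ownership of the two default model locations
ls -ld /usr/share/ollama/.ollama/models ~/.ollama/models
```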

Reference: github-starred/ollama#1019