[GH-ISSUE #1119] Hi @jjsarf you can use the OLLAMA_HOST environment variable in combination with ollama serve #47076

Closed
opened 2026-04-28 02:58:59 -05:00 by GiteaMirror · 2 comments

Originally created by @jjsarf on GitHub (Nov 14, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1119

That did not work fully.
super@mario:/var/www/html$ curl -X POST http://ai:8080/api/generate -d '{
"model": "llama2",
"prompt": "How are you?",
"stream": true,
"options": {
"top_k": 20,
"top_p": 0.9,
"typical_p": 0.7,
"temperature": 0.8,
"repeat_penalty": 1.2,
"presence_penalty": 1.5,
"frequency_penalty": 1.0
}
}'
curl: (7) Failed to connect to ai port 8080 after 230 ms: Connection refused

I also tried with the IPs
C:\usb\laragon-portable\se>curl -X POST http://10.0.0.252:8080/api/generate -d "{"model": "llama2", "prompt": "Why is the sky blue?", "stream": true, "options": { "top_k": 20, "top_p": 0.9, "typical_p": 0.7, "temperature": 0.8, "repeat_penalty": 1.2, "presence_penalty": 1.5, "frequency_penalty": 1.0 }}"
curl: (7) Failed to connect to 10.0.0.252 port 8080: Connection refused
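Aside from the connection being refused, the Windows command above has a second problem: in cmd.exe, unescaped double quotes inside a double-quoted `-d` argument break the JSON body apart. A small illustrative Python sketch (not part of the original report) of what a correctly escaped body looks like:

```python
import json

# The request body from the transcript above.
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": True,
    "options": {"top_k": 20, "top_p": 0.9, "typical_p": 0.7,
                "temperature": 0.8, "repeat_penalty": 1.2,
                "presence_penalty": 1.5, "frequency_penalty": 1.0},
}

body = json.dumps(payload)
# On cmd.exe, inner double quotes must be backslash-escaped when the whole
# -d argument is wrapped in double quotes:
cmd_arg = '"' + body.replace('"', '\\"') + '"'
print(cmd_arg)
```

On a Unix shell the single-quoted form used in the Linux transcripts avoids the escaping entirely.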

===
Hi @jjsarf you can use the `OLLAMA_HOST` environment variable in combination with `ollama serve`.

E.g. to expose Ollama externally on port 8080 you can use:

```
OLLAMA_HOST=0.0.0.0:8080 ollama serve
```

Feel free to post another issue! Will close this one for now

Originally posted by @jmorganca in https://github.com/jmorganca/ollama/issues/1117#issuecomment-1809465610


@jmorganca commented on GitHub (Nov 14, 2023):

Hi @jjsarf, sorry it's still not working. Do you know if port 8080 is open on your machine? Does logging into that machine and running `curl http://localhost:8080/api/generate ...` work? Closing for now since I believe this is a network issue, but do let me know if you're still encountering it!
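The "is the port open?" check suggested here can be scripted. A minimal sketch using Python's standard library — not from the original thread, and the host/port values are only examples:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example (illustrative): check the machine running `ollama serve`.
# print(port_open("localhost", 8080))
```

A `Connection refused` from curl corresponds to `port_open` returning False: nothing is listening on that address/port combination.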


@jjsarf commented on GitHub (Nov 14, 2023):

super@mario:~$ curl -X POST http://172.26.96.1:8080/api/generate -d '{"model": "llama2","prompt": "Why is the sky blue?","stream": true,"options": {"top_k": 20,"top_p": 0.9,"typical_p": 0.7,"temperature": 0.8,"repeat_penalty": 1.2,"presence_penalty": 1.5,"frequency_penalty": 1.0 }}'curl -X POST http://10.0.0.252:8080/api/generate -d '{"model": "llama2","prompt" : "Why is the sky blue?","stream": true,"options": {"top_k": 20,"top_p": 0.9,"typical_p": 0.7,"temperature": 0.8,"repeat_penalty": 1.2,"presence_penalty": 1.5,"frequency_penalty": 1.0 }}'
curl: (7) Failed to connect to 172.26.96.1 port 8080 after 0 ms: Connection refused
curl: (7) Failed to connect to 10.0.0.252 port 8080 after 0 ms: Connection refused
super@mario:~$ curl -X POST http://10.0.0.252:8080/api/generate -d '{"model": "llama2","prompt": "Why is the sky blue?","stream": true,"options": {"top_k": 20,"top_p": 0.9,"typical_p": 0.7,"temperature": 0.8,"repeat_penalty": 1.2,"presence_penalty": 1.5,"frequency_penalty": 1.0 }}'curl -X POST http://10.0.0.252:8080/api/generate -d '{"model": "llama2","prompt": "Why is the sky blue?","stream": true,"options": {"top_k": 20,"top_p": 0.9,"typical_p": 0.7,"temperature": 0.8,"repeat_penalty": 1.2,"presence_penalty": 1.5,"frequency_penalty": 1.0 }}'
curl: (7) Failed to connect to 10.0.0.252 port 8080 after 0 ms: Connection refused
curl: (7) Failed to connect to 10.0.0.252 port 8080 after 0 ms: Connection refused
super@mario:~$ curl -X POST http://ai:8080/api/generate -d '{"model": "llama2","prompt": "Why is the sky blue?","stream": true,"options": {"top_k": 20,"top_p": 0.9,"typical_p": 0.7,"temperature": 0.8,"repeat_penalty": 1.2,"presence_penalty": 1.5,"frequency_penalty": 1.0 }}'
{"timestamp":1700000009,"level":"INFO","function":"log_server_request","line":1233,"message":"request","remote_addr":"127.0.0.1","remote_port":50324,"status":200,"method":"HEAD","path":"/","params":{}}
{"model":"llama2","created_at":"2023-11-14T22:13:29.558889954Z","response":"\n","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.606870611Z","response":"The","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.669844293Z","response":" sky","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.71778644Z","response":" appears","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.781741821Z","response":" blue","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.829792648Z","response":" because","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.876772284Z","response":" of","done":false}
{"model":"llama2","created_at":"2023-11-14T22:13:29.939741767Z","response":" a","done":false}

So it works with "localhost" and with "ai", which is the hostname of my computer. It does not work with the IPs, as shown above: one is the WSL2 IP, the other the PC's own IP. Is there a config file or something else I could try?
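The symptom described here (connections accepted via `localhost` but refused via the machine's other IPs) is exactly what you would see if the server were listening on the loopback interface only, which is why the earlier suggestion sets `OLLAMA_HOST=0.0.0.0:8080`; under WSL2, reaching the Linux VM from outside may additionally require port forwarding on the Windows side. A self-contained Python sketch of the bind-address difference (illustrative, not from the thread):

```python
import socket

def reachable(bind_addr: str, probe_addr: str) -> bool:
    """Bind a throwaway TCP listener on bind_addr, then try to connect
    to it via probe_addr; return True if the connection is accepted."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))   # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    try:
        with socket.create_connection((probe_addr, port), timeout=1.0):
            return True
    except OSError:            # e.g. connection refused
        return False
    finally:
        srv.close()

# A listener bound to 0.0.0.0 is reachable on every local interface;
# one bound to 127.0.0.1 is reachable via loopback only.
```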
