[GH-ISSUE #9127] How can I connect to Ollama's server? #5937

Closed
opened 2026-04-12 17:16:41 -05:00 by GiteaMirror · 18 comments

Originally created by @Neabigmo on GitHub (Feb 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9127

When I run this in PowerShell:

![Image](https://github.com/user-attachments/assets/8a04e782-f32d-4baa-bd3b-4c56de36cbc9)

It really confuses me. /(ㄒoㄒ)/~~


@LeisureLinux commented on GitHub (Feb 15, 2025):

  • Check your process list.
  • Check whether the Ollama service is running.
  • Use netstat to check which port is open (see the sketch below).
  • Check whether another terminal is already running ollama. How do you start ollama?
  • Try setx OLLAMA_HOST 127.0.0.1:11434
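
For reference, a minimal Python sketch of the port and endpoint checks suggested above. It assumes the default address 127.0.0.1:11434 and deliberately bypasses any proxy variables in the environment:

# Connectivity check for the default Ollama endpoint (assumed 127.0.0.1:11434).
import socket
import urllib.request

HOST, PORT = "127.0.0.1", 11434

# 1. Is anything listening on the Ollama port?
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    port_open = s.connect_ex((HOST, PORT)) == 0
print("port open:", port_open)

# 2. Does it answer like Ollama? The root endpoint replies "Ollama is running".
if port_open:
    no_proxy_opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    print(no_proxy_opener.open(f"http://{HOST}:{PORT}/", timeout=5).read().decode())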

@rick-github commented on GitHub (Feb 15, 2025):

The ollama server crashed when you did the `run` command. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may show why.
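
For anyone unsure where to look: on Windows the server log normally lives at %LOCALAPPDATA%\Ollama\server.log (per the troubleshooting doc linked above). A small sketch to print its tail, assuming that default location:

# Print the last lines of the Ollama server log on Windows.
# Assumes the default log location %LOCALAPPDATA%\Ollama\server.log.
import os
from pathlib import Path

log = Path(os.environ["LOCALAPPDATA"]) / "Ollama" / "server.log"
lines = log.read_text(encoding="utf-8", errors="replace").splitlines()
print("\n".join(lines[-50:]))  # the last lines usually contain the crash reason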


@trash-fish commented on GitHub (Feb 15, 2025):

> The ollama server crashed when you did the `run` command. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may show why.

Why do I have to use a proxy in the code to connect to the model? This will conflict with my actual proxy.
os.environ["http_proxy"] = "http://127.0.0.1:11434"
os.environ["https_proxy"] = "http://127.0.0.1:11434"


@rick-github commented on GitHub (Feb 15, 2025):

You don't have to use a proxy, and in fact you shouldn't for 127.0.0.1.
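
As an illustration (not part of the original comment), a stdlib-only sketch that talks to the local server directly while explicitly ignoring any http_proxy/https_proxy set in the environment; the model name llama3.2:1b is the one used later in this thread:

import json
import urllib.request

# An opener with an empty proxy map, so localhost traffic is never proxied.
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps({"model": "llama3.2:1b", "prompt": "hello", "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(opener.open(req, timeout=120).read())["response"])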


@trash-fish commented on GitHub (Feb 15, 2025):

> You don't have to use a proxy, and in fact you shouldn't for 127.0.0.1.

If I don't add the proxy, my error is as follows:

Traceback (most recent call last):
File "W:\代码\LLM\LangChainDemo01\demo03.py", line 54, in
print(chain.invoke({'content': '你好?'}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\runnables\base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\chat_models.py", line 284, in invoke
self.generate_prompt(
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\chat_models.py", line 860, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\chat_models.py", line 690, in generate
self._generate_with_cache(
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\chat_models.py", line 925, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_ollama\chat_models.py", line 701, in _generate
final_chunk = self._chat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_ollama\chat_models.py", line 602, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_ollama\chat_models.py", line 589, in _create_chat_stream
yield from self._client.chat(**chat_params)
File "C:\Users\20262\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\ollama_client.py", line 168, in inner
raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: (status code: 502)

my code:

# (imports added for completeness; inferred from the classes used below and the traceback above)
import os
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

# Set proxy (left commented out)
# os.environ["http_proxy"] = "http://127.0.0.1:11434"
# os.environ["https_proxy"] = "http://127.0.0.1:11434"



# Call the large language model
model = ChatOllama(
    model="llama3.2:1b",
    base_url="http://localhost:11434",
    temperature=0.8
)


# Define the prompt template
prompt = ChatPromptTemplate.from_messages(
    [('system', '你作为ai注释,为用户解决问题'), ('human', '{content}')]
)


# Define the parser
parser = StrOutputParser()



# Build the chain: template | model | message parser
chain = prompt | model | parser




print(chain.invoke({'content': '你好?'}))

@trash-fish commented on GitHub (Feb 15, 2025):

> You don't have to use a proxy, and in fact you shouldn't for 127.0.0.1.

![Image](https://github.com/user-attachments/assets/c7a5535f-5f00-4d2b-a6ee-fccfbf0b89ac)

I installed the model on my local machine using Ollama.


@rick-github commented on GitHub (Feb 15, 2025):

What does the following return:

curl localhost:11434

@trash-fish commented on GitHub (Feb 15, 2025):

> What does the following return:
>
> curl localhost:11434

![Image](https://github.com/user-attachments/assets/ed1a4f3a-4542-44ef-8aff-f960348b59bf)


@trash-fish commented on GitHub (Feb 15, 2025):

> What does the following return:
>
> curl localhost:11434

As long as I use it, there won't be any error reported.

# os.environ["http_proxy"] = "http://127.0.0.1:11434"
# os.environ["https_proxy"] = "http://127.0.0.1:11434"

@rick-github commented on GitHub (Feb 15, 2025):

Ollama is running

ollama is working. If your app doesn't, that's a problem with the app or your proxy configuration. Try setting

os.environ["no_proxy"] = "127.0.0.1,localhost"

@trash-fish commented on GitHub (Feb 15, 2025):

> Ollama is running
>
> ollama is working. If your app doesn't, that's a problem with the app or your proxy configuration. Try setting
>
> os.environ["no_proxy"] = "127.0.0.1,localhost"

If I use os.environ["no_proxy"] = "127.0.0.1,localhost", my proxy becomes invalid, causing me to be unable to access TavilySearchAPI. This might be due to my location in China, but theoretically, I shouldn't need to add os.environ["no_proxy"] = "127.0.0.1,localhost" to call the model.


@rick-github commented on GitHub (Feb 15, 2025):

Then you need to figure out why your proxy is not routing traffic destined for 127.0.0.1:11434 to 127.0.0.1:11434. ollama doesn't need a proxy; if your app does, that's not an ollama problem.


@trash-fish commented on GitHub (Feb 16, 2025):

> Then you need to figure out why your proxy is not routing traffic destined for 127.0.0.1:11434 to 127.0.0.1:11434. ollama doesn't need a proxy; if your app does, that's not an ollama problem.

os.environ["no_proxy"] = "127.0.0.1,localhost"
os.environ["http_proxy"] = "http://127.0.0.1:7890"
os.environ["https_proxy"] = "http://127.0.0.1:7890"

This solves it.
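
Putting the pieces together, a sketch of the working setup (assuming, as above, that the real proxy listens on 127.0.0.1:7890 and that the variables are set before the model client is created):

import os

# Keep the real proxy for external services (e.g. the Tavily search API)...
os.environ["http_proxy"] = "http://127.0.0.1:7890"
os.environ["https_proxy"] = "http://127.0.0.1:7890"
# ...but never route loopback traffic (Ollama on 11434) through it.
os.environ["no_proxy"] = "127.0.0.1,localhost"

from langchain_ollama import ChatOllama  # imported after the env vars are set

model = ChatOllama(model="llama3.2:1b", base_url="http://localhost:11434", temperature=0.8)
print(model.invoke("你好?").content)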


@trash-fish commented on GitHub (Feb 16, 2025):

> Then you need to figure out why your proxy is not routing traffic destined for 127.0.0.1:11434 to 127.0.0.1:11434. ollama doesn't need a proxy; if your app does, that's not an ollama problem.

![Image](https://github.com/user-attachments/assets/5e426219-8e1c-46f6-84e7-05baad731ca5)
It can also be done like this.


@Neabigmo commented on GitHub (Feb 16, 2025):

> What does the following return:
>
> curl localhost:11434

![Image](https://github.com/user-attachments/assets/fdca1d25-dd6b-40c1-8939-ce21dd9cca50)
What does this mean?


@Neabigmo commented on GitHub (Feb 16, 2025):

I'm really troubled. Does Ollama require users to configure a proxy? And how should I configure the local port? If I don't configure the proxy, why does the Ollama server reject my access?


@rick-github commented on GitHub (Feb 16, 2025):

> Does Ollama require users to configure a proxy?

No. If you do have a proxy, you need to configure it to allow clients to connect to the ollama port.


@trash-fish commented on GitHub (Feb 16, 2025):

> I'm really troubled. Does Ollama require users to configure a proxy? And how should I configure the local port? If I don't configure the proxy, why does the Ollama server reject my access?

Just configure https_proxy; don't configure http_proxy, otherwise it will cause errors.
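
If it is unclear which proxy settings a Python process actually picks up, a quick stdlib check (illustrative only):

import urllib.request

print(urllib.request.getproxies())               # proxy settings Python resolves (environment / system)
print(urllib.request.proxy_bypass("127.0.0.1"))  # truthy if this host should bypass the proxy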

Reference: github-starred/ollama#5937