OllamaLLM fails to connect to custom Ollama server after migration from langchain_community #1928

Closed
opened 2025-11-11 14:56:42 -06:00 by GiteaMirror · 0 comments

Originally created by @fmdelgado on GitHub (Aug 29, 2024).

Description

I'm experiencing connection issues when trying to use OllamaLLM from langchain_ollama to connect to a custom Ollama server. This setup previously worked correctly when using Ollama from langchain_community.llms.

Current Behavior

When attempting to use ollama_llm.invoke(), I receive a ConnectError: [Errno 61] Connection refused error.

Expected Behavior

The OllamaLLM class should successfully connect to the custom Ollama server and allow me to generate responses using the invoke() method, as it did when using langchain_community.llms.Ollama.

Steps to Reproduce

  1. Set up authentication and obtain a JWT token from the custom server.
  2. Initialize OllamaLLM with the appropriate parameters.
  3. Attempt to use ollama_llm.invoke() to generate a response.

Code Sample

from langchain_ollama import OllamaLLM
import requests
import json

protocol = "https"
hostname = "example.com"  # Anonymized hostname
host = f"{protocol}://{hostname}"
auth_url = f"{host}/api/v1/auths/signin"
api_url = f"{host}/ollama"
username = 'your_username'
password = 'your_password'
model_name = "mistral:v0.2"
account = {'email': username, 'password': password}

# Authenticate and get JWT token (fail fast on a bad login)
auth_response = requests.post(auth_url, json=account)
auth_response.raise_for_status()
jwt = json.loads(auth_response.text)["token"]

ollama_llm = OllamaLLM(
    base_url=api_url,
    model=model_name,
    temperature=0.0,
    headers={"Authorization": f"Bearer {jwt}"},
    verify=False  # Note: This is not recommended for production use
)

# This line raises the ConnectError
response = ollama_llm.invoke("What can we visit in Hamburg?")
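One way to confirm that the server really serves the Ollama API under the /ollama prefix is to hit the endpoint the old implementation used. langchain_community.llms.Ollama POSTed completions to base_url + "/api/generate", so a raw request to that path should return JSON rather than HTML. The helper name below is mine, not from either library:

```python
def generate_endpoint(base_url: str) -> str:
    """Endpoint that langchain_community.llms.Ollama POSTed completions to."""
    return f"{base_url.rstrip('/')}/api/generate"

# Uncomment to probe the live server (reuses api_url, model_name, jwt from above):
# r = requests.post(
#     generate_endpoint(api_url),
#     json={"model": model_name, "prompt": "ping", "stream": False},
#     headers={"Authorization": f"Bearer {jwt}"},
#     verify=False,
# )
# print(r.status_code, r.headers.get("content-type"))
```

If this raw request succeeds, the server and URL prefix are fine and the problem is in how the new client builds its requests.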

Additional Context

  • This code worked correctly when using langchain_community.llms.Ollama.
  • A GET request to the API URL returns a 200 status code but serves an HTML page instead of an API response, suggesting the endpoint might not be correct for the new implementation.
  • I've verified that the server is up and running, and the authentication process is working correctly.
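The HTML-instead-of-JSON symptom above can be checked programmatically. A minimal heuristic (my own helper, not part of either library): a real Ollama endpoint such as base_url + "/api/tags" answers with a JSON body, while a reverse proxy or web UI serving its index page answers with text/html:

```python
def looks_like_ollama_api(status_code: int, content_type: str, body: str) -> bool:
    """Heuristic: Ollama endpoints return JSON; an HTML page means the URL
    is being handled by a web UI or proxy, not the API itself."""
    return (
        status_code == 200
        and "text/html" not in content_type.lower()
        and body.lstrip().startswith(("{", "["))
    )

# An HTML landing page fails the check even with a 200 status:
# looks_like_ollama_api(200, "text/html; charset=utf-8", "<!DOCTYPE html>...")
```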

Environment

  • Python version: 3.11.9
  • LangChain version: 0.2.5
  • langchain-ollama version: 0.1.1

Possible Solution

It seems that the OllamaLLM class might be expecting a different API structure or endpoint compared to the previous implementation. Notably, ConnectError: [Errno 61] Connection refused means nothing was listening at the address the client actually contacted, which would be consistent with the client falling back to the default http://localhost:11434 instead of using the configured base_url. Were there changes in how OllamaLLM constructs API requests compared to the previous Ollama class?
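For what it's worth, newer langchain-ollama releases expose a client_kwargs parameter that is forwarded to the underlying ollama.Client (httpx-based), which is where per-request headers and TLS settings would belong. Whether the 0.1.1 release used here already supports this is an assumption to verify against the installed version; the sketch below only shows the shape:

```python
# Assumption: `client_kwargs` is forwarded to ollama.Client / httpx in
# newer langchain-ollama releases; verify against the installed version.
jwt = "example-token"  # the token obtained from the auth step above

client_kwargs = {
    "headers": {"Authorization": f"Bearer {jwt}"},
    "verify": False,  # still not recommended for production
}

# from langchain_ollama import OllamaLLM
# llm = OllamaLLM(
#     base_url=api_url,
#     model="mistral:v0.2",
#     client_kwargs=client_kwargs,
# )
```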

Question

Is there any additional configuration or setup required when migrating from langchain_community.llms.Ollama to langchain_ollama.OllamaLLM for custom Ollama servers?

Reference: github-starred/open-webui#1928