[GH-ISSUE #2783] Connection Error with OllamaFunctions in Langchain #1679

Closed
opened 2026-04-12 11:39:08 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @quartermaine on GitHub (Feb 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2783

Description

I am attempting to replicate the Langchain tutorial (https://python.langchain.com/docs/integrations/chat/ollama_functions) in order to use OllamaFunctions for web extraction, as also demonstrated at https://python.langchain.com/docs/use_cases/web_scraping#scraping-with-extraction, in a Google Colab environment.

Code

[1] %%capture
     !pip install langchain_experimental


[2] from langchain_experimental.llms.ollama_functions import OllamaFunctions

     llm = OllamaFunctions(model="llama2:13b",
                           base_url="http://localhost:11434",
                           temperature=0)

[3] %%capture
     !pip install -q langchain-openai langchain playwright beautifulsoup4
     !playwright install


[4] import nest_asyncio
     nest_asyncio.apply()


[5] from langchain.chains import create_extraction_chain

     schema = {
         "properties": {
             "news_article_title": {"type": "string"},
             "news_article_summary": {"type": "string"},
         },
         "required": ["news_article_title", "news_article_summary"],
     }

     def extract(content: str, schema: dict):
         return create_extraction_chain(schema=schema, llm=llm, verbose=True).invoke(content)


[6] import pprint
     from langchain.text_splitter import RecursiveCharacterTextSplitter
     from langchain_community.document_loaders import AsyncChromiumLoader
     from langchain_community.document_transformers import BeautifulSoupTransformer

     def scrape_with_playwright(urls, schema):
         loader = AsyncChromiumLoader(urls)
         docs = loader.load()
         bs_transformer = BeautifulSoupTransformer()
         docs_transformed = bs_transformer.transform_documents(
             docs, tags_to_extract=["span"]
         )
         print("Extracting content with LLM")
         # Grab the first 1000 tokens of the site
         splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
             chunk_size=1000,
             chunk_overlap=0,
             separators=["\n"]
         )
         splits = splitter.split_documents(docs_transformed)
         print("Number of splits:", len(splits))  # Debugging statement
         if splits:  # Check if splits list is not empty
             # Process the first split
             extracted_content = extract(schema=schema, content=splits[0].page_content)  # Line where error occurs
             pprint.pprint(extracted_content)
             return extracted_content
         else:
             print("No splits found")  # Debugging statement
             return None

[7] urls = ["https://www.nytimes.com/"]
     extracted_content = scrape_with_playwright(urls, schema=schema)

Error

But I am getting the following error:

ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/chat/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7b19911300>: Failed to establish a new connection: [Errno 111] Connection refused'))
GiteaMirror added the question, needs more info labels 2026-04-12 11:39:08 -05:00
Author
Owner

@ecsricktorzynski commented on GitHub (Mar 9, 2024):

I am finding the same thing - I can use Ollama fine with Streamlit, but when I try to access Ollama through Langchain, I get this same exact error message.

<!-- gh-comment-id:1986766240 -->
Author
Owner

@jjmlovesgit commented on GitHub (Mar 22, 2024):

Friend -- a suggestion to try given limited view of the issue -- I have seen this when I did my Langchain -- make sure you start Ollama with "Ollama Serve" and you see it listening on the port...

C:\projects\DID\DID_LC_Ollama>ollama serve
time=2024-03-21T22:04:06.277-04:00 level=INFO source=images.go:806 msg="total blobs: 39"
time=2024-03-21T22:04:06.278-04:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-21T22:04:06.280-04:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
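If the server really is up, a quick stdlib-only probe from the notebook can confirm it is reachable before involving LangChain at all. This is a minimal sketch assuming the default port; it uses Ollama's lightweight `/api/tags` endpoint (which just lists installed models) as a liveness check:

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434", timeout=2):
    """Return True if an Ollama server answers on base_url."""
    try:
        # /api/tags only lists installed models, so it is a cheap liveness check.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # [Errno 111] Connection refused lands here when nothing is listening.
        return False

print(ollama_reachable())
```

If this prints `False`, the `ConnectionError` above is coming from the environment (no server on that address), not from LangChain itself.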

<!-- gh-comment-id:2014910889 -->
Author
Owner

@rdndsouza5 commented on GitHub (Apr 21, 2024):

Change the base URL from localhost to what's shown in ollama serve, i.e. llm = Ollama(model="llama2", base_url="http://127.0.0.1:11434")
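Building on that suggestion, a small helper (hypothetical, stdlib-only) can probe both spellings of the loopback base URL and report which one actually answers, since they can differ when "localhost" resolves to the IPv6 `::1` while the server binds only one address family:

```python
import urllib.request
import urllib.error

# The two common spellings of the local base URL.
CANDIDATES = ["http://127.0.0.1:11434", "http://localhost:11434"]

def first_reachable(urls, timeout=2):
    """Return the first base URL whose /api/tags endpoint answers, else None."""
    for url in urls:
        try:
            with urllib.request.urlopen(f"{url}/api/tags", timeout=timeout):
                return url
        except (urllib.error.URLError, OSError):
            continue
    return None

print(first_reachable(CANDIDATES))
```

Whichever URL this returns is the one to pass as `base_url` to the LangChain wrapper.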

<!-- gh-comment-id:2068179307 -->
Author
Owner

@dhiltgen commented on GitHub (Nov 6, 2024):

@quartermaine did you get it figured out?

<!-- gh-comment-id:2460531507 -->
Author
Owner

@kikiyu commented on GitHub (Dec 12, 2024):

In my case,

model = OllamaLLM(model="llama3.1", base_url="http://localhost:11434")

solves the problem.

This is because Ollama is listening on IPv6 addresses, while 127.0.0.1 is an IPv4 address. I used netstat to check the address Ollama is listening on.

$ sudo netstat -tuln | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN 
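The IPv4/IPv6 mismatch described above can be checked from Python itself: `socket.getaddrinfo` shows which address families a hostname resolves to, which is why "localhost" and "127.0.0.1" can behave differently against a server bound to only one of them. A small sketch:

```python
import socket

def resolved_families(host, port=11434):
    """Return the set of address families that `host` resolves to."""
    return {info[0] for info in socket.getaddrinfo(host, port)}

# "127.0.0.1" is always plain IPv4; "localhost" may additionally (or only)
# resolve to the IPv6 loopback ::1, depending on the OS hosts configuration.
print(resolved_families("127.0.0.1"))
print(resolved_families("localhost"))
```

If "localhost" resolves to both families while the server listens on only one, connection behavior depends on which address the client library tries first.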
<!-- gh-comment-id:2537945641 -->
Author
Owner

@pdevine commented on GitHub (Jan 12, 2025):

I'm going to go ahead and close the issue.

<!-- gh-comment-id:2585499961 -->
Reference: github-starred/ollama#1679