[GH-ISSUE #2513] ECONNREFUSED error #1468

Closed
opened 2026-04-12 11:22:23 -05:00 by GiteaMirror · 17 comments

Originally created by @jakobhoeg on GitHub (Feb 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2513

I keep getting an ECONNREFUSED error when trying to use Ollama for my NextJS frontend in production:

```
⨯ TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11730:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async globalThis.fetch (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:36091)
    at async s (/var/task/.next/server/app/api/model/route.js:1:491)
    at async /var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:42484
    at async eI.execute (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:32486)
    at async eI.handle (/var/task/node_modules/next/dist/compiled/next-server/app-route.runtime.prod.js:6:43737)
    at async Y (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:16:24556)
    at async Q.responseCache.get.routeKind (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:17:1025)
    at async r3.renderToResponseWithComponentsImpl (/var/task/node_modules/next/dist/compiled/next-server/server.runtime.prod.js:17:507) {
  cause: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:128:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
```
GiteaMirror added the bug, js, needs more info labels 2026-04-12 11:22:24 -05:00

@mxyng commented on GitHub (Feb 15, 2024):

ECONNREFUSED indicates Ollama server isn't running. Can you check it is running and accessible on localhost:11434?
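
(A quick way to verify this from Node itself, as a minimal sketch: it assumes Node 18+, where `fetch` is built in, and relies on the fact that a running Ollama server answers a GET on its root with the text "Ollama is running".)

```ts
// Minimal reachability check (assumes Node 18+ with built-in fetch,
// run as an ES module so top-level await works).
const res = await fetch("http://localhost:11434/");
console.log(res.status, await res.text()); // expect: 200 "Ollama is running"
```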

@jakobhoeg commented on GitHub (Feb 15, 2024):

> ECONNREFUSED indicates Ollama server isn't running. Can you check it is running and accessible on localhost:11434?

It is running and accessible.

@pdevine commented on GitHub (Mar 12, 2024):

@jakobhoeg Did you manage to get this working? @mxyng's point is correct; maybe you were running it on a different port or a different host?

@jakobhoeg commented on GitHub (Mar 17, 2024):

> @jakobhoeg Did you manage to get this working? @mxyng's point is correct; maybe you were running it on a different port or a different host?

Hey.
No, it's still not working. It's hosted on Vercel and I have set OLLAMA_ORIGINS.
On the dev server it works fine, just not when it's hosted.

@mxyng commented on GitHub (Mar 17, 2024):

Can you clarify where everything is deployed? You mentioned something is deployed on Vercel, but the wording is vague; I assume it's the NextJS app you're calling Ollama from. If this is the case, 127.0.0.1 is probably not the right OLLAMA_HOST, since that would be the Vercel deployment.
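
(One way to make the target explicit is to read the Ollama address from the deployment environment instead of hard-coding it. A sketch, where `OLLAMA_BASE_URL` is a hypothetical variable name, not something Ollama or Vercel defines; `/api/tags` is Ollama's model-listing endpoint, so a failed GET there points at a connectivity problem rather than a bad request.)

```ts
// Hypothetical OLLAMA_BASE_URL env var: set it per environment so the
// deployed route targets a reachable Ollama host, not the Vercel box itself.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434";

const res = await fetch(`${baseUrl}/api/tags`);
console.log(res.ok ? "Ollama reachable" : `Ollama unreachable: ${res.status}`);
```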

@jakobhoeg commented on GitHub (Mar 17, 2024):

> Can you clarify where everything is deployed? You mentioned something is deployed on Vercel, but the wording is vague; I assume it's the NextJS app you're calling Ollama from. If this is the case, 127.0.0.1 is probably not the right OLLAMA_HOST, since that would be the Vercel deployment.

Yeah, my apologies.
I have my NextJS frontend deployed on Vercel, and I am trying to allow users to chat with their own Ollama server running on their machine.
I am able to handle this client-side, but not in an API route; that is where I get the error from the original post. Now that I think of it, this probably isn't an Ollama issue, but rather an issue with the langchain package or with Vercel/NextJS.

For reference, this is my API route (`api/chattest`):

```ts
import { StreamingTextResponse, Message } from "ai";
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { BytesOutputParser } from "@langchain/core/output_parsers";

export async function POST(req: Request) {
  const { messages, selectedModel } = await req.json();

  const model = new ChatOllama({
    baseUrl: "http://localhost:11434",
    model: selectedModel,
  });

  const parser = new BytesOutputParser();

  const stream = await model
    .pipe(parser)
    .stream(
      (messages as Message[]).map((m) =>
        m.role == "user"
          ? new HumanMessage(m.content)
          : new AIMessage(m.content)
      )
    );

  console.log(stream);

  return new StreamingTextResponse(stream);
}
```

Whenever I call that API endpoint, it also doesn't appear in the Ollama logs with any error message.

If I however try to do this client-side, it works. Like this (just a quick example):

```tsx
useEffect(() => {
  const fetchChat = async () => {
    const res = await fetch("http://localhost:11434/api/chat", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gemma:2b",
        messages: [
          {
            role: "user",
            content: "why is the sky blue?",
          },
        ],
      }),
    });
    const data = await res.json();
    console.log("Data from fetchChat:", data);
  };
  fetchChat();
}, []);
```

@BruceMacD commented on GitHub (Mar 18, 2024):

@jakobhoeg looks like this could be an issue with host resolution when using the langchain library rather than ollama, could you try using `127.0.0.1` rather than `localhost`?

```javascript
const model = new ChatOllama({
  baseUrl: "http://127.0.0.1:11434",
  model: selectedModel,
});
```

@jakobhoeg commented on GitHub (Mar 18, 2024):

> @jakobhoeg looks like this could be an issue with host resolution when using the langchain library rather than ollama, could you try using `127.0.0.1` rather than `localhost`?
>
> ```js
> const model = new ChatOllama({
>   baseUrl: "http://127.0.0.1:11434",
>   model: selectedModel,
> });
> ```

Have tried that. The issue persists.

@jakobhoeg commented on GitHub (Mar 18, 2024):

I also tried adding this to the API route:

`export const runtime = 'edge';`

And now I get a different error in Vercel:

```
Error: Ollama call failed with status code 403: Direct IP access is not allowed in Vercel's Edge environment (hostname: 127.0.0.1)
    at (node_modules/@langchain/community/dist/utils/ollama.js:26:20)
    at (node_modules/@langchain/community/dist/utils/ollama.js:56:4)
    at (node_modules/@langchain/community/dist/chat_models/ollama.js:375:29)
    at (node_modules/@langchain/community/dist/chat_models/ollama.js:483:25)
    at (node_modules/@langchain/core/dist/language_models/chat_models.js:347:21)
    at (node_modules/@langchain/core/dist/language_models/chat_models.js:110:24)
    at (node_modules/@langchain/core/dist/language_models/chat_models.js:294:23) {
  response: Response {  }
}
```

This is basically because of how `@langchain/community/dist/utils/ollama.js` rewrites `localhost` to `127.0.0.1`:

```js
import { IterableReadableStream } from "@langchain/core/utils/stream";
async function* createOllamaStream(url, params, options) {
    let formattedUrl = url;
    if (formattedUrl.startsWith("http://localhost:")) {
        // Node 18 has issues with resolving "localhost"
        // See https://github.com/node-fetch/node-fetch/issues/1624
        formattedUrl = formattedUrl.replace("http://localhost:", "http://127.0.0.1:");
    }
    const response = await fetch(formattedUrl, {
        method: "POST",
        body: JSON.stringify(params),
        headers: {
            "Content-Type": "application/json",
        },
        signal: options.signal,
    });
    if (!response.ok) {
        let error;
        const responseText = await response.text();
        try {
            const json = JSON.parse(responseText);
            error = new Error(`Ollama call failed with status code ${response.status}: ${json.error}`);
            // eslint-disable-next-line @typescript-eslint/no-explicit-any
        }
        catch (e) {
            error = new Error(`Ollama call failed with status code ${response.status}: ${responseText}`);
        }
        // eslint-disable-next-line @typescript-eslint/no-explicit-any
        error.response = response;
        throw error;
    }
    if (!response.body) {
        throw new Error("Could not begin Ollama stream. Please check the given URL and try again.");
    }
    const stream = IterableReadableStream.fromReadableStream(response.body);
    const decoder = new TextDecoder();
    let extra = "";
    for await (const chunk of stream) {
        const decoded = extra + decoder.decode(chunk);
        const lines = decoded.split("\n");
        extra = lines.pop() || "";
        for (const line of lines) {
            try {
                yield JSON.parse(line);
            }
            catch (e) {
                console.warn(`Received a non-JSON parseable chunk: ${line}`);
            }
        }
    }
}
export async function* createOllamaGenerateStream(baseUrl, params, options) {
    yield* createOllamaStream(`${baseUrl}/api/generate`, params, options);
}
export async function* createOllamaChatStream(baseUrl, params, options) {
    yield* createOllamaStream(`${baseUrl}/api/chat`, params, options);
}
```

But I don't know whether it would work if it didn't rewrite it?

@BruceMacD commented on GitHub (Mar 18, 2024):

@jakobhoeg could this code be executing on the server-side rather than on the client-side? The scenario you're describing relies on the code being executed in the client-side part of your Vercel app.
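
(A quick way to confirm which side the code actually executes on, as a hypothetical debug line rather than anything from the thread: in Next.js, `window` only exists in the browser, so API routes and server components take the first branch.)

```ts
// Logs "server" from an API route or server component, "browser" from client code.
console.log(typeof window === "undefined" ? "running on the server" : "running in the browser");
```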

@jakobhoeg commented on GitHub (Mar 18, 2024):

@BruceMacD I am using Vercel's `useChat()` from their `@ai/react` package on the client side to call the API route, if that helps?

@jakobhoeg commented on GitHub (Mar 18, 2024):

I also tried using the Ollama OpenAI completions instead and followed the guide found [here](https://ollama.com/blog/openai-compatibility) (the Vercel AI SDK part). So my API route (`/api/chat/route.ts`) looks like this:

```ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama',
});

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages, selectedModel } = await req.json();

  try {
    const response = await openai.chat.completions.create({
      model: 'gemma:2b',
      stream: true,
      messages,
    });

    // Convert the response into a friendly text-stream
    const stream = OpenAIStream(response);

    // Respond with the stream
    return new StreamingTextResponse(stream);
  } catch (error) {
    console.error('Error:', error);
  }
}
```

When doing so, I get these 2 errors in my Vercel logs:

```
Error: 403 error code: 1003
    at (node_modules/openai/error.mjs:46:19)
    at (node_modules/openai/core.mjs:256:24)
    at (node_modules/openai/core.mjs:299:29)
    at (src/app/api/chat/route.ts:17:21)
    at (node_modules/next/dist/esm/server/future/route-modules/app-route/module.js:189:36)
    at (node_modules/next/dist/esm/server/future/route-modules/app-route/module.js:128:25)
    at (node_modules/next/dist/esm/server/future/route-modules/app-route/module.js:251:29)
    at (node_modules/next/dist/esm/server/web/edge-route-module-wrapper.js:81:20)
    at (node_modules/next/dist/esm/server/web/adapter.js:157:15) {
  status: 403,
  headers: {
  cache-control: 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0',
  cf-connection-close: 'close',
  connection: 'keep-alive',
  content-length: '16',
  content-type: 'text/plain; charset=UTF-8',
  date: 'Mon, 18 Mar 2024 15:00:45 GMT',
  expires: 'Thu, 01 Jan 1970 00:00:01 GMT',
  referrer-policy: 'same-origin',
  x-frame-options: 'SAMEORIGIN'
},
  error: undefined,
  code: undefined,
  param: undefined,
  type: undefined
}
```

AND this:

`[POST] /api/chat reason=EDGE_FUNCTION_INVOCATION_FAILED, status=500, user_error=true` - this one actually shows as a 405 error on Vercel, despite saying `status=500`.

@rajasegar commented on GitHub (Sep 28, 2024):

This one worked for me:

```js
import { Ollama } from '@langchain/ollama';

const llm = new Ollama({
  model: 'tinyllama',
  temperature: 0,
  maxRetries: 2,
  baseUrl: "http://127.0.0.1:11434",
});

const inputText = "why is the sky blue?";
const completion = await llm.invoke(inputText);
console.log(completion);
```

I am using a Raspberry Pi 5 with Node.js 18.19.0 and Ollama 0.3.10. This is my package.json:

```json
{
  "name": "langchain-tutorial",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "type": "module",
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@langchain/core": "^0.3.3",
    "@langchain/ollama": "^0.1.0"
  }
}
```

@titiyoyo commented on GitHub (Oct 27, 2024):

@rajasegar worked for me too, thanks for posting this!

@rosariogueli commented on GitHub (Oct 30, 2024):

This worked for me, using the OpenAI Node.js SDK. Hope that helps!

```js
import OpenAIApi from 'openai';

const openai = new OpenAIApi({
  apiKey: 'nope',
  baseURL: 'http://127.0.0.1:11434/v1',
});

try {
  const response = await openai.chat.completions.create({
    model: 'llama3',
    messages: [
      { role: 'user', content: 'Hello, how are you?' }
    ],
  });
  console.log(response?.choices?.[0]?.message?.content)
} catch (err) {
  console.log(err.message);
}
```

@pdevine commented on GitHub (Dec 19, 2024):

This issue has gone pretty stale. I'm going to go ahead and close it, but I'm curious if there was any resolution.

@dungdo123 commented on GitHub (Feb 26, 2025):

I run n8n and Ollama on Windows; setting the base URL to http://172.21.0.1:11434/ fixed the ECONNREFUSED problem.
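
(A 172.21.x.x gateway is typical of a Docker bridge network, which fits the n8n setup: from inside a container, `localhost` is the container itself, so the host's Ollama has to be reached via the bridge gateway IP as above or, on Docker Desktop, via `host.docker.internal`. A sketch under that assumption, with `OLLAMA_BASE_URL` again a hypothetical variable name:)

```ts
// Hypothetical: inside a container, "localhost" never reaches the host.
// Use host.docker.internal (Docker Desktop) or the bridge gateway IP instead.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://host.docker.internal:11434";
const res = await fetch(`${baseUrl}/api/tags`);
console.log(res.ok ? "host Ollama reachable" : `unreachable: ${res.status}`);
```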

Reference: github-starred/ollama#1468