[GH-ISSUE #838] how to use ollama with open-interpreter? #402

Closed
opened 2026-04-12 10:02:57 -05:00 by GiteaMirror · 17 comments

Originally created by @wuyongyi on GitHub (Oct 18, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/838

I noticed that open-interpreter uses litellm to communicate with LLMs. While litellm can use ollama as a backend to respond to prompts, I have been unable to find a way to use ollama within open-interpreter. Does anyone have any experience or knowledge regarding this?
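
For reference, the direct route that later comments converge on looks roughly like this in LiteLLM itself — a minimal sketch, assuming a local Ollama server on its default port 11434 with codellama already pulled:

```python
# Hedged sketch: call a local Ollama model directly through LiteLLM.
# Assumes `ollama serve` is running and `ollama pull codellama` was done.
import litellm

response = litellm.completion(
    model="ollama/codellama",           # the "ollama/" prefix selects the provider
    messages=[{"role": "user", "content": "Write hello world in Python."}],
    api_base="http://localhost:11434",  # optional; this is LiteLLM's default for Ollama
)
print(response.choices[0].message.content)
```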


@wuyongyi commented on GitHub (Oct 18, 2023):

BTW: I used
litellm --model ollama/codellama
to set up an OpenAI-API-compatible server for local LLMs at http://0.0.0.0:8000, following https://docs.litellm.ai/docs/providers/ollama.
The test code for litellm works fine, but I get this error in open-interpreter:
"""

model, custom_llm_provider = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ProgramData\anaconda3\Lib\site-packages\litellm\utils.py", line 1483, in get_llm_provider
raise e
File "D:\ProgramData\anaconda3\Lib\site-packages\litellm\utils.py", line 1480, in get_llm_provider
raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/{model}',..) Learn more: https://docs.litellm.ai/docs/providers")

...

raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/gpt-4',..) Learn more: https://docs.litellm.ai/docs/providers
"""


@wuyongyi commented on GitHub (Oct 18, 2023):

Update: with
interpreter --model openai/codellama --api_base http://127.0.0.1:8000/
the connection is now fine.
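
The Python-API equivalent of that working invocation is roughly the following — a sketch using the 0.1.x-era attribute names (later releases moved these under interpreter.llm):

```python
# Hedged sketch: same setup as the CLI call above, via the Python API.
import interpreter

interpreter.model = "openai/codellama"          # "openai/" routes to the OpenAI-compatible proxy
interpreter.api_base = "http://127.0.0.1:8000"  # the LiteLLM proxy from the previous comment
interpreter.chat("Print hello world in Python.")
```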


@ghost commented on GitHub (Oct 20, 2023):

hey @wuyongyi, open-interpreter uses litellm, so you could just pass the model as ollama/codellama in the interpreter call


@Axenide commented on GitHub (Nov 17, 2023):

> hey @wuyongyi, open-interpreter uses litellm, so you could just pass the model as ollama/codellama in the interpreter call

How did you do it? I tried with interpreter --model ollama/codellama --api_base http://127.0.0.1:11434/ but I get this error:

▌ Model set to ollama/codellama

Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.

> Hello there!

We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter --context_window {token limit} or interpreter.context_window = {token limit}.
Also, please set max_tokens: interpreter --max_tokens {max tokens per response} or interpreter.max_tokens = {max tokens per response}

Traceback (most recent call last):
  File "/home/adriano/.local/pipx/venvs/open-interpreter/lib/python3.11/site-packages/litellm/llms/ollama.py", line 133, in get_ollama_response_stream
    j = json.loads(chunk)
        ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)

@tokutake commented on GitHub (Nov 22, 2023):

@Axenide
I encountered the same error you did.

[GIN] 2023/11/22 - 21:03:44 | 404 |       2.256µs |       127.0.0.1 | POST     "//api/generate"

There was an extra slash in the request URL.

I removed the trailing slash from the api_base argument URL and the error went away.

> interpreter --model ollama/llama2 --api_base http://localhost:11434

> hi

We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter --context_window {token limit} or interpreter.context_window = {token limit}.
Also, please set max_tokens: interpreter --max_tokens {max tokens per response} or interpreter.max_tokens = {max tokens per response}

b'{"model":"llama2","created_at":"2023-11-22T12:11:27.893947Z","response":"Hello","done":false}'

@Axenide commented on GitHub (Nov 22, 2023):

> @Axenide
> I encountered the same error you did.
>
> [GIN] 2023/11/22 - 21:03:44 | 404 | 2.256µs | 127.0.0.1 | POST "//api/generate"
>
> There was an extra slash in the request URL.
>
> I removed the trailing slash from the api_base argument URL and the error went away.
>
> > interpreter --model ollama/llama2 --api_base http://localhost:11434
>
> > hi
>
> We were unable to determine the context window of this model. Defaulting to 3000.
> If your model can handle more, run interpreter --context_window {token limit} or interpreter.context_window = {token limit}.
> Also, please set max_tokens: interpreter --max_tokens {max tokens per response} or interpreter.max_tokens = {max tokens per response}
>
> b'{"model":"llama2","created_at":"2023-11-22T12:11:27.893947Z","response":"Hello","done":false}'

Actually, the solution was to specify the version of the model. :)

ollama/codellama:latest
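
To see exactly which name:tag combinations are installed (and therefore valid after the ollama/ prefix), Ollama's REST API can be queried directly — a small sketch using only the standard library:

```python
# Hedged sketch: list locally installed Ollama models and their tags.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as r:
    models = json.load(r)["models"]

for m in models:
    # e.g. "codellama:latest" -> pass --model ollama/codellama:latest
    print(m["name"])
```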


@phrane commented on GitHub (Dec 31, 2023):

@Axenide
I'm still getting the same errors described above despite specifying the model version (as shown by the ollama list command). I'm on an Intel Mac, running Mistral, and here is what I used:
interpreter --model ollama/mistral:latest --api_base http://127.0.0.1:11434/
but I then get the following from open-interpreter:

▌ Model set to openai/ollama/mistral:latest

Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.

...i.e. openai/ is prefixed onto the model selection. Any ideas on how to correct this?


@Axenide commented on GitHub (Dec 31, 2023):

@phrane
Try it without ollama/; it used to work when specifying the provider as openai, so I guess it should work if you just use mistral:latest.


@ghost commented on GitHub (Jan 1, 2024):

@phrane that looks wrong. I think open-interpreter is automatically assuming this is an OpenAI-compatible endpoint (see the prepended openai/).

LiteLLM (which open-interpreter uses to make API calls) supports ollama/mistral, and that should work fine: https://docs.litellm.ai/docs/providers/ollama.

But it seems like open-interpreter hasn't added support yet - https://docs.openinterpreter.com/language-model-setup/hosted-models/openai

I'd recommend filing an issue on their GitHub: https://github.com/KillianLucas/open-interpreter


@hidek84 commented on GitHub (Jan 1, 2024):

You no longer need the --api-base argument when connecting to a local Ollama model endpoint because LiteLLM will automatically handle it.

So, I think you can try the following command (open-interpreter==0.1.18 and litellm==1.16.7):

interpreter --model ollama/mistral:latest

For more details, please refer to my comment in the open-interpreter GitHub issue below:
https://github.com/KillianLucas/open-interpreter/issues/856#issuecomment-1872673035
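
In LiteLLM terms, the same default applies when calling completion directly — a sketch, assuming the versions cited above:

```python
# Hedged sketch: no api_base needed; LiteLLM falls back to Ollama's
# default endpoint (http://localhost:11434) for the "ollama" provider.
import litellm

response = litellm.completion(
    model="ollama/mistral:latest",
    messages=[{"role": "user", "content": "hi"}],
)
print(response.choices[0].message.content)
```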


@phrane commented on GitHub (Jan 1, 2024):

@hidek84 this works a treat, thanks!!
I still get a message:

We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run `interpreter --context_window {token limit}` or `interpreter.llm.context_window = {token limit}`.
Also, please set max_tokens: `interpreter --max_tokens {max tokens per response}` or `interpreter.llm.max_tokens = {max tokens per response}`

but I'm guessing this is due to a config I need to update somewhere.
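
Setting the two limits the warning mentions silences it — a sketch using the attribute paths printed in the warning itself; the numeric values here are illustrative, not authoritative for Mistral:

```python
# Hedged sketch: set the context window and response cap explicitly.
from interpreter import interpreter  # import style varies across versions

interpreter.llm.model = "ollama/mistral:latest"
interpreter.llm.context_window = 4096  # total tokens the model can attend to (example value)
interpreter.llm.max_tokens = 1024      # maximum tokens per response (example value)
interpreter.chat()
```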


@phrane commented on GitHub (Jan 1, 2024):

> @phrane that looks wrong. I think open-interpreter is automatically assuming this is an OpenAI-compatible endpoint (see the prepended openai/).
>
> LiteLLM (which open-interpreter uses to make API calls) supports ollama/mistral, and that should work fine: https://docs.litellm.ai/docs/providers/ollama.
>
> But it seems like open-interpreter hasn't added support yet - https://docs.openinterpreter.com/language-model-setup/hosted-models/openai
>
> I'd recommend filing an issue on their GitHub: https://github.com/KillianLucas/open-interpreter

@krrishdholakia thanks for your input bro, my thinking was in line with yours regarding open-interpreter's default assumption. I was about to log an issue, but @hidek84's answer made that unnecessary. Thanks all round!!


@obadiahpewee commented on GitHub (Mar 14, 2025):

I'm still getting issues with this, trying to connect via ollama on Debian/Linux:
interpreter --model ollama/qwen2.5:7b-instruct-q8_0

▌ Model set to ollama/qwen2.5:7b-instruct-q8_0

Loading qwen2.5:7b-instruct-q8_0...
Model loaded.

Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.

> hello

{"name": "execute", "arguments": {"language": "python", "code": "print('Hello, obione!')"}}

> how are you

{"name": "execute", "arguments": {"language": "python", "code": "print(f'Hello, {obione}! How can I assist you today?')"}}

> what os is this on

{"name": "execute", "arguments": {"language": "shell", "code": "uname"}}

> what is the time in LA,USA right now

{"name": "execute", "arguments": {"language": "python", "code": "import platform\nprint(platform.system())"}}

> Exiting...


@obadiahpewee commented on GitHub (Mar 14, 2025):

Same result with the --local flag:
interpreter --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:
> Ollama
  Llamafile
  LM Studio
  Jan

[?] Select a model:
  llama3.2
> qwen2.5:7b-instruct-q8_0
  qwen2.5:32b
  qwen2.5-coder:32b
  huihui_ai/qwen2.5-1m-abliterated:14b-instruct-q8_0
  deepseek-r1:32b
  ↓ Download llama3.1
  ↓ Download phi3
  ↓ Download mistral-nemo
  ↓ Download gemma2
  ↓ Download codestral
  Browse Models ↗

Loading qwen2.5:7b-instruct-q8_0...
Model loaded.

▌ Model set to qwen2.5:7b-instruct-q8_0

Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.

> hello

{"name": "execute", "arguments": {"language": "python", "code": "print('hello')"}}

> how are you

{"name": "execute", "arguments": {"language": "python", "code": "print('How are you?')"}}

> time in UK

{"name": "execute", "arguments": {"language": "python", "code": "from datetime import datetime; print(datetime.now().strftime('%Y-%m-%d %H:%M:%S %Z'))"}}

> answer?

{"name": "execute", "arguments": {"language": "python", "code": "print('The current time in UK is', datetime.now(pytz.timezone('Europe/London')).strftime('%Y-%m-%d %H:%M:%S'))"}}


@trevorstr commented on GitHub (Mar 15, 2025):

I'm trying to run the development branch with this:

interpreter --api-base http://myserver.local:11434 --model llama3.2-vision:11b --provider ollama

And then when I enter a prompt, I get:

Open Interpreter 1.0.0
Copyright (C) 2024 Open Interpreter Team
Licensed under GNU AGPL v3.0

A modern command-line assistant.

Usage: i [prompt]
   or: interpreter [options]

Documentation: docs.openinterpreter.com
Run 'interpreter --help' for all options

> asdf

   Traceback (most recent call last):
  File "/Users/trevor.sullivan/.local/bin/interpreter", line 8, in <module>
    sys.exit(main())
             ~~~~^^
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/interpreter/cli.py", line 306, in main
    asyncio.run(async_main(args))
    ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ~~~~~~~~~~^^^^^^
  File "/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 725, in run_until_complete
    return future.result()
           ~~~~~~~~~~~~~^^
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/interpreter/cli.py", line 230, in async_main
    async for _ in global_interpreter.async_respond():
        pass
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/interpreter/interpreter.py", line 740, in async_respond
    raw_response = litellm.completion(**params)
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/utils.py", line 1235, in wrapper
    raise e
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/utils.py", line 1113, in wrapper
    result = original_function(*args, **kwargs)
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/main.py", line 3101, in completion
    raise exception_type(
    ...<5 lines>...
    )
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/main.py", line 984, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ~~~~~~~~~~~~~~~~^
        model=model,
        ^^^^^^^^^^^^
    ...<2 lines>...
        api_key=api_key,
        ^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 356, in get_llm_provider
    raise e
  File "/Users/trevor.sullivan/.local/pipx/venvs/open-interpreter/lib/python3.13/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 333, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
    ...<8 lines>...
    )
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.2-vision:11b
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

@ghost commented on GitHub (Mar 15, 2025):

hey @trevorstr, it looks like the provider isn't being passed in.

Try --model ollama/llama3.2-vision:11b
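
The resolution step that fails in the traceback above can be reproduced in isolation — a sketch calling the same helper the traceback names, with the return shape taken from the call site shown there:

```python
# Hedged sketch: how LiteLLM maps a model string to a provider.
from litellm import get_llm_provider

model, provider, api_key, api_base = get_llm_provider(model="ollama/llama3.2-vision:11b")
print(model, provider)  # -> "llama3.2-vision:11b", "ollama"

# A bare model name carries no provider prefix, so resolution raises
# the BadRequestError seen above:
get_llm_provider(model="llama3.2-vision:11b")  # raises litellm.BadRequestError
```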


@trevorstr commented on GitHub (Mar 15, 2025):

@krrishdholakia I tried that first, and it still did not work.

I was using the development branch. Are you using that branch as well?
