[GH-ISSUE #9961] How can i send api-key via the OpenAI sdk #53034

Closed
opened 2026-04-29 01:44:48 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @abhiram1809 on GitHub (Mar 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9961

What is the issue?

Hello, I need a little help with setting the headers in the OpenAI SDK for the Ollama host. For reference, the code below works smoothly:

```python
from ollama import Client

ollama_client = Client(host=OLLAMA_URL,
                       headers={"Authorization": f"Bearer {api_key}"})
response = ollama_client.generate(model="llama3.2:3b", prompt="Tell me a fun fact")

print(response.response)
```

but I am failing to set up the same in the openai-sdk, please help.
I have tried:

```python
from openai import OpenAI

client = OpenAI(api_key=api_key, base_url=OLLAMA_URL)  # Same key from environment as before
# Gives Permission Error
response = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[
        {"role": "user", "content": "Why is the sky blue?"}
    ]
)

print(response.choices[0].message.content)
```

This way does not work either:

```python
client = OpenAI(api_key="ollama", base_url=OLLAMA_URL, default_headers={"Authorization": f"Bearer {api_key}"})

response = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[
        {"role": "user", "content": "Why is the sky blue?"}
    ]
)

print(response.choices[0].message.content)
```

This also does not work:

```python
client = OpenAI(api_key="ollama", base_url=OLLAMA_URL)

response = client.chat.completions.create(
    model="llama3.2:3b",
    messages=[
        {"role": "user", "content": "Why is the sky blue?"}
    ],
    custom_headers={"Authorization": f"Bearer {api_key}"}  # create() has no custom_headers parameter
)

print(response.choices[0].message.content)
```

Relevant log output


OS

Windows

GPU

Other

CPU

AMD

Ollama version

0.4.7

GiteaMirror added the bug label 2026-04-29 01:44:48 -05:00

@rick-github commented on GitHub (Mar 24, 2025):

ollama doesn't have support for API keys, so presumably you are running a proxy in front of ollama for key authentication. What do the logs in the proxy show?


@abhiram1809 commented on GitHub (Mar 24, 2025):

@rick-github, I sadly do not have access to the logs. I could build with the ollama sdk; the only reason I am asking is that I prefer the openai-sdk because all my tools (non-opensource) are built on top of it. It would be a lot of rewriting.


@rick-github commented on GitHub (Mar 24, 2025):

I ran your two samples with strace to see what was on the wire:

```python
#!/usr/bin/env python3

from openai import OpenAI
from ollama import Client

api_key = "api_key"

def openai():
  OLLAMA_URL='http://localhost:11434/v1/'
  client = OpenAI(api_key=api_key, base_url=OLLAMA_URL) # Same key from environment as before
  # Gives Permission Error
  response = client.chat.completions.create(
      model="llama3.2:3b",
      messages=[
          {"role": "user", "content": "Why is the sky blue?"}
      ]
  )

def ollama():
  OLLAMA_URL='http://localhost:11434/'
  ollama_client = Client(host=OLLAMA_URL,
               headers={"Authorization": f"Bearer {api_key}"})
  response = ollama_client.generate(model="llama3.2:3b", prompt="Why is the sky blue?")

ollama()
openai()
```

```console
$ strace -e sendto -s 1024 ./9961.py 2>&1 | sed -ne 's/.*\("POST[^"]*"\).*/\1/p' | xargs -i@ printf @'\n'
POST /api/generate HTTP/1.1
Host: localhost:11434
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
authorization: Bearer api_key
content-type: application/json
accept: application/json
user-agent: ollama-python/0.4.7 (x86_64 linux) Python/3.12.3
Content-Length: 75


POST /v1/chat/completions HTTP/1.1
Host: localhost:11434
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Accept: application/json
Content-Type: application/json
User-Agent: OpenAI/Python 1.54.4
X-Stainless-Lang: python
X-Stainless-Package-Version: 1.54.4
X-Stainless-OS: Linux
X-Stainless-Arch: x64
X-Stainless-Runtime: CPython
X-Stainless-Runtime-Version: 3.12.3
Authorization: Bearer api_key
X-Stainless-Async: false
x-stainless-retry-count: 0
Content-Length: 91
```

The `Authorization` in the openai call looks fine, perhaps the issue is something else?


@abhiram1809 commented on GitHub (Mar 24, 2025):

I think OpenAI has built-in validation checks that prevent it from sending the request, because this is the error I faced:

```
---------------------------------------------------------------------------
PermissionDeniedError                     Traceback (most recent call last)
Cell In[5], line 3
      1 client = OpenAI(api_key=api_key, base_url=OLLAMA_URL) # Same key from environment as before
      2 # Gives Permission Error
----> 3 response = client.chat.completions.create(
      4     model="llama3.2:3b",
      5     messages=[
      6         {"role": "user", "content": "Why is the sky blue?"}
      7     ]
      8 )
     10 print(response.choices[0].message.content)

File c:\Users\sharmaa8\Opensource-Guardrails\guardrails_os\Lib\site-packages\openai\_utils\_utils.py:279, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    277             msg = f"Missing required argument: {quote(missing[0])}"
    278     raise TypeError(msg)
--> 279 return func(*args, **kwargs)

File c:\Users\sharmaa8s\Opensource-Guardrails\guardrails_os\Lib\site-packages\openai\resources\chat\completions\completions.py:914, in Completions.create(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, reasoning_effort, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, web_search_options, extra_headers, extra_query, extra_body, timeout)
    871 @required_args(["messages", "model"], ["messages", "model", "stream"])
    872 def create(
    873     self,
   (...)    911     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    912 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    913     validate_response_format(response_format)
--> 914     return self._post(
    915         "/chat/completions",
    916         body=maybe_transform(
    917             {
    918                 "messages": messages,
    919                 "model": model,
    920                 "audio": audio,
    921                 "frequency_penalty": frequency_penalty,
    922                 "function_call": function_call,
    923                 "functions": functions,
    924                 "logit_bias": logit_bias,
    925                 "logprobs": logprobs,
    926                 "max_completion_tokens": max_completion_tokens,
    927                 "max_tokens": max_tokens,
    928                 "metadata": metadata,
    929                 "modalities": modalities,
    930                 "n": n,
    931                 "parallel_tool_calls": parallel_tool_calls,
    932                 "prediction": prediction,
    933                 "presence_penalty": presence_penalty,
    934                 "reasoning_effort": reasoning_effort,
    935                 "response_format": response_format,
    936                 "seed": seed,
    937                 "service_tier": service_tier,
    938                 "stop": stop,
    939                 "store": store,
    940                 "stream": stream,
    941                 "stream_options": stream_options,
    942                 "temperature": temperature,
    943                 "tool_choice": tool_choice,
    944                 "tools": tools,
    945                 "top_logprobs": top_logprobs,
    946                 "top_p": top_p,
    947                 "user": user,
    948                 "web_search_options": web_search_options,
    949             },
    950             completion_create_params.CompletionCreateParams,
    951         ),
    952         options=make_request_options(
    953             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    954         ),
    955         cast_to=ChatCompletion,
    956         stream=stream or False,
    957         stream_cls=Stream[ChatCompletionChunk],
    958     )

File c:\Users\sharmaa8\Opensource-Guardrails\guardrails_os\Lib\site-packages\openai\_base_client.py:1242, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1228 def post(
   1229     self,
   1230     path: str,
   (...)   1237     stream_cls: type[_StreamT] | None = None,
   1238 ) -> ResponseT | _StreamT:
   1239     opts = FinalRequestOptions.construct(
   1240         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1241     )
-> 1242     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File c:\Users\sharmaa8\Opensource-Guardrails\guardrails_os\Lib\site-packages\openai\_base_client.py:919, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    916 else:
    917     retries_taken = 0
--> 919 return self._request(
    920     cast_to=cast_to,
    921     options=options,
    922     stream=stream,
    923     stream_cls=stream_cls,
    924     retries_taken=retries_taken,
    925 )

File c:\Users\sharmaa8\Opensource-Guardrails\guardrails_os\Lib\site-packages\openai\_base_client.py:1023, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1020         err.response.read()
   1022     log.debug("Re-raising status error")
-> 1023     raise self._make_status_error_from_response(err.response) from None
   1025 return self._process_response(
   1026     cast_to=cast_to,
   1027     options=options,
   (...)   1031     retries_taken=retries_taken,
   1032 )

PermissionDeniedError: Error code: 403 - {'message': "Invalid key=value pair (missing equal-sign) in Authorization header (hashed with SHA-256 and encoded with Base64): '<i-hid-it-don't-know-if-it-is-safe-to-share>'."}
```

@abhiram1809 commented on GitHub (Mar 24, 2025):

@rick-github ^^


@abhiram1809 commented on GitHub (Mar 24, 2025):

If there were a way to ignore these checks, I think it would work fine.


@rick-github commented on GitHub (Mar 24, 2025):

I think you need to take it up with the maintainer of the proxy/auth gateway. [RFC6750](https://datatracker.ietf.org/doc/html/rfc6750#section-2.1) makes no mention of key=value pairs in the header for Bearer tokens, so either your library is not sending a Bearer token Authorization header, or the proxy is expecting something else in the header. Perhaps they have defined a different auth method for `/v1`.

Have you tried probing the proxy manually to see what happens?

```sh
curl http://proxy:port/v1/chat/completions -H "Authorization: Bearer api_key" -d "{\"model\":\"llama3.2:3b\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}]}"
```

@abhiram1809 commented on GitHub (Mar 24, 2025):

let me try


@pdevine commented on GitHub (Mar 24, 2025):

I'm going to go ahead and close the issue, but feel free to keep commenting.

Reference: github-starred/ollama#53034