[GH-ISSUE #2844] OpenAI package compatibility #1730

Closed
opened 2026-04-12 11:42:39 -05:00 by GiteaMirror · 6 comments

Originally created by @eliranwong on GitHub (Feb 29, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2844

I read the example in https://ollama.com/blog/openai-compatibility

The example works, but it fails when I add `response_format={ "type": "json_object" }`:

https://platform.openai.com/docs/guides/text-generation/json-mode


@eliranwong commented on GitHub (Feb 29, 2024):

OK, I just found a workaround: add the following prefix to the user message content:

Use template '{"answer": ""}' to answer:


@eliranwong commented on GitHub (Feb 29, 2024):

I am happy enough with the workaround. Many thanks.


@eliranwong commented on GitHub (Feb 29, 2024):

By the way, the full workaround example that works is:

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

response = client.chat.completions.create(
    model="llama2",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": """Use template '{"answer": ""}' to answer: Where was it played?"""},
    ],
)
print(response.choices[0].message.content)
```
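As a follow-on to the workaround, a minimal sketch of consuming the reply (assuming the model actually followed the `'{"answer": ""}'` template; the `reply` string below is an illustrative stand-in, not real model output):

```python
import json


def extract_answer(content: str) -> str:
    """Parse the model's JSON reply and pull out the 'answer' field.

    Assumes the model followed the '{"answer": ""}' template from the
    workaround above; raises if the reply is not valid JSON or lacks
    the key.
    """
    data = json.loads(content)
    return data["answer"]


# Illustrative reply shaped like the template:
reply = '{"answer": "Globe Life Field in Arlington, Texas."}'
print(extract_answer(reply))
```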

@jmorganca commented on GitHub (Mar 1, 2024):

Hi there, thanks for opening an issue. If using `json_object`, it's very important to tell the model to answer in JSON, or it may get stuck generating a lot of whitespace (we're solving this separately in #2605). The OpenAI API behaves similarly, although it will error if "answer in JSON" or similar isn't in the prompt. Let me know if you still see more issues!
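The advice above can be automated client-side. This is a hypothetical helper (`ensure_json_instruction` is not part of any library) that prepends a JSON instruction to the message list before the request is sent:

```python
def ensure_json_instruction(messages: list[dict]) -> list[dict]:
    """Ensure the prompt explicitly asks for JSON output.

    Hypothetical helper illustrating the advice above: when using
    response_format={"type": "json_object"}, the prompt itself should
    tell the model to reply in JSON, or generation may stall emitting
    whitespace. If no message mentions JSON, prepend a system message.
    """
    if any("json" in m["content"].lower() for m in messages):
        return messages
    return [{"role": "system", "content": "Answer in JSON."}] + messages


msgs = [{"role": "user", "content": "Where was it played?"}]
msgs = ensure_json_instruction(msgs)
# msgs now starts with a system message instructing JSON output
```

Calling the helper again on the result is a no-op, since the prepended system message already mentions JSON.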


@halcwb commented on GitHub (Mar 13, 2024):

@jmorganca I tried the official way, passing a `response_format` object with `type=json_object` and the `schema` field filled in with a schema object, but the schema is ignored. When I put the schema in the prompt instead, I hit the whitespace problem.

Using the exact same code with, for example, the OpenAI API from Fireworks, it works fine.
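Since the `schema` field is ignored here, one stopgap is to validate the model's output yourself and retry on mismatch. A minimal sketch using only the standard library (the `matches_schema` helper and its key-set check are illustrative, not a real schema validator):

```python
import json


def matches_schema(content: str, required_keys: set[str]) -> bool:
    """Client-side check that a JSON reply has the expected top-level keys.

    A workaround sketch: since the server-side schema is ignored,
    validate the model's output yourself and retry the request when
    this returns False.
    """
    try:
        data = json.loads(content)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys <= data.keys()


print(matches_schema('{"answer": "Texas"}', {"answer"}))  # True
print(matches_schema('not json at all', {"answer"}))      # False
```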


@PhilipAmadasun commented on GitHub (May 2, 2024):

@jmorganca I'll create a separate issue for this, but at this time, is it possible for developers to create API keys for access security when using the OpenAI API (with Ollama for inference)?
