[GH-ISSUE #13111] OpenAI API compatibility layer seems not to support n (i.e. multi-choice output) #55193

Open
opened 2026-04-29 08:29:10 -05:00 by GiteaMirror · 11 comments

Originally created by @starpit on GitHub (Nov 16, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13111

As far as I can tell, when passing `n>1` to the OpenAI-compatible endpoint, I do not get more than one choice back in `choices`.

https://platform.openai.com/docs/api-reference/chat/create#chat_create-n
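
A minimal way to reproduce this, assuming a local Ollama at the default port and a small pulled model (the model name here is just an example):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="qwen2.5:0.5b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    n=3,  # per the OpenAI spec, this should return three choices
)

print(len(response.choices))  # observed: 1, not 3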

GiteaMirror added the feature request label 2026-04-29 08:29:10 -05:00

@rick-github commented on GitHub (Nov 17, 2025):

Correct, ollama [does not](https://github.com/ollama/ollama/blob/main/docs/api/openai-compatibility.mdx#supported-request-fields:~:text=%5B%20%5D%20user-,%5B%20%5D%20n,-/v1/completions) support `n`.


@changminbark commented on GitHub (Nov 17, 2025):

Hi, can I tackle this issue?


@starpit commented on GitHub (Nov 17, 2025):

> Hi, can I tackle this issue?

@changminbark if you think the Ollama maintainers would be accepting of this, go for it!


@changminbark commented on GitHub (Nov 17, 2025):

@jmorganca @mchiang0610 @pdevine Would this new feature request be rejected? I read the CONTRIBUTING.md file and saw that changes to the OpenAI API may not be accepted.


@changminbark commented on GitHub (Nov 19, 2025):

After a quick preliminary analysis, it seems we would have to add an `n` field to the `ChatCompletionRequest` struct in https://github.com/ollama/ollama/blob/53985b3c4d94f22517e4090696a5b8ecd06caedb/openai/openai.go#L98. We would also need to add a corresponding field to the generic `ChatRequest` struct, which is also used by non-OpenAI API models, in https://github.com/ollama/ollama/blob/53985b3c4d94f22517e4090696a5b8ecd06caedb/api/types.go#L131. So this change might not be accepted, as it requires changing mission-critical code, unless the maintainers deem it safe/necessary.


@pdevine commented on GitHub (Dec 11, 2025):

@changminbark sorry about the slow response. Is this something you commonly use with OpenAI? I'm not opposed to adding it, but I want to make sure we're not adding unnecessary code to the API interface.


@starpit commented on GitHub (Dec 11, 2025):

Hi there! The OpenAI API supports two bulk APIs (that I know of). One, via the chat completions API, allows you to request `n` variants of a single prompt (this is the API in question). For completeness, it seems worth mentioning the second, which, via the completions API, allows you to request one completion per prompt in an array of prompts (sketched below). Both are useful ways to lower the overhead on the API server; for one thing, they facilitate batching (I realize that continuous batching is intended to allow batching in the absence of bulk APIs, but bulk APIs will always allow for higher throughput).

  • A use case for the former: a judge/generator (generate `n` candidate emails, then judge them).
  • A use case for the latter: any program that has a structured fork/join.

My understanding is that utilization/throughput is not a primary goal of Ollama; even so, it is one for other inference servers. Having some uniformity across the APIs is the request here.
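
For concreteness, a minimal sketch of that second bulk form, assuming a backend that implements prompt arrays on the legacy completions endpoint (the model name and prompts here are just placeholders):

from openai import OpenAI

client = OpenAI()

# The legacy /v1/completions endpoint accepts a list of prompts and
# returns one choice per prompt (indexed in order).
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=["Summarize document A.", "Summarize document B."],
    max_tokens=64,
)

for choice in response.choices:
    print(choice.index, choice.text)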

Thanks for the consideration!


@changminbark commented on GitHub (Jan 8, 2026):

@pdevine I do not commonly use it with OpenAI, but @starpit has a decent point about uniformity in the APIs. The only thing is that it may not be a trivial change.


@rick-github commented on GitHub (Jan 8, 2026):

Why not just wrap the client library and have it generate `n` completions?


@changminbark commented on GitHub (Jan 8, 2026):

@rick-github what do you mean by wrapping the client library?


@rick-github commented on GitHub (Jan 8, 2026):

If I understand correctly (I don't use this feature myself), `n` processing is just multiple independent generations, so a client can simply call ollama multiple times. Exactly how the client would wrap the library depends on the language and the library, but here's an example of `n` processing for the Python `openai` library:

#!/usr/bin/env python3

import argparse

from openai import OpenAI

class OpenAIN:
    """Drop-in wrapper around OpenAI() that emulates the n parameter."""
    def __init__(self, **kwargs):
        self._client = OpenAI(**kwargs)
        self.chat = ChatNamespace(self._client)

class ChatNamespace:
    def __init__(self, client):
        self._client = client
        self.completions = CompletionsNamespace(client)

class CompletionsNamespace:
    def __init__(self, client):
        self._client = client

    def create(self, **kwargs):
        # Run the first completion, then n-1 more, folding each extra
        # choice (and its token usage) into the first response.
        response = self._client.chat.completions.create(**kwargs)
        for n in range(1, kwargs.get('n', 1)):
            r = self._client.chat.completions.create(**kwargs)
            r.choices[0].index = n
            response.choices.append(r.choices[0])
            response.usage.completion_tokens += r.usage.completion_tokens
            response.usage.prompt_tokens += r.usage.prompt_tokens
            response.usage.total_tokens += r.usage.total_tokens
        return response

parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model", default="qwen2.5:0.5b")
parser.add_argument("-p", "--prompt", default="Why is the sky blue?")
parser.add_argument("-n", "--count", type=int, default=1)
args = parser.parse_args()

client = OpenAIN(
    base_url="http://localhost:11434/v1",
    api_key='ollama'
)

response = client.chat.completions.create(
    model=args.model,
    messages=[
        { "role": "user", "content": args.prompt, }
    ],
    n=args.count,
    stream=False,
)

print(response.model_dump_json())

In this example, clients use `OpenAIN()` instead of `OpenAI()`, and the `create` function handles the `n` parameter if supplied. There are other ways to do this, but the principle is the same: trap the call to `create()` and run `n` completions. Optimisations are possible, e.g. `asyncio` can be used to run the completions in parallel to reduce run time (a sketch appears at the end of this comment).

$ ./13111.py --count=2  --prompt "what is 2+2?" | jq
{
  "id": "chatcmpl-766",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "2 + 2 equals 4. It's a simple arithmetic operation where we add two numbers together to get the sum. In programming terms, you can think of it as adding:\n\n```python\nresult = 2 + 2\n```\n\nto an integer like 4, it would look like this in Python after being assigned the value 4:\n```vbnet\nresult = 4\n```",
        "refusal": null,
        "role": "assistant",
        "annotations": null,
        "audio": null,
        "function_call": null,
        "tool_calls": null
      }
    },
    {
      "finish_reason": "stop",
      "index": 1,
      "logprobs": null,
      "message": {
        "content": "The mathematical operation \"2 + 2\" represents the sum of two numbers or variables. In this case, it represents either 4 or 2 + 2 = 4.\n\nIf you're referring to a language-specific concept, we'll need more context to provide an accurate response. However, based on typical programming languages and mathematics education, \"2 + 2\" would most commonly be represented as \"the sum of two numbers\" in certain languages. But for general comparison, if the operation were `a + b = c`, then 4, and if that's how many are being added together.\n\nIn Python (or any similar language), you might write it as `2 + 2` or even equivalently to a sum like `x = 4`.",
        "refusal": null,
        "role": "assistant",
        "annotations": null,
        "audio": null,
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1767872013,
  "model": "qwen2.5:0.5b",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": "fp_ollama",
  "usage": {
    "completion_tokens": 240,
    "prompt_tokens": 72,
    "total_tokens": 312,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}

This example does require modifications to the client, but it's a quicker route to supporting `n` than ollama surgery.
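
As a possible optimisation of the above, here is a minimal async sketch, assuming the `openai` package's `AsyncOpenAI` client and an Ollama server configured to handle concurrent requests (e.g. `OLLAMA_NUM_PARALLEL` > 1, otherwise the requests just queue); `create_n` is a hypothetical helper, not part of any library:

#!/usr/bin/env python3

import asyncio

from openai import AsyncOpenAI

async def create_n(client, n=1, **kwargs):
    # Fire n independent requests concurrently, then fold the extra
    # choices and token usage into the first response, as above.
    tasks = [client.chat.completions.create(**kwargs) for _ in range(n)]
    results = await asyncio.gather(*tasks)
    response = results[0]
    for i, r in enumerate(results[1:], start=1):
        r.choices[0].index = i
        response.choices.append(r.choices[0])
        response.usage.completion_tokens += r.usage.completion_tokens
        response.usage.prompt_tokens += r.usage.prompt_tokens
        response.usage.total_tokens += r.usage.total_tokens
    return response

async def main():
    client = AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    response = await create_n(
        client,
        n=2,
        model="qwen2.5:0.5b",
        messages=[{"role": "user", "content": "what is 2+2?"}],
    )
    print(response.model_dump_json())

asyncio.run(main())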
