[GH-ISSUE #10646] Structured Outputs: Ensure value of predicted JSON attribute is in a given text #53513

Closed
opened 2026-04-29 03:28:51 -05:00 by GiteaMirror · 7 comments

Originally created by @NilsHellwig on GitHub (May 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10646

Hello everyone,

First of all, thank you very much for the fantastic library. I've been a regular user for months now.
Structured outputs are an amazing feature. However, I have a small issue with it.

Issue:

I would like to extract a substring from the text using the LLM (Python, ollama library, pydantic for schema). For this, I need to ensure that the predicted text actually appears in the original text:

```python
from typing import Literal

from pydantic import BaseModel

text = "The food was amazing but the service was awful!"

class TASDAspect(BaseModel):
    polarity: Literal["positive", "negative", "neutral"]
    aspect_term: str  # <- should appear in my text
    ...
```

Regex does not seem to be supported. Alternatively, I tried treating `aspect_term` as a Literal whose values are all possible substrings of the text. However, with large texts there can be thousands of valid `aspect_term` values (= substrings of the text), and a Literal cannot have an unlimited number of items; beyond a certain size the request errors out.
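As a stopgap that does not constrain decoding, the substring requirement can at least be verified after generation. A minimal sketch using a pydantic v2 `field_validator`; passing the source text through the validation `context` is an assumption of this sketch, not something from the thread:

```python
from typing import Literal

from pydantic import BaseModel, ValidationInfo, field_validator

class TASDAspect(BaseModel):
    polarity: Literal["positive", "negative", "neutral"]
    aspect_term: str

    @field_validator("aspect_term")
    @classmethod
    def must_appear_in_text(cls, value: str, info: ValidationInfo) -> str:
        # The caller supplies the source text via the validation context.
        source = (info.context or {}).get("text", "")
        if value not in source:
            raise ValueError(f"{value!r} does not appear in the source text")
        return value

text = "The food was amazing but the service was awful!"
aspect = TASDAspect.model_validate(
    {"polarity": "negative", "aspect_term": "awful"},
    context={"text": text},
)
```

This rejects hallucinated spans after the fact rather than preventing them at generation time, so a retry loop around the model call may still be needed.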

GiteaMirror added the feature request label 2026-04-29 03:28:51 -05:00

@rick-github commented on GitHub (May 10, 2025):

```python
#!/usr/bin/env python3

import argparse
from typing import Literal

import ollama
from pydantic import BaseModel, Field

parser = argparse.ArgumentParser()
parser.add_argument("--text", default="The food was amazing but the service was awful!")
parser.add_argument("--model", default="qwen3")
args = parser.parse_args()

class TASDAspect(BaseModel):
    polarity: Literal["positive", "negative", "neutral"]
    aspect_term: str = Field(pattern="^(awful|great)$")

def answer(prompt):
    response = ollama.chat(
        model=args.model,
        messages=[
            {"role": "user", "content": prompt}
        ],
        options={"temperature": 0, "seed": 0},
        format=TASDAspect.model_json_schema())
    return TASDAspect.model_validate_json(response.message.content)

print(answer(args.text))
```

```console
$ ./10646.py --text 'The food was amazing but the service was awful!'
polarity='negative' aspect_term='awful'
$ ./10646.py --text 'The food was amazing but the service was great!'
polarity='positive' aspect_term='great'
$ ./10646.py --text 'The food was awful but the service was amazing!'
polarity='positive' aspect_term='awful'
```
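The fixed `^(awful|great)$` pattern above can, in the same spirit, be generated from the input text itself. A hypothetical sketch; `substring_pattern`, the whitespace/word tokenization, and the three-word span cap are all assumptions of this sketch, not from the thread:

```python
import re

def substring_pattern(text: str, max_words: int = 3) -> str:
    """Build a regex matching any contiguous span of 1..max_words words of `text`."""
    words = re.findall(r"\w+", text)
    spans = set()
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            spans.add(" ".join(words[i:i + n]))
    # Longest alternatives first so the regex engine prefers full spans.
    alts = sorted(spans, key=len, reverse=True)
    return "^(" + "|".join(re.escape(s) for s in alts) + ")$"

pattern = substring_pattern("The food was amazing but the service was awful!")
```

The result can be dropped into `Field(pattern=...)` as above, but the number of alternatives grows roughly quadratically with text length, which feeds directly into the grammar size limit discussed next.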

@rick-github commented on GitHub (May 10, 2025):

Grammar is limited to 32K: https://github.com/ollama/ollama/blob/3fa78598a1e86faee9390ded4b43e78ca3bef816/llama/llama.go#L641
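Given that limit, a client can cheaply sanity-check a schema before sending it. A rough sketch; the 32 KiB constant mirrors the linked `maxLen`, and using the serialized schema length as a proxy for the compiled grammar size is an assumption (the actual grammar can be larger than the schema):

```python
import json

GRAMMAR_LIMIT = 32 * 1024  # compile-time maxLen in llama/llama.go, used as a rough proxy

def schema_fits(schema: dict, limit: int = GRAMMAR_LIMIT) -> bool:
    # The serialized JSON schema size is a lower bound on the grammar
    # compiled from it, so reject clearly oversized schemas early.
    return len(json.dumps(schema)) < limit

big_enum = {"type": "string", "enum": ["word%d" % i for i in range(5000)]}
print(schema_fits({"type": "object", "properties": {"aspect_term": big_enum}}))
```

A check like this fails fast client-side instead of surfacing an opaque request error from the server.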


@NilsHellwig commented on GitHub (May 10, 2025):

@rick-github Thank you, interesting detail. I guess changing this one is not recommended / easy to do.


@rick-github commented on GitHub (May 10, 2025):

I bumped it to 128K and handled a list of 5000 words. The problem is that it's a compile-time constant and your use case is a corner case. You can fork the project and maintain a version with a modified length, or submit a PR that either increases it or makes it run-time configurable.


@NilsHellwig commented on GitHub (May 10, 2025):

@rick-github Ok thank you :) might try this out. Don't actually think it's a corner case, I mean it's just pattern recognition / named entity recognition 😄


@rick-github commented on GitHub (May 10, 2025):

Well, corner case in that nobody's raised it before. In my case, NER is done with a dedicated program rather than a general-purpose LLM. It seems like this could be resolved by just setting `maxLen` to some multiple of the length of the supplied schema.


@NilsHellwig commented on GitHub (May 10, 2025):

@rick-github this is 🔥 thank you!


Reference: github-starred/ollama#53513