[GH-ISSUE #6237] Ollama Product Stance on Grammar Feature / Outstanding PRs #3901

Closed
opened 2026-04-12 14:45:08 -05:00 by GiteaMirror · 21 comments

Originally created by @Kinglord on GitHub (Aug 7, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6237

Hello,

This isn't a feature request, but it's the best category I could pick. It's really a question about PRs that expose an existing feature to Ollama users being ignored or declined without good context. I'm asking this to get more public visibility from the Ollama team on grammar features, specifically those already implemented in llama.cpp.

I understand Ollama provides JSON schema functionality as a way to direct and control model output; another popular approach is GBNF grammars, which llama.cpp currently supports. Several PRs have been submitted to expose this feature to Ollama users, and they have either sat idle or been closed. This point is going to keep surfacing and making noise (a large help thread started in the Discord today) until Ollama makes a clear, public statement on it. If Ollama as a product has decided not to give users this choice, and is saying that anyone who wants to use or test this feature must do so outside of Ollama, then you need to let us (the community) know. If there is some problem with the way the community is exposing the feature in these PRs, then again, just let us know so we can fix it. I understand that as a contributor it can be hard to see why a product does not want to give users more options, and I think Ollama needs to clearly state why this choice has been made.

This is not a post about whether GBNF or JSON schemas is the better approach. It is a post to make clear that there is community demand for this feature in Ollama, and that Ollama appears to be actively rejecting it based on what I have to assume are product decisions the community has no visibility into. I hope this post ends that lack of clarity for everyone involved, so we all know Ollama's stance and the community can stop re-raising it and submitting additional PRs. If anyone wants to start a more technical post with data on why one approach is better than another, I welcome you to do so and link it here.

My simple personal example is this: as a newer Ollama user, I would like to try both approaches and see which one works better for me and my product. Right now in Ollama I simply cannot, and from appearances (which can be deceiving), all that is stopping me is a small code change to expose the existing llama.cpp feature. _(**edit**: it was brought to my attention that Ollama actually uses GBNF internally to enforce JSON syntax, so the only thing really missing is exposing this feature to the end user to customize or supply a different grammar.)_
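
For concreteness, here is roughly what the two approaches look like on the wire. This is just a sketch, assuming a local llama.cpp server on its default port and a stock Ollama install: llama.cpp accepts a `grammar` field per request, while Ollama only exposes its generic JSON mode.

```bash
# llama.cpp server: a GBNF grammar can be passed with each request
curl http://localhost:8080/completion -d '{
  "prompt": "Is the sky blue? Answer: ",
  "grammar": "root ::= \"yes\" | \"no\""
}'

# Ollama: only the built-in JSON mode is available; no grammar parameter
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Is the sky blue? Answer as JSON.",
  "format": "json"
}'
```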

There might be more, but for reference, here are some links to other discussions of this topic, as well as a link to the Discord thread from earlier today. Thanks to the Ollama team for taking a look at this and helping align the community with their future response.

Discord:
https://discord.com/channels/1128867683291627614/1236730825928741034

GitHub PRs:
https://github.com/ollama/ollama/pull/565
https://github.com/ollama/ollama/pull/830
https://github.com/ollama/ollama/pull/1606
https://github.com/ollama/ollama/pull/2404
https://github.com/ollama/ollama/pull/2754
https://github.com/ollama/ollama/pull/3303
https://github.com/ollama/ollama/pull/3618
https://github.com/ollama/ollama/pull/4525
https://github.com/ollama/ollama/pull/5348

GitHub Issues:
https://github.com/ollama/ollama/issues/808
https://github.com/ollama/ollama/issues/1507
https://github.com/ollama/ollama/issues/3616
https://github.com/ollama/ollama/issues/4074
https://github.com/ollama/ollama/issues/4370
https://github.com/ollama/ollama/issues/6002


@NeuralNotwerk commented on GitHub (Aug 7, 2024):

I currently use llama.cpp for anything production that requires structured output. I'd love to see the feature in Ollama.


@coder543 commented on GitHub (Aug 7, 2024):

Even though Ollama’s core team has frustratingly not communicated it clearly anywhere that I’ve seen, my feeling is that they’ve been waiting on OpenAI to officially support this, in order to stay aligned with the OpenAI API specification as much as possible. Therefore, the single most relevant link to this conversation is probably this one: https://openai.com/index/introducing-structured-outputs-in-the-api/

Maybe we’ll finally get some movement on this.
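
For reference, the request shape that announcement describes looks roughly like this (a sketch based on the launch post; the model name is just an example):

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Jane is 30. Extract the person."}],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "person",
        "strict": true,
        "schema": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
          },
          "required": ["name", "age"],
          "additionalProperties": false
        }
      }
    }
  }'
```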


@MHugonKaliop commented on GitHub (Aug 8, 2024):

As one of the people who expressed interest in some of these PRs, I agree that it would be nice to have some feedback from the Ollama team on long-sitting PRs. Even a "don't have time for this" would be better than nothing at all. I can completely understand that it's almost impossible to react to all the activity of a successful project like Ollama, so maybe you could use milestones just to let other people know what your priorities are.

As @coder543 said, now that OpenAI supports this, maybe it will become a "must do" feature in order to keep up with OpenAI compatibility, but I think this discussion is interesting.

And thank you so much for your work on this project!


@Kinglord commented on GitHub (Aug 15, 2024):

Bumping this as it's been a week and we still have complete radio silence from the Ollama team about their stance on this issue and the state of the numerous PRs and issues still open around it. I'm really not one to annoy people, but I truly and deeply believe that the community deserves a 5-10 minute response from Ollama so we can all get on the same page here.


@PaulCapestany commented on GitHub (Aug 15, 2024):

Re: exposing llama.cpp's grammar feature, seems like @royjhan, @dhiltgen, and of course @jmorganca may be the most/recently involved in potentially-related features?

@Kinglord - appreciate you taking the time to write up your overview on this, as I'm also pretty interested in the topic! FWIW, it could very well be that the core ollama folks aren't "ignoring"/"declining" grammar support within ollama; perhaps they just haven't had the bandwidth and/or visibility into this issue yet (I mean, ollama has _literally_ thousands of issues, and I don't think I saw them directly respond to any of the more recent grammar feature issues/PRs).


@jmorganca commented on GitHub (Sep 4, 2024):

Hi all, first off, I'm very sorry for the radio silence on adding structured outputs and/or grammars to Ollama. Thank you everyone who wrote PRs, filed issues and shed light on why the feature is valuable. And thanks @Kinglord for bringing this all into one mega-issue here, it's really helped me catch up.

The short answer is yes, let's add structured outputs to Ollama. Specifically, starting with specifying a JSON schema in the API similar to OpenAI and the existing JSON mode. PRs are very welcome for this, especially if we can do it incrementally.

I'm currently hesitant to add Context-Free Grammar (CFG) support, only so we can focus on making JSON-schema based structured outputs really fast and reliable first. As you may have seen from experimenting with CFGs, they can be tricky to get right, and we've seen them cause models (especially smaller ones) to produce unnatural output (e.g. repeating whitespace indefinitely). I mostly just wouldn't want new users trying the API to hit a usability wall or performance issues if a JSON-schema based approach can work for them (especially since a ton of tooling is supporting this since August).

In terms of what took so long: we've been focusing on fleshing out API features (e.g. tool calling, suffix/fim, embeddings), catching up on OpenAI compatibility, and making the existing API surface area faster and more reliable (there's still lots to do here and some great work is happening – PRs to help with performance and reliability are always super welcome!). This isn't a great reason for the radio silence but I thought it would be helpful to share what the maintainers have been up to in the meantime!


@mitar commented on GitHub (Sep 4, 2024):

@jmorganca So I made a PR adding JSON Schema support here: https://github.com/ollama/ollama/pull/5348. In contrast with the other PRs, this one really works, because it also updates the C server part, which is necessary for this to work (it is based on the example server from llama.cpp).


@cesarandreslopez commented on GitHub (Sep 13, 2024):

I'm looking forward to seeing JSON Schema support merged!


@Kinglord commented on GitHub (Sep 16, 2024):

Big ❤️ @jmorganca - appreciate the reply and also super pumped to see this make its way into Ollama!


@rlouf commented on GitHub (Oct 9, 2024):

Outlines author here. I don't know if this can help, but we are about to release a Rust port of our structured generation algorithms, which are of course faster, but can also be compiled as a shared library and be called by C++ code.


@mitar commented on GitHub (Oct 9, 2024):

So llama.cpp already has a C implementation, which Ollama just has to call into.


@rlouf commented on GitHub (Oct 9, 2024):

Of course, only mentioning this because the approach is different and so is runtime latency.


@cpfiffer commented on GitHub (Oct 16, 2024):

Related: https://github.com/ollama/ollama/issues/6473


@tucnak commented on GitHub (Oct 22, 2024):

Guys, I'm sorry, but your efforts are completely misguided. What you should be doing instead is parsing the system prompt for `` ```gbnf `` code blocks. This approach would not impact the API surface, and it would also allow for dynamically generating the grammar on the fly from _any_ existing Ollama client.

````bash
# Multi-line GBNF grammar, embedded in the system prompt as a ```gbnf block
GRAMMAR='
```gbnf
root  ::= (expr "=" ws term "\n")+
expr  ::= term ([-+*/] term)*
term  ::= ident | num | "(" ws expr ")" ws
ident ::= [a-z] [a-z0-9_]* ws
num   ::= [0-9]+ ws
ws    ::= [ \t\n]*
```'

# jq --arg JSON-escapes the multi-line grammar so it can be spliced
# into the request payload (plain shell interpolation would produce
# invalid JSON and single quotes would not expand $GRAMMAR at all)
curl http://ollama.lan/chat -d "$(jq -n --arg g "$GRAMMAR" '{
  model: "llama3.2",
  messages: [
    {role: "system", content: ("You are a helpful assistant.\n" + $g)},
    {role: "user", content: "why is the sky blue?"}
  ]
}')"
````

I implemented this a few months back for our closed-circuit agent environment, and it works beautifully, in some positions, as a substitute for a wire controller like [AICI](https://github.com/microsoft/aici). It works really well as a workflow primitive (block) or as a tool in the agent environment. The screenshot below shows the _Grammatical_ tool we built in [Dify](https://dify.ai/); it accepts a prompt, and a schema in either JSON-schema, GBNF, or text-instruction format.

This is a really powerful primitive, and it reduces hallucinations considerably!

<img width="899" src="https://github.com/user-attachments/assets/993f8c4b-c27e-4166-81e0-ccc625e54e80">


@isaac-mcfadyen commented on GitHub (Nov 15, 2024):

> Guys, I'm sorry but your efforts are completely misguided

Currently, the underlying backend to Ollama (llama.cpp) accepts a grammar as an optional parameter in the actual request.

By parsing the input prompt for a code block like `gbnf`, you allow users to arbitrarily inject whatever grammar they like, which can be a big security issue (e.g. a user of a chatbot on example.com tells the model "generate me some output with grammar x" and crashes the backend when it doesn't find the generated fields it expects). If that's a non-issue in your case then great, but IMO Ollama should use the existing platforms' method instead of doing its own, non-standard thing that might easily turn into a security issue.


@tucnak commented on GitHub (Nov 18, 2024):

Hey, security is a fair point. I really dig security! Correct me if I'm wrong, but what you're saying is that picking up grammars from untrusted input is undesirable, right? I don't think anybody would argue with that... However, I also wonder what percentage of Ollama users actually expose their instances _directly_ to untrusted clients? (Not to mention that system-prompt grammars don't necessarily _have_ to be enabled by default.) Surely for any kind of meaningful application you would want to implement some RBAC, QoS, caching, what have you, given that Ollama can only serve one request at a time, and all. In our case, we have multiple ollama processes (one per model, basically) behind a gateway that does a bunch of things, including token accounting, but most importantly RBAC. I like to think we take security seriously, and I can't imagine Ollama in its current shape or form being self-sufficient to that end by any stretch of the imagination.

> Ollama should use the existing platforms' method instead of doing its own, non-standard thing that might easily turn into a security issue.

The main issue is that there isn't a "standard" way to do grammars that propagates throughout the stack, not really! There is the "grammar" parameter in the llama.cpp server API, sure, and Ollama supports it internally, of course. However, none of the actual clients support or expose it, and they are unlikely to do so, for different reasons. For example, we're using Dify, which doesn't allow customizing Ollama parameters per request; it's just one set of settings, and that's it. To bring in grammars, they would need to augment their whole UI, and that's obviously not unique to Ollama, so now you have divergent UIs per provider, which is hard to support, etc... I'm sure you know why that's problematic.

The reason I bothered with my patch in the first place is that it enabled us, at the time, to create ad-hoc grammars in the existing agent environment, and with multiple existing tools and clients, without ever modifying any of them or the internal APIs. The path of least resistance, if you will. But then again, what do I know! At any rate, I don't believe merges like these are actually _that_ important: my patch is as easy to rebase against upstream as any other change, and it will do. The same goes for the dozen prior implementations we have in this thread.

To be honest, if I were the OP, I would honestly close the issue at this point. 😃


@isaac-mcfadyen commented on GitHub (Nov 18, 2024):

> However, I also wonder what percentage of Ollama users actually expose their instances _directly_ to untrusted clients

Fair point! Just wanted to point that out since I wasn't aware of the specifics of your application.

> none of the actual clients support or expose it [...] To bring in grammars, they would need to augment their whole UI, and that's obviously not unique to Ollama, so now you have divergent UIs per provider, which is hard to support, etc

For sure, but then again, Ollama is mainly an API-based project, and UIs or other clients that build on top of Ollama are free to do as they wish. The idea is that Ollama should make it _available_ and then the client (UI or otherwise) can decide whether it wants to use it or not.

Also, clients probably don't support it because it's not available in the Ollama API yet 🙃

> To be honest, if I were the OP, I would honestly close the issue at this point.

I see from above that PR #5348 has been opened to add grammar support, so I'm not sure closing the issue would be productive until that PR is either merged or closed for some other reason.


@ParthSareen commented on GitHub (Dec 5, 2024):

Hey everyone!

With the merging of #7900, we're introducing structured outputs: the ability to go from a JSON schema to structured generation! Really appreciate all the feedback and contributions. Extremely thankful for all of you being so involved in this 🙏🏽

There are a few things we're still keeping in mind over the next few months. The first focus is going to be performance: speed and accuracy. A lot of research has been coming out around this; we're keeping a close eye on it and will see how we can integrate some of it into Ollama. We're also thinking about how to support structured generation in the long term so that it plays nicely with a lot of the work we're doing on our new engine.

Stoked for the coming few months; we hope to improve both performance and accuracy around sampling and constrained decoding.

Thank you again for your patience; we're super excited to get this out in an upcoming release! We'll spin out more issues around this as well, and are happy to keep you all posted!
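
For anyone catching up: the shape that shipped extends the existing `format` field so it can carry a full JSON schema instead of just `"json"`. A minimal sketch against a local instance (the model and schema here are just examples):

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Tell me about Canada."}],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "capital": {"type": "string"},
      "languages": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["name", "capital", "languages"]
  }
}'
```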


@0xdevalias commented on GitHub (Feb 18, 2025):

> There are a few things we're still keeping in mind over the next few months. The first focus is going to be performance: speed and accuracy. A lot of research has been coming out around this; we're keeping a close eye on it and will see how we can integrate some of it into Ollama. We're also thinking about how to support structured generation in the long term so that it plays nicely with a lot of the work we're doing on our new engine.
>
> Stoked for the coming few months; we hope to improve both performance and accuracy around sampling and constrained decoding.

@ParthSareen Curious, would that potentially include support for Coalescence or similar?

- https://blog.dottxt.co/coalescence.html
- https://github.com/ggml-org/llama.cpp/issues/5292
- https://github.com/ggml-org/llama.cpp/discussions/5455


@ParthSareen commented on GitHub (Feb 18, 2025):

@0xdevalias Currently working on a new constrained sampling engine for fast + accurate structured outputs. The thinking here was that any external library would need a good amount of integration to be useful: for sampling, that means exposing logits and the tokenizer, and integrating with the runner. It would also mean we can't iterate quickly if a new SOTA for structured outputs comes out.

So unlikely for now but always keeping an eye out :)


@bZichett commented on GitHub (Mar 19, 2026):

Any update @ParthSareen ?
