[GH-ISSUE #10222] Support Jinja chat templates #53219

Open
opened 2026-04-29 02:23:50 -05:00 by GiteaMirror · 17 comments
Owner

Originally created by @snuggles4553 on GitHub (Apr 10, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10222

Would be great if Ollama supported Jinja chat templates.

Perhaps gonja (a Jinja template engine implementation for Go) could be used for this.

Benefits:

  • less hassle converting chat templates (they're becoming bigger and bigger after all)
  • more reliable (predictable) LLM results, in the sense that the answers you get from a model in Ollama match those you get from other software such as llama.cpp.
GiteaMirror added the feature request label 2026-04-29 02:23:50 -05:00

@JasonHonKL commented on GitHub (Apr 11, 2025):

I think Ollama is a backend focused on letting users host their LLMs, not on how developers use them. So I don't think this feature should be added.


@snuggles4553 commented on GitHub (Apr 12, 2025):

But Ollama supports adding local models via ollama create NAME -f Modelfile as well, without pulling models from elsewhere. As long as that is supported, support for Jinja chat templates would be very helpful, as many LLMs come with a Jinja template.

It would be very useful to be able to use the original / official chat template that came with a model, regardless of what syntax that chat template happens to use.

And adding support for Jinja in Ollama is actually quite feasible, for example using the aforementioned gonja library.
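For reference, a minimal Modelfile of the kind described might look like the following; the GGUF path and the prompt tokens here are purely illustrative, not taken from any particular model:

```
FROM ./my-model.gguf
TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
```

Running ollama create my-model -f Modelfile then registers the model locally. The pain point in this thread is that TEMPLATE must be written in Go template syntax even when the model ships with a Jinja template.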


@a1ix2 commented on GitHub (Apr 19, 2025):

This has been very confusing to me since I started using ollama and I was unable to find any information or documentation about it anywhere. Namely:

  • If I don't include a template in the Modelfile while importing a GGUF, does it automatically use the one that's bundled in its metadata?
  • Isn't ollama using llama.cpp in the background, which I believe uses the template stored in the metadata of the GGUF by e.g. convert_hf_to_gguf.py? (is that even how it works in the first place?)
  • If I clone a huggingface repo in transformers format and use ollama create with a Modelfile without a template, or directly pull it from huggingface using ollama pull hf.co/..., does it use the template stored in tokenizer_config.json?
  • If it were the case but I also include a template in the Modelfile while importing, how would the template in a Modelfile interact with the template in the GGUF?
  • If this is not the case, is it possible to automatically convert those Jinja templates into Ollama Go templates using something like gonja, or does it have to be done completely manually?

@snuggles4553 commented on GitHub (Apr 19, 2025):

@a1ix2

* If I don't include a template in the Modelfile while importing a GGUF, does it automatically use the one that's bundled in its metadata?

Not sure, but I suspect not (correct me if I'm wrong). However, what I do know for certain is that the bundled chat template won't be used if the bundled one is a Jinja template, as Jinja chat templates are not supported as of this writing.

* If it were the case but I also include a template in the Modelfile while importing, how would the template in a Modelfile interact with the template in the GGUF?

No other chat template will be used if a chat template is specified in the Modelfile. But again, whether Ollama supports any kind of built-in chat templates is unclear to me at the moment; I suspect not (correct me if I'm wrong). So the results you get depend heavily on the model's default behavior.

Some models behave a bit closer to the way you would expect them to with a chat template (even though they didn't get one), while other models are more prone to acting weird (ignoring your questions, hallucinating, and so on).

* If this is not the case, is it possible to automatically convert those Jinja templates into Ollama Go templates using something like gonja, or does it have to be done completely manually?

Those are two separate things. If a library such as gonja were to be used, it would bring actual proper Jinja chat template support to Ollama. That is, native support, without conversions of any kind. On the other hand, while it's theoretically possible to convert templates from Jinja chat templates into Golang templates, it's probably much simpler to implement support for actual Jinja chat templates.

Hope this helps! 🙂

Disclaimer: Don't take my word for these statements as I'm not an Ollama developer.


@a1ix2 commented on GitHub (Apr 19, 2025):

Thanks for the explanation, this makes sense.

Minutes after I pressed "Comment" I realized the ollama/template directory is full of default templates for a bunch of models, but those are all much more basic than what a transformers model ships with. For example, the Go template for gemma3 provided in template/gemma3-instruct.gotmpl is:

{{- range $i, $_ := .Messages }}
    {{- $last := eq (len (slice $.Messages $i)) 1 }}
    {{- if eq .Role "user" }}<start_of_turn>user
        {{- if and (eq $i 1) $.System }}
            {{ $.System }}
        {{ end }}
        {{ .Content }}<end_of_turn>
    {{ else if eq .Role "assistant" }}<start_of_turn>model
        {{ .Content }}<end_of_turn>
    {{ end }}
    {{- if $last }}<start_of_turn>model
    {{ end }}
{{- end }}

while the chat template in tokenizer_config.json at https://huggingface.co/google/gemma-3-27b-it/blob/main/tokenizer_config.json is larger and seems to handle images, which the Go template doesn't:

{{ bos_token }}
{%- if messages[0]['role'] == 'system' -%}
    {%- if messages[0]['content'] is string -%}
        {%- set first_user_prefix = messages[0]['content'] + '

' -%}
    {%- else -%}
        {%- set first_user_prefix = messages[0]['content'][0]['text'] + '

' -%}
    {%- endif -%}
    {%- set loop_messages = messages[1:] -%}
{%- else -%}
    {%- set first_user_prefix = "" -%}
    {%- set loop_messages = messages -%}
{%- endif -%}
{%- for message in loop_messages -%}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
        {{ raise_exception("Conversation roles must alternate user/assistant/user/assistant/...") }}
    {%- endif -%}
    {%- if (message['role'] == 'assistant') -%}
        {%- set role = "model" -%}
    {%- else -%}
        {%- set role = message['role'] -%}
    {%- endif -%}
    {{ '<start_of_turn>' + role + '
' + (first_user_prefix if loop.first else "") }}
    {%- if message['content'] is string -%}
        {{ message['content'] | trim }}
    {%- elif message['content'] is iterable -%}
        {%- for item in message['content'] -%}
            {%- if item['type'] == 'image' -%}
                {{ '<start_of_image>' }}
            {%- elif item['type'] == 'text' -%}
                {{ item['text'] | trim }}
            {%- endif -%}
        {%- endfor -%}
    {%- else -%}
        {{ raise_exception("Invalid content type") }}
    {%- endif -%}
    {{ '<end_of_turn>
' }}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{'<start_of_turn>model
'}}
{%- endif -%}
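As a side note, the Go side of this comparison can be run directly with Go's standard library. The sketch below executes a simplified version of the gemma3 template above (System-prompt and image handling omitted); it is illustrative only, not Ollama's actual rendering code:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Message mirrors the fields Ollama-style templates reference.
type Message struct {
	Role    string
	Content string
}

// gemma is a simplified form of the gemma3 Go template quoted above
// (the System-prompt branch is omitted for brevity).
const gemma = `{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}<end_of_turn>
{{ end }}
{{- if $last }}<start_of_turn>model
{{ end }}
{{- end }}`

// renderGemma executes the template over a chat history, the way an
// inference server would before tokenizing the prompt.
func renderGemma(msgs []Message) (string, error) {
	tmpl, err := template.New("gemma").Parse(gemma)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	err = tmpl.Execute(&sb, struct{ Messages []Message }{msgs})
	return sb.String(), err
}

func main() {
	out, err := renderGemma([]Message{{Role: "user", Content: "Hello"}})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
	// <start_of_turn>user
	// Hello<end_of_turn>
	// <start_of_turn>model
}
```

The Jinja version above would need a separate engine (jinja2 in Python, or a Go port such as gonja), which is exactly the gap this issue asks to close.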

@snuggles4553 commented on GitHub (Apr 19, 2025):

@a1ix2: Yep, exactly! 🙂

And this is why support for Jinja chat templates would be so useful. It makes the models that use a built-in Jinja chat template respond exactly as they would in llama.cpp, which does support Jinja.


@WasamiKirua commented on GitHub (May 4, 2025):

In addition to this: on HF it is now possible to get the Jinja chat template easily by clicking the "Chat Template" button on the model page. If I don't have a precise way to convert it to an Ollama Go template, I have no way to import the model into Ollama. I am struggling with my fine-tuned model: it works fine when I run inference with llama.cpp, but for some reason it doesn't respect the system message when I import it into Ollama. I'm 99% sure I'm doing something wrong, but wouldn't this be easier if there were a way to use the Jinja2 template directly, or at least, as said, an easy and precise way to convert it?


@Fuzzillogic commented on GitHub (May 14, 2025):

The Qwen3 Jinja2 template (https://huggingface.co/Qwen/Qwen3-235B-A22B#best-practices) has important features missing from the Go template provided by Ollama (https://ollama.com/library/qwen3:latest/blobs/eb4402837c78):

No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
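The practice the Qwen3 docs describe (dropping reasoning content from historical turns) can be approximated outside the template as a pre-processing step. A rough Go sketch, assuming the model wraps its reasoning in <think>...</think> tags (verify against your model's actual output):

```go
package main

import (
	"fmt"
	"regexp"
)

// thinkBlock matches a <think>...</think> section, as emitted by
// Qwen3-style reasoning models (tag name assumed; check your model).
var thinkBlock = regexp.MustCompile(`(?s)<think>.*?</think>\s*`)

// stripThinking removes reasoning content from a historical assistant
// turn, mimicking what the Qwen3 Jinja template does for multi-turn
// history.
func stripThinking(content string) string {
	return thinkBlock.ReplaceAllString(content, "")
}

func main() {
	history := "<think>Let me reason...</think>The answer is 42."
	fmt.Println(stripThinking(history)) // prints: The answer is 42.
}
```

In the Jinja template this happens automatically at render time; with Ollama's Go templates it is left to the client, which is the point the comment above is making.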


@johnnyasantoss commented on GitHub (Aug 6, 2025):

@JasonHonKL wrote: "I think Ollama is a backend focusing on allowing user to host their LLM but not the part how developer use it. So I don't think this feature should be added."

What's the point of hosting if the LLM can't produce the expected results (without a proper template)?
Most, if not all, models come with a Jinja2 template, and Ollama is, AFAIK, the only inference server using Go templates, which makes porting models harder than it should be. Also, if it's not meant to be "the part [of] how developer use it", why does Ollama have run, create -f Modelfile, ps, etc.?

TBH this seems like a Go dev pet peeve with Jinja2.


@SorenDreano commented on GitHub (Aug 21, 2025):

This would indeed be an excellent feature to have


@eslowney commented on GitHub (Aug 21, 2025):

Quoting @snuggles4553: "Not sure, but I suspect not (correct me if I'm wrong). However, what I do know for certain is that the bundled chat template won't be used if the bundled one is a Jinja template, as Jinja chat templates are not supported as of this writing."

This is wild to me. How is anyone using Ollama at all, then? What templates are people using? Is that why many models prepend a stray "?" before continuing their response? I don't understand how to set up chat templates in Ollama at all.


@aaronpliu commented on GitHub (Nov 3, 2025):

Is no one taking this feature? It would be helpful if Ollama supported it directly.


@KhazAkar commented on GitHub (Nov 11, 2025):

Manually translating a Jinja2 template to the Go template format, which doesn't even have convenient custom functions like toJSON, is doable but takes time. Supporting Jinja2 templates via a library like gonja would be awesome.
Example template to translate is available here: https://github.com/speakleash/bielik-tools/blob/main/tools/bielik_advanced_chat_template.jinja
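For what it's worth, Go's text/template does support registering custom helpers like a toJSON through template.FuncMap; the limitation is that, as the comment notes, Ollama's template engine doesn't expose such functions to users. A minimal sketch of the mechanism (the toJSON helper here is hypothetical, not an Ollama builtin):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// toJSON is the kind of helper the comment above wishes for; Go's
// text/template can register it via FuncMap, but Ollama's engine
// currently offers no way for users to add custom functions.
func toJSON(v any) (string, error) {
	b, err := json.Marshal(v)
	return string(b), err
}

// render parses tmplText with the custom toJSON helper registered and
// executes it against data.
func render(tmplText string, data any) (string, error) {
	tmpl, err := template.New("t").
		Funcs(template.FuncMap{"toJSON": toJSON}).
		Parse(tmplText)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	err = tmpl.Execute(&sb, data)
	return sb.String(), err
}

func main() {
	out, err := render(`{{ toJSON .Args }}`,
		map[string]any{"Args": map[string]string{"city": "Paris"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"city":"Paris"}
}
```

Note that Funcs must be called before Parse, since parsing validates that every function name in the template is known.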


@jeepshop commented on GitHub (Jan 13, 2026):

This really should be a high priority: tools like RooCode no longer work with Ollama without crafting custom templates based on the Jinja templates. The rest of the LLM world (vLLM, llama.cpp, LM Studio) has all but settled on Jinja, and Ollama is starting to decay due to the lack of tool support in templates. Yes, I know I can create a custom Modelfile with a custom template, but I don't have time to learn how to convert between two very different template formats. I just need Ollama to serve models so I can get my job done.

Currently evaluating llama-swap + llama.cpp; it isn't nearly as nice as using Ollama, but tool calling just works with every last model from Hugging Face that I've tried.

Yes, I can use the low Q4 from Ollama.com, 3-4 weeks after it's released on Hugging Face, but I have the hardware to run Q6 and Q8 models and really want the extra precision.


@am009 commented on GitHub (Jan 21, 2026):

Newly released models, or small models focused on specific domains, often only support Hugging Face's transformers or vLLM, both of which use Jinja templates.

Typically the reason community models don't always seem to work is that the template is incorrect, as is the case for HY-MT1.5-1.8B.

This feature is the only obstacle to seamlessly converting models to Ollama! Many people convert existing models directly to GGUF or import them into Ollama, but because the templates aren't converted by hand, the model's behavior often becomes very strange and nearly unusable. I have run into this problem twice already.


@RangerMauve commented on GitHub (Mar 16, 2026):

The new Qwen models are barely usable in Ollama due to this limitation. The default templates just say {{ .Prompt }}, which isn't helpful.


@jeepshop commented on GitHub (Mar 16, 2026):

Make a Modelfile like the one below and try it; I've had good luck with
hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL, but you have to remove the
vision model until Ollama supports multi-part models.

Instructions for fixing multi-part models by removing vision:
https://github.com/ollama/ollama/issues/14503#issuecomment-3986898959

FROM <original model>
TEMPLATE {{ .Prompt }}
RENDERER qwen3.5
PARSER qwen3.5



Reference: github-starred/ollama#53219