[GH-ISSUE #6026] The 1k context limit in Open-WebUI request is causing low-quality responses. #29532

Closed
opened 2026-04-22 08:30:16 -05:00 by GiteaMirror · 13 comments

Originally created by @anrgct on GitHub (Jul 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6026

What is the issue?

When using open-webui, I've noticed that long contextual messages sent to ollama consistently result in poor responses. After investigating the issue, it appears that the /api/chat and /v1/chat/completions endpoints are defaulting to a 1k context limit. This means that when the content exceeds this length, the system automatically discards the earlier portions, leading to subpar answers. What follows is the captured network request data for open-webui version 0.3.8.

```
curl 'http://localhost:11434/api/chat' \
  -X POST \
  -H 'Host: localhost:11434' \
  -H 'Accept: */*' \
  -H 'User-Agent: Python/3.11 aiohttp/3.9.5' \
  -H 'Content-Type: text/plain; charset=utf-8' \
  --data-raw '{"model": "qwen1_5-4b-chat-q4_k_m", "messages": [{"role": "user", "content": "<long context>"}], "options": {}, "stream": true}'
```

Based on the final response, we can observe that the prompt_eval_count is 1026, which indicates that only approximately 1,000 tokens of context were processed.

{"role":"assistant","content":""},"done_reason":"stop","done":true,"total_duration":9987503333,"load_duration":28999667,"prompt_eval_count":1026,"prompt_eval_duration":1896469000,"eval_count":238,"eval_duration":8059779000}

I'm uncertain whether I should submit an issue about this bug to the open-webui repository.
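
For reference, the /api/chat endpoint does accept a context window in the options field. Below is a minimal sketch of the same request with num_ctx set explicitly (the 8192 value is an assumption; it has to fit the model's supported window and available memory):

```
# Sketch: same request, but asking the server for an 8192-token context window
curl http://localhost:11434/api/chat \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "qwen1_5-4b-chat-q4_k_m",
        "messages": [{"role": "user", "content": "<long context>"}],
        "options": {"num_ctx": 8192},
        "stream": false
      }'
```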

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.3.0

GiteaMirror added the bug label 2026-04-22 08:30:16 -05:00

@rick-github commented on GitHub (Jul 28, 2024):

https://github.com/ollama/ollama/issues/5965#issuecomment-2252354726


@rick-github commented on GitHub (Jul 28, 2024):

You can also set a context length for a model in open-webui. Go to Workspace > Models > Create a model, choose the base model, scroll down to Model Params and under Advanced Params, set the context length.


@anrgct commented on GitHub (Jul 28, 2024):

Thank you for your response! I've also discovered another bug. When I add "options": {"num_ctx": 32000} to the /v1/chat/completions endpoint, it doesn't take effect and the context length remains at 1k. This option works correctly on the /api/chat and /api/generate endpoints.


@rick-github commented on GitHub (Jul 28, 2024):

Probably because the OpenAI API doesn't support specifying the context window, just max_tokens. So yes, that would be an appropriate feature request for ollama.
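
For reference, a sketch of what the OpenAI-compatible endpoint does accept: max_tokens caps the generated output only and does not change the context window, which currently has to come from the model definition (the 512 value is an assumption):

```
# Sketch: OpenAI-compatible request; max_tokens limits output length, not context size
curl http://localhost:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "qwen1_5-4b-chat-q4_k_m",
        "messages": [{"role": "user", "content": "<long context>"}],
        "max_tokens": 512
      }'
```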


@chris-31337 commented on GitHub (Jul 29, 2024):

> You can also set a context length for a model in open-webui. Go to Workspace > Models > Create a model, choose the base model, scroll down to Model Params and under Advanced Params, set the context length.

@rick-github I tried setting the context length both according to your instructions (Workspace > Models) as well as in the general settings (Settings > General > Advanced Parameters), but I am still receiving poor responses on long inputs via the web UI; see also [here](https://github.com/ollama/ollama/issues/2714#issuecomment-2254455550) for my test results.

I suspect this is indeed a bug in open-webui, since in my tests the LLM responses are fine when interacting with ollama directly in the console on the server, but not when using the web UI. However, my knowledge of all this is very limited, and the discussion here seems to indicate some API-level responsibility in ollama as well, so I'm not sure whether I should open a bug there. I'd appreciate it if you could find the time to explain why this should not be opened as an issue in open-webui right away.

Also, if either of you know of a workaround, I'd be most grateful if you could explain it to me, since I'd prefer to use the webui interface rather than opening a shell into the docker image on the server every time I have a long prompt.


@rick-github commented on GitHub (Jul 29, 2024):

A bit more information is needed. If you supply a server log (docker logs <name-of-ollama-container>) it can be checked for truncation messages. It would also be useful to see the actual HTTP request that open-webui is sending. Unfortunately that's not logged, so a tool needs to be installed to capture that:

```
$ docker exec -it <name-of-ollama-container> apt install -y tcpflow
$ docker exec -it <name-of-ollama-container> tcpflow -c 'src port 11434 or dst port 11434' > ollama-open-webui.log
```

Run this, use open-webui for a long context conversation, ^C tcpflow, remove any PII from the log, and add it here.


@chris-31337 commented on GitHub (Aug 6, 2024):

@rick-github Sorry for my delayed response and thank you very much for your instructions on how to generate meaningful logs from the docker container.

Because I had copyright concerns about my original test prompt (which involved asking questions about a full scientific text), I have recreated the test scenario with a public-domain excerpt from [Fitzgerald's classic novel](https://gutenberg.org/ebooks/64317) (Chapters 1-3), arriving at a prompt of roughly 90k characters, comparable to my original scientific text (roughly 20k tokens according to the OpenAI tokenizer).

Please find attached the full prompt as well as the requested docker and tcpflow logs:
[full-prompt.txt](https://github.com/user-attachments/files/16511316/full-prompt.txt)
[ollama-docker.log](https://github.com/user-attachments/files/16511322/ollama-docker.log)
[ollama-open-webui.log](https://github.com/user-attachments/files/16511323/ollama-open-webui.log)

As in my previous tests, running the same prompt and test questions through console worked significantly better. The test prompt looked like this:

> You are responsible for reading stories and then faithfully answering questions about the text. When someone gives you a story input, you should not summarize the text and only wait for questions. Importantly, upon receiving a large text, just confirm that you have read the text:
> (followed by the novel excerpt, for full prompt see attachment)

Here is a comparison of the best results (validity of answers indicated by myself):

Console Interaction

Relevant settings/procedure

```
sudo docker exec -it (ollama-containerid) bash
ollama run llama3.1
/set parameter num_ctx 128000
(followed by pasting of prompt)
```

Immediate response to the prompt

I'm ready to answer your questions about the text. What would you like to ask? ✔️

What does the narrator say about reserving judgements?

The narrator says that he is one of the few honest people you've ever known, and that making "reserving judgings" is a matter of infinite hope. ✔️

When Nick is sitting at the table with Daisy and Miss Baker, Daisy says that Nick reminds her of an object. What is it?

Daisy mentions that Nick resembles a rose, an absolute rose. ✔️

What does Catherine say about Gatsby's royal origin?

Catherine mentions that there is a rumor among the guests that Gatsby is a nephew or a cousin of Kaiser Wilhelm's, and that this is where all his money comes from. ✔️

Someone is playing piano. What was the color of their dress?

The girl in yellow was playing the piano, and she wore a yellow dress. ✔️

In the final sentence of the excerpt, the narrator reveals a character trait of himself. What is it?

The final sentence states that "Everyone suspects himself of at least one of the cardinal virtues..." The narrator's own "cardinal virtue" is being "one of the few honest people that I have ever known." ✔️

WebUI Interaction

Relevant settings

[screenshot: context length setting in the web UI]

Immediate response to the prompt

The passage you've provided is an excerpt from F. Scott Fitzgerald's novel "The Great Gatsby", specifically Chapter 1. The narrative voice belongs to Nick Carraway, a young man who has just moved to Long Island and is recounting his experiences with Jordan Baker, a golf champion. Here are the main points of the passage: [...; unsolicited summary] ❌

What does the narrator say about reserving judgements?

At the beginning of Chapter 1, the narrator (Nick Carraway) says: "Reserving judgments is a matter of infinite hope. [...] ✔️

When Nick is sitting at the table with Daisy and Miss Baker, Daisy says that Nick reminds her of an object. What is it?

Actually, it's Jordan Baker who makes a comment about Nick resembling an object. When they are all having lunch together, Jordan says: "An indispensable personage for any girl not equipped with the necessity of personal fascination." ❌

What does Catherine say about Gatsby's royal origin?

Catherine (also known as Daisy) doesn't mention anything about Gatsby's "royal origin". However, Nick Carraway says that Gatsby had a reputation for having come from "old" money. [...] ❌

Someone is playing piano. What was the color of their dress?

You're probably thinking of Jordan Baker, who plays the piano at Tom Buchanan's house in Chapter 1. According to the text, Jordan is wearing a green dress while playing the piano. ❌

In the final sentence of the excerpt, the narrator reveals a character trait of himself. What is it?

The final sentence of the original passage I provided is not actually from the excerpt you gave me. However, based on the context of Chapter 1, one possible interpretation of a character trait revealed by the narrator (Nick Carraway) could be his tendency to observe and reflect upon others. [...] ❌

It seems clear that the responses obtained from the console interaction are much more accurate and the WebUI interaction must be based on truncated information. Since both tests were run on the same docker container and the same model, it would seem to be an issue of either the ollama API or WebUI.

Note that in WebUI I had set the context length to 128000, yet ollama-docker.log appears to show that the server is invoked with --ctx-size 8192 regardless of that setting. I am not sure if this is already the issue, or if you can spot additional problems in the log files.


@anrgct commented on GitHub (Aug 6, 2024):

@chris-31337 , "prompt_eval_count" refers to the actual length of the processed prompt, which is 934 in your ollama-open-webui.log, meaning only the last 934 tokens were used as the prompt. Didn't you modify the modelfile as suggested? It worked for me after I made the changes. Also, is this screenshot from openwebui? I haven't seen this option before.

> [#5965 (comment)](https://github.com/ollama/ollama/issues/5965#issuecomment-2252354726)
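
For reference, one way to apply the modelfile change referred to above is to bake num_ctx into a derived model. A minimal sketch (the base model, the 32768 value, and the derived model name are assumptions):

```
# Sketch: derive a model with a larger default context window
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 32768
EOF

ollama create llama3.1-32k -f Modelfile
ollama run llama3.1-32k
```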


@rick-github commented on GitHub (Aug 6, 2024):

From ollama-docker.log:

```
INFO [update_slots] input truncated | n_ctx=2048 n_erase=19335 n_keep=4 n_left=2044 n_shift=1022 tid="133210613727232" timestamp=1722942601
```

From ollama-open-webui.log:

```
have ever known."}],"options":{},"keep_alive":"15m","sess
```

There's no num_ctx in the options field, so the input text of ~21k tokens is being whittled down to 2048 by throwing away 19335 of them. What version of open-webui are you using? I'll fire up a docker container with it and see if I can replicate.
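
For anyone else checking their own setup, the same truncation warning can be searched for directly in the server log. A quick sketch (the container name is a placeholder):

```
# Look for llama.cpp truncation messages in the ollama server log
docker logs <name-of-ollama-container> 2>&1 | grep "input truncated"
```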


@chris-31337 commented on GitHub (Aug 9, 2024):

Thank you @anrgct and @rick-github for explaining how to interpret the log files. This allowed me to trace and test the issue further. During these tests, I noticed that the truncation log entries were not appearing when the WebUI was accessed (and the relevant settings changed to 128k) directly through a browser on the host server, via the locally exposed docker container port.

This led me to suspect that a misconfiguration of the Nginx proxy (caused by myself) was somehow preventing the increased context length setting from being saved or passed on properly. Indeed, after resetting my faulty Nginx config I am no longer able to reproduce the problems described above, and the tcpflow log now contains "options": {"num_ctx": 128000} for the prompt. I will test further, but I think your help allowed me to fix this. Thank you very much!
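
For anyone verifying the same fix, a quick way to confirm the parameter is present in the captured request is to grep the tcpflow output (filename from the capture step earlier in this thread):

```
# Confirm that open-webui is actually sending num_ctx to ollama
grep -o '"num_ctx": *[0-9]*' ollama-open-webui.log
```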


@anrgct: To answer your question, the screenshot was indeed from Open WebUI, please find below another screenshot (edited slightly for readability) showing where I found the setting on my end:

[screenshot: Open WebUI user-level context length setting]

However, as you pointed out, the model setting is the one that actually matters. Leaving a 128k-capable model at its default, even with the user setting from the screenshot above at 128k, does not lead to num_ctx 128000 being passed. Instead, I had to change the Context Length for that model specifically (under Workspace), as you suggested. I had tried this before, but the setting was not properly stored due to my Nginx proxy mistakes.

[screenshot: per-model Context Length setting under Workspace > Models]

Only when setting the 128k in the workspace -> models config (second screenshot) is the num_ctx 128000 passed and the LLM performs as expected. I do not know why the user setting (of the first screenshot) is ignored or even exists in the first place, given that it seems to not have the desired effect.


@anrgct commented on GitHub (Aug 9, 2024):

I just discovered this setting in open-webui. My version is v0.3.11, and it's actually working. I had no idea about it before... @chris-31337

By the way: I noticed that modifying the Context Length in Settings - Advanced Parameters doesn't actually pass that parameter in the request. Only changing the Context Length in Chat Controls - Advanced Params actually sends the parameter. This seems to be a bug in OpenWebUI.


@igorschlum commented on GitHub (Aug 10, 2024):

@anrgct Interesting issue. Since it is solved, could you please close it to reduce the number of open issues?


@anrgct commented on GitHub (Aug 10, 2024):

Thank you for your patient replies! @igorschlum @rick-github
