[GH-ISSUE #4370] Ollama’s speed in generating chat content slowed down by tenfold when switching the chat format to JSON #49238

Open
opened 2026-04-28 10:58:27 -05:00 by GiteaMirror · 13 comments

Originally created by @XDesktopSoft on GitHub (May 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4370

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

I just set the chat format to JSON, and Ollama’s speed in generating chat content slowed down by tenfold.

For example, when I use the gemma7b model and the chat format is not set, I can get a chat reply in about 0.5s to 1s.
But if I set the chat format to JSON, it usually takes 6-15 seconds to get a chat reply.

Almost every LLM model behaves like this.
Is there any solution to this? Thanks.

Code example:

`curl http://localhost:11434/api/chat -d '{ "model": "llama3", "prompt": "What color is the sky at different times of the day? Respond using JSON", "format": "json", "stream": false }'`
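For reference, the documented /api/chat endpoint takes a `messages` array rather than a `prompt` field. A rough Go sketch for timing the same request with and without `"format": "json"` (field names follow the public Ollama API; the model and prompt are just placeholders) could look like this:

```go
// Rough comparison of /api/chat latency with and without "format": "json".
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func timeChat(format string) (time.Duration, error) {
	body := map[string]any{
		"model": "llama3",
		"messages": []map[string]string{
			{"role": "user", "content": "What color is the sky at different times of the day? Respond using JSON"},
		},
		"stream": false,
	}
	if format != "" {
		body["format"] = format
	}
	buf, _ := json.Marshal(body)

	start := time.Now()
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(buf))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // wait for the full (non-streamed) reply
	return time.Since(start), nil
}

func main() {
	plain, _ := timeChat("")
	jsonMode, _ := timeChat("json")
	fmt.Printf("plain: %v, format=json: %v\n", plain, jsonMode)
}
```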

OS

Windows 10

GPU

Nvidia RTX4060Ti 16GB VRAM

CPU

Intel

Ollama version

0.1.37

GiteaMirror added the performance, bug, api labels 2026-04-28 10:58:27 -05:00

@H0llyW00dzZ commented on GitHub (May 12, 2024):

> What is the issue?
>
> I just set the chat format to JSON, and Ollama’s speed in generating chat content slowed down by tenfold.
>
> For example, when I use the gemma7b model and the chat format is not set, I can get a chat reply in about 0.5s to 1s. But if I set the chat format to JSON, it usually takes 6-15 seconds to get a chat reply.
>
> Almost every LLM model behaves like this. Is there any solution to this? Thanks.
>
> Code example: `curl http://localhost:11434/api/chat -d '{ "model": "llama3", "prompt": "What color is the sky at different times of the day? Respond using JSON", "format": "json", "stream": false }'`
>
> OS: Windows 10
> GPU: Nvidia RTX4060Ti 16GB VRAM
> CPU: Intel
> Ollama version: 0.1.37

Yes, there is a solution. It requires refactoring the entire codebase by replacing the standard library `json` with a high-performance JSON library (e.g., github.com/bytedance/sonic). This can significantly reduce the overhead caused by the combination of AI processing and streaming.
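For illustration, a minimal sketch of the swap being suggested: sonic mirrors the `Marshal`/`Unmarshal` signatures of the standard library, so the change is mostly mechanical (the struct below is just an example, not Ollama's actual types).

```go
package main

import (
	"fmt"

	"github.com/bytedance/sonic"
)

type ChatResponse struct {
	Model   string `json:"model"`
	Message struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"message"`
}

func main() {
	// Before: json.Unmarshal(raw, &resp) / json.Marshal(resp) from encoding/json.
	raw := []byte(`{"model":"llama3","message":{"role":"assistant","content":"hi"}}`)

	var resp ChatResponse
	if err := sonic.Unmarshal(raw, &resp); err != nil { // drop-in for json.Unmarshal
		panic(err)
	}
	data, _ := sonic.Marshal(resp) // drop-in for json.Marshal
	fmt.Println(string(data))
}
```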


@VMinB12 commented on GitHub (May 12, 2024):

I don't know the specifics of how Ollama achieves JSON mode, but let me point out that vLLM supports outlines (https://github.com/outlines-dev/outlines) and lm-format-enforcer (https://github.com/noamgat/lm-format-enforcer) for guiding generation; see the vLLM docs (https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#extra-parameters-for-chat-api). It would be great if Ollama could support at least one of these. It could simultaneously solve this issue and significantly extend the power and utility of JSON mode, and therefore the utility of Ollama. Since both frameworks support llama.cpp, it should be possible to integrate them into Ollama.


@nikhil-swamix commented on GitHub (May 12, 2024):

Yes, I can confirm this issue; sometimes JSON formatting/schema plays an important role. My solution would be to add a two-stage parser, where the second stage either uses an ultra-small model like phi-3 or just does some regex matching to handle a few edge cases (a rough sketch of the regex variant follows below). Let me know what you think.

After an hour of searching: https://news.ycombinator.com/item?id=37125118
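A rough sketch of the second-stage repair idea, using simple trimming and extraction instead of a second model (the `extractJSON` helper below is hypothetical, not something Ollama provides):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// extractJSON trims padding around the model output, keeps only the span from
// the first '{' to the last '}', and validates the result.
func extractJSON(reply string) (string, error) {
	s := strings.TrimSpace(reply)
	start := strings.Index(s, "{")
	end := strings.LastIndex(s, "}")
	if start == -1 || end == -1 || end < start {
		return "", errors.New("no JSON object found")
	}
	candidate := s[start : end+1]
	if !json.Valid([]byte(candidate)) {
		return "", errors.New("extracted span is not valid JSON")
	}
	return candidate, nil
}

func main() {
	noisy := "Sure! Here is the answer:\n{\"sky\": \"blue\"}\n\n\n   "
	clean, err := extractJSON(noisy)
	fmt.Println(clean, err)
}
```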


@H0llyW00dzZ commented on GitHub (May 12, 2024):

> Yes, I can confirm this issue; sometimes JSON formatting/schema plays an important role. My solution would be to add a two-stage parser, where the second stage either uses an ultra-small model like phi-3 or just does some regex matching to handle a few edge cases. Let me know what you think.
>
> After an hour of searching: https://news.ycombinator.com/item?id=37125118

It's a performance issue, not an issue with the AI itself.


@XDesktopSoft commented on GitHub (May 13, 2024):

> It's a performance issue, not an issue with the AI itself.

I don't think it is just a performance issue, because if you use the command line to test the JSON format, you will see the JSON content generated quickly, but then there is a long run of blank whitespace output before the entire chat message generation ends, which wastes a lot of the time.

Just test with the command line: `ollama.exe run gemma:2b --format json "hello"`

Screenshot: json_whitespace_bug (https://github.com/ollama/ollama/assets/126927865/e0218fb9-406d-43fc-853e-e4c635cb1d79)
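A small sketch for checking how much of the stream is padding: it calls /api/generate with streaming on (mirroring the command above) and counts chunks that contain only whitespace. The request/response fields follow the documented Ollama streaming API.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "gemma:2b",
		"prompt": "hello",
		"format": "json",
		"stream": true,
	})
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var total, blank int
	scanner := bufio.NewScanner(resp.Body) // each streamed line is one JSON chunk
	for scanner.Scan() {
		var chunk struct {
			Response string `json:"response"`
			Done     bool   `json:"done"`
		}
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			continue
		}
		total++
		if chunk.Response != "" && strings.TrimSpace(chunk.Response) == "" {
			blank++ // chunk contained nothing but whitespace
		}
		if chunk.Done {
			break
		}
	}
	fmt.Printf("%d of %d streamed chunks were whitespace-only\n", blank, total)
}
```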


@H0llyW00dzZ commented on GitHub (May 13, 2024):

> > It's a performance issue, not an issue with the AI itself.
>
> I don't think it is just a performance issue, because if you use the command line to test the JSON format, you will see the JSON content generated quickly, but then there is a long run of blank whitespace output before the entire chat message generation ends, which wastes a lot of the time.
>
> Just test with the command line: `ollama.exe run gemma:2b --format json "hello"`
>
> Screenshot: json_whitespace_bug

This repo is written in Go, and you can compare performance by using profiling tools, not by testing with simple inputs like "Hello baby" or "bla bla bla".
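As a generic starting point, Go's standard runtime/pprof can wrap the code path under test in a CPU profile; this is a plain Go sketch, not something wired into Ollama's server.

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// ... run the code path under test here, e.g. issue the two requests
	// from the timing sketch earlier in this thread ...
}

// Inspect the result afterwards with: go tool pprof cpu.prof
```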


@H0llyW00dzZ commented on GitHub (May 13, 2024):

Also, regarding the whitespace blanks, that could be a bug, but I can't reproduce it since it works well on my machine with no performance problems.


@mitar commented on GitHub (May 29, 2024):

JSON format in Ollama is converted to a JSON grammar and passed to llama.cpp. llama.cpp then restricts generated tokens to match the grammar. But there are some known performance issues with that approach (https://github.com/ggerganov/llama.cpp/issues/4218).


@mitar commented on GitHub (May 29, 2024):

Once those PRs are included in the llama.cpp version used by Ollama, using a grammar should be fast: https://github.com/ggerganov/llama.cpp/pull/6811 https://github.com/ggerganov/llama.cpp/pull/7424


@mitar commented on GitHub (Jun 27, 2024):

After more testing, it seems that only "format json" is slow, while if I convert a custom JSON schema to a grammar (see #5348), there is no slowdown. I think the issue is that the current grammar for "format json" allows whitespace at the end, and thus models generate unnecessary tokens at the end, making things slow. So it is an Ollama issue and not an upstream llama.cpp issue.
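A simplified sketch of what such a grammar looks like; the exact rules Ollama generates may differ, and the suspect part is the final `ws` rule, which allows unbounded trailing whitespace after the closing brace, so the sampler keeps accepting blank tokens until the model stops on its own.

```go
package main

import "fmt"

// jsonGrammarSketch is an illustrative GBNF-style grammar of the kind that
// "format": "json" gets translated into before being handed to llama.cpp.
const jsonGrammarSketch = `
root   ::= object
object ::= "{" ws ( string ":" ws value ("," ws string ":" ws value)* )? "}" ws
value  ::= object | array | string | number | ("true" | "false" | "null") ws
array  ::= "[" ws ( value ("," ws value)* )? "]" ws
string ::= "\"" ( [^"\\] | "\\" ["\\/bfnrtu] )* "\"" ws
number ::= "-"? [0-9]+ ("." [0-9]+)? ([eE] [-+]? [0-9]+)? ws
ws     ::= ([ \t\n] ws)?
`

func main() { fmt.Print(jsonGrammarSketch) }
```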


@Kinglord commented on GitHub (Aug 7, 2024):

Hey all, I know there's an automated ping here, but to better align everyone, please check out and comment on my new call to the Ollama team for clarity here. As always, please be civil and stay on topic! 😄 - https://github.com/ollama/ollama/issues/6237


@ParthSareen commented on GitHub (Dec 5, 2024):

Hey! Keeping a close eye on this - hopefully some of the upcoming constrained decoding work will help!


@ZeyBal commented on GitHub (Nov 25, 2025):

Hello! Is it fixed?

Reference: github-starred/ollama#49238