[GH-ISSUE #1135] json response stalls? #47081

Closed
opened 2026-04-28 02:59:58 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @hemanth on GitHub (Nov 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1135

```
/tmp
❯ ollama --version
ollama version 0.1.9
```

https://github.com/jmorganca/ollama/assets/18315/d0d8ecb1-142f-464c-bb49-8d147eb3d322

Sometimes we see an empty response:

{"model":"llama2","created_at":"2023-11-15T05:46:21.685664Z","response":"{} ","done":true,"context":[29961,25580,29962,3532,14816,29903,29958,5299,829,14816,29903,6778,13,13,29911,514,592,263,270,328,2212,446,518,29914,25580,29962,6571,29871],"total_duration":216306917,"load_duration":982333,"prompt_eval_count":1,"eval_count":3,"eval_duration":192199000}
GiteaMirror added the bug label 2026-04-28 02:59:59 -05:00

@jmorganca commented on GitHub (Nov 15, 2023):

Hi @hemanth! Thanks for the issue. Would it be possible to share an example prompt that causes this? Also, I think your video may be of another project 😊


@technovangelist commented on GitHub (Dec 4, 2023):

@hemanth can you provide an example of this?


@hemanth commented on GitHub (Dec 5, 2023):

Just the video. I didn't change any prompt.

@technovangelist @jmorganca

<img width="764" alt="image" src="https://github.com/jmorganca/ollama/assets/18315/7ca82fb2-63fb-4024-9c03-8af08026c86a">

@technovangelist commented on GitHub (Dec 8, 2023):

But the video and this screenshot don't show anything having to do with ollama. Is there anything you can show that demonstrates the problem? We would love to solve any issues you have, but we need a little info about what the problem is and how you got there.


@hemanth commented on GitHub (Dec 9, 2023):

Ah, sorry, my bad! [/me had mapped this issue mentally as the create-llama ticket :D]

Here is the correct recording...

https://github.com/jmorganca/ollama/assets/18315/b900d6ef-423d-4674-88ed-3b94ccfa2a7c


@atorr0 commented on GitHub (Jan 1, 2024):

Hi @hemanth , did you try it with newer versions (like https://github.com/jmorganca/ollama/releases/tag/v0.1.17)?


@rmallick6806 commented on GitHub (Jan 1, 2024):

Running into the same issue and I'm using v0.0.17. JSON format mode always hangs. Without the flag, I am able to get responses.


@technovangelist commented on GitHub (Jan 3, 2024):

@hemanth Have you tried with the latest version? What response are you getting?

@rmallick6806 can you tell me more about what you are trying and the result? What platform are you on?


@horiacristescu commented on GitHub (Jan 6, 2024):

Same problem, using `mistral:7b-instruct-v0.2-q5_K_M`, but it happens with any model in my experience.

I tried with v0.1.18, and every `--format json` request fails to stop: after printing the JSON it keeps emitting empty lines indefinitely, so the request effectively hangs forever.

Does the JSON format mode know when to stop? What stop words should I use? I tried `stop="\n\n\n"` without success.
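For anyone experimenting with stop sequences as a workaround: in Ollama's API they go under the request's `options` object rather than as a top-level field. A minimal sketch of such a payload (the model name and prompt are illustrative, and whether this actually prevents the trailing empty lines with `format: json` is untested here):

```python
import json

# Illustrative /api/generate payload: stop sequences live under options.stop,
# and num_predict caps generated tokens as a backstop against runaway output.
payload = {
    "model": "mistral:7b-instruct-v0.2-q5_K_M",
    "prompt": "Give me a JSON object describing a cat.",
    "format": "json",
    "stream": False,
    "options": {
        "stop": ["\n\n\n"],   # cut generation after a run of blank lines
        "num_predict": 256,   # hard upper bound on generated tokens
    },
}

print(json.dumps(payload, indent=2))
```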


@seanmavley commented on GitHub (Mar 9, 2024):

Related to #1910


@BruceMacD commented on GitHub (Mar 11, 2024):

Hi all, this is an ongoing issue with how the response is mapped to JSON. If you see this issue please try adding a system prompt that specifies a JSON response is expected.

For example in the API:

```bash
curl http://127.0.0.1:11434/api/chat -d '{
    "model": "llama2",
    "format": "json",
    "stream": false,
    "messages": [
        {
            "role": "system",
            "content": "Respond only in JSON."
        },
        {
            "role": "user",
            "content": "Give me a list of emojis to descriptions."
        }
    ]
}'
```

Resolving this issue to consolidate it with #1182, where I will continue this work.
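As a follow-up to the curl workaround above: the non-streaming `/api/chat` response carries the model's text in `message.content`, so callers can verify they actually got JSON back instead of an empty or stalled reply. A sketch with a made-up (but shape-accurate) response body:

```python
import json

# Made-up non-streaming /api/chat response, matching the documented shape.
raw = '{"model":"llama2","message":{"role":"assistant","content":"{\\"smile\\": \\"a happy face\\"}"},"done":true}'

resp = json.loads(raw)
content = resp["message"]["content"]

# With format=json the content should itself parse as JSON; an empty or
# unparseable string here is the symptom this issue describes.
data = json.loads(content)
print(data)
```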

Reference: github-starred/ollama#47081