[GH-ISSUE #1910] "format": "json" in api request causes hang due to repeated tokens #1097

Closed
opened 2026-04-12 10:50:52 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @Fuzzillogic on GitHub (Jan 10, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1910

Originally assigned to: @BruceMacD on GitHub.

When explicitly adding `"format": "json"` to an API request, the request never seems to run to completion. In the logs I can see that the model is loaded, but apart from CPU usage rising to the configured maximum, nothing happens until I abort the request.

This hangs:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:latest",
  "prompt": "Say hello.",
  "stream": false,
  "format": "json"
}'
```

This works just fine:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:latest",
  "prompt": "Say hello.",
  "stream": false
}'
```

The weird thing is, I did occasionally get some responses with `"format": "json"` present, but this example consistently fails.

I use the official Docker container (via rootless Podman), CPU only. Tested with 0.1.17, 0.1.18 and 0.1.19 on two different machines, one Intel and one AMD, both on Kubuntu 23.10, with the same results.
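For anyone trying to reproduce this from Python, here is a minimal sketch of the same two requests (assuming a local Ollama at the default port; `build_payload` and `generate` are hypothetical helper names, not part of any API). A client-side timeout makes the hang surface as a `None` return instead of blocking forever:

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_payload(fmt=None):
    # Same request body as the curl examples above; "format" is optional.
    payload = {"model": "mistral:latest", "prompt": "Say hello.", "stream": False}
    if fmt is not None:
        payload["format"] = fmt  # e.g. "json"
    return payload

def generate(fmt=None, timeout=60):
    data = json.dumps(build_payload(fmt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())["response"]
    except (TimeoutError, urllib.error.URLError):
        return None  # generation hung (or the server is unreachable)

# generate()            -> the model's reply (the variant without "format" works)
# generate(fmt="json")  -> None once `timeout` expires (the "format": "json" variant hangs)
```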

GiteaMirror added the bug label 2026-04-12 10:50:52 -05:00
Author
Owner

@jmorganca commented on GitHub (Jan 10, 2024):

To shed some light: without instructing the model to reply in JSON in the prompt, the model will sometimes output whitespace indefinitely.
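A possible stopgap while this is investigated (an assumption on my part, based on Ollama's documented `num_predict` option, which caps the number of tokens generated): bound the generation length so that even a degenerate whitespace loop terminates instead of running forever. A sketch of the request body:

```python
import json

# Hypothetical mitigation: cap generation length so a degenerate
# whitespace loop stops after num_predict tokens instead of never.
payload = {
    "model": "mistral:latest",
    "prompt": "Say hello.",
    "stream": False,
    "format": "json",
    "options": {"num_predict": 256},  # hard cap on generated tokens
}
print(json.dumps(payload, indent=2))
```

The request may still return unusable whitespace, but at least it returns.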

Author
Owner

@xsa-dev commented on GitHub (Jan 14, 2024):

I'm hitting this bug too.

Author
Owner

@Shajan commented on GitHub (Jan 14, 2024):

Repro below; it hangs after about 20 requests (Ollama 0.1.20 on Linux with a GPU, as well as on a Mac M2):

```python
import requests

def query(session):
    url = "http://localhost:11434/api/generate"
    data = {
        "model": "llama2:7b",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"temperature": 0.8}
    }

    # Hangs about every 20 requests
    with session.post(url, json=data) as response:
        if response.ok:
            return response.text
        else:
            print(response)
            return None

def main():
    total = 0
    errors = 0

    with requests.Session() as session:
        for _ in range(100):
            total += 1
            r = query(session)
            if r is None:
                errors += 1
            success_rate = 100 * ((total - errors) / total)
            print(f"{total=} {errors=} {success_rate=:.2f}")

if __name__ == "__main__":
    main()
```
Author
Owner

@xsa-dev commented on GitHub (Jan 15, 2024):

I don't see the `json` parameter in your example. Without `json`, it has been running smoothly for about 20 hours with around 10k requests and everything's working fine.

ollama version 0.1.17
Ubuntu 22.04

### Job

![image](https://github.com/jmorganca/ollama/assets/16959353/bd1267b0-8fbc-4492-8547-ba026dde3111)

### Linux GPU:

![image](https://github.com/jmorganca/ollama/assets/16959353/186331f3-db5f-49b9-9f20-7dae664a7971)

### Prompts & Json loads

I deserialize the response with `json.loads` after it arrives, and specify the format in the prompt with `JSON`.

![image](https://github.com/jmorganca/ollama/assets/16959353/48da265b-45e2-401b-a088-979b262e6f4a)

![image](https://github.com/jmorganca/ollama/assets/16959353/054942f1-9ae4-478a-93c0-1fe4c3dfe84c)
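The workaround described above can be sketched as follows (my assumptions: no `"format"` field is sent, the prompt itself asks for JSON, and the reply is parsed client-side; `extract_json` is a hypothetical helper, not part of Ollama):

```python
import json

def extract_json(text):
    """Parse the model's reply, tolerating prose around the JSON object."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in response")
    return json.loads(text[start:end + 1])

prompt = (
    "Say hello. Respond only with a JSON object of the form "
    '{"greeting": "..."} and nothing else.'
)
# reply = requests.post(...).json()["response"]  # request sent WITHOUT "format": "json"
reply = 'Sure! {"greeting": "hello"}'  # example model output
print(extract_json(reply))  # {'greeting': 'hello'}
```

Since the grammar is not enforced server-side, the parse can fail and may need a retry, but the request itself does not hang.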

Author
Owner

@ypxk commented on GitHub (Feb 19, 2024):

I'm also having this issue with Mistral, Ollama, JSON, and my M1 32 GB MacBook on Ventura 13.6. I've been working on a summarization script for a few days; I had the code working and was solely exiting/rerunning to tweak the prompt to try to improve Mistral's output. After one of the exits, I can no longer get Mistral to reliably output JSON at all; it hangs 99% of the time.

Test script from a tutorial I followed when I was trying to wrap my head around the JSON support:

```python
import requests

country = "france"

schema = {
    "city": {
        "type": "string",
        "description": "Name of the city"
    },
    "lat": {
        "type": "float",
        "description": "Decimal latitude of the city"
    },
    "lon": {
        "type": "float",
        "description": "Decimal longitude of the city"
    }
}

payload = {
    "model": "mistral",
    "messages": [
        {"role": "system", "content": f"You are a helpful AI assistant. The user will enter a country name and the assistant will return the decimal latitude and decimal longitude of the capital of the country. Output in JSON using the schema defined here: {schema}."},
        {"role": "user", "content": "japan"},
        {"role": "assistant", "content": "{\"city\": \"Tokyo\", \"lat\": 35.6748, \"lon\": 139.7624}"},
        {"role": "user", "content": country},
    ],
    "format": "json",
    "stream": False
}

response = requests.post("http://localhost:11434/api/chat", json=payload)
```

Changing the model to llama2, dolphin-mixtral, etc. works.
Removing the `"format": "json"` line works with Mistral.
And Mistral worked with this test code up until yesterday; I'd been testing various prompts with it for a few hours.

Now that it doesn't work, I can no longer get it back to working. It's like it never worked. I have tried:

- quitting Ollama from the task bar
- restarting the computer
- pip uninstalling/reinstalling the Python API
- trying this script in a different conda env from the one I was working in
- deleting all Modelfiles that use Mistral and redownloading it
- deleting Ollama and reinstalling it

Really weird.

Edit: after deleting and re-installing everything at once (previously I had only deleted Mistral OR Ollama), I think I am good to go again.

Author
Owner

@seanmavley commented on GitHub (Mar 9, 2024):

Any insights into what the workaround may be? It seems like a critical issue when using `'format': 'json'` makes Ollama hang entirely.

Initially I thought it was the switching of models in my code, but the `format` field is the culprit, because running without it responds quickly as expected (even when switching models).

This bug is a serious blocker. Why does it happen, and what's the potential workaround at the moment before it's fixed? It's kinda been bugging me for weeks now.

It's been about two months now, and everything else seems to get fixed but this bug.

It currently renders this tutorial unusable, as it strictly requires the use of `format` in the scoring of retrieved documents:
https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_crag_mistral.ipynb

The above tutorial link is a realistic repro of the issue.

cc @jmorganca

Author
Owner

@koleshjr commented on GitHub (Mar 9, 2024):

Is `format: json` on by default? Because using LangChain and ChatOllama also hangs even without the `format: json` option.

Author
Owner

@marklysze commented on GitHub (Mar 9, 2024):

Interestingly, as per my comment in the related issue [2905](https://github.com/ollama/ollama/issues/2905#issuecomment-1986761141), it works the first time but hangs on the second attempt. That seems odd?

Author
Owner

@seanmavley commented on GitHub (Mar 11, 2024):

@marklysze Yes, once in a while it works for me too, except it fails way too often, and it's random.

Author
Owner

@bitsydarel commented on GitHub (May 12, 2025):

This seems to be back with 0.6.8 on gemma3 and qwen3.


Reference: github-starred/ollama#1097