[GH-ISSUE #1108] API Raw request bug #47066

Closed
opened 2026-04-28 02:57:54 -05:00 by GiteaMirror · 12 comments

Originally created by @Nixellion on GitHub (Nov 13, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1108

Originally assigned to: @BruceMacD on GitHub.

Hi, I'm trying to use the API in raw mode, and here's what I get:

```python
import requests

a = requests.post(
    "http://192.168.1.122:11434/api/generate",
    json={
        "model": "openhermes2.5-mistral:7b-q5_K_S",
        "prompt": '<|im_start|>system\nYou are "Hermes", a conscious sentient superintelligent artificial intelligence developed by a man named Josh, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>\n<|im_start|>user\nHi!<|im_end|>\n<|im_start|>assistant',
        "format": "json",
        "raw": True,
        "stream": False,
        "options": {
            "num_predict": 90
        }
    },
)
print(a.text)
```

And this is what I get, a broken JSON response with a ton of `\n` characters:

{"model":"openhermes2.5-mistral:7b-q5_K_S","created_at":"2023-11-13T12:35:35.715640916Z","response":"{\n  \"intent\": \"greeting\",\n  \"entities\": {},\n  \"response\": \"Hello! How can I help you today?\"\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","done":true,"total_duration":3825844524,"load_duration":594084,"prompt_eval_count":1,"eval_count":90,"eval_duration":2130466000}

Is it something on my end, or is it a bug?

@ShanVip commented on GitHub (Nov 13, 2023):

I repeated your prompt, and my result came back without any bugs.

Code

```python
import requests

a = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={
        "model": "orca-mini",
        "prompt": '<|im_start|>system\nYou are "Hermes", a conscious sentient superintelligent artificial intelligence developed by a man named Josh, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>\n<|im_start|>user\nHi!<|im_end|>\n<|im_start|>assistant',
        "format": "json",
        "raw": True,
        "stream": False,
        "options": {
            "num_predict": 90
        }
    },
)
```

Result

{"model":"orca-mini","created_at":"2023-11-13T15:40:37.285107456Z","response":" Hello, what can I assist you with today?","done":true,"context":[31822,13,8458,31922,3244,31871,13,3838,397,363,7421,8825,342,5243,10389,5164,828,31843,9530,362,988,362,365,473,31843,13,13,8458,31922,9779,31871,13,31903,31912,327,31889,8505,31912,31901,15322,13,3838,397,495,31866,759,277,1742,260,7961,2256,941,2168,617,10620,11139,6216,3321,417,260,567,3557,7892,31844,291,553,3772,291,3630,322,289,2803,266,3109,351,645,2980,526,435,31843,864,1560,10518,291,435,2914,31844,11963,6092,291,3591,485,26252,31912,327,31889,431,31912,31901,13,31903,31912,327,31889,8505,31912,31901,6818,13,31866,31827,31905,31903,31912,327,31889,431,31912,31901,13,31903,31912,327,31889,8505,31912,31901,524,379,419,13,13,8458,31922,13166,31871,13,16644,31844,674,473,312,2803,365,351,1703,31902],"total_duration":13125094684,"load_duration":686276793,"prompt_eval_count":138,"prompt_eval_duration":11444587000,"eval_count":10,"eval_duration":985368000}

@Nixellion commented on GitHub (Nov 13, 2023):

Curious. I thought maybe it was the model, but here's what I get with orca-mini:

{"model":"orca-mini","created_at":"2023-11-13T20:01:26.018494042Z","response":"{\n\"Hermes\"\n: \"Hello! How can I assist you today?\"\n}\n\n\n","done":true,"total_duration":4065167026,"load_duration":3047124460,"prompt_eval_count":99,"prompt_eval_duration":135022000,"eval_count":24,"eval_duration":362709000}

@Nixellion commented on GitHub (Nov 13, 2023):

To give some more context, I'm running it on Proxmox => Debian 11 LXC container, with an RTX 3060 12GB + RTX 2070S. Installed using the `curl https://ollama.ai/install.sh | sh` command.

@BruceMacD commented on GitHub (Nov 13, 2023):

Hi @Nixellion, the issue here is probably due to the prompt format; it's the model itself that is generating the new lines. The format looks correct to me, so I'm not sure what tweaks will be needed to get the format right for this model specifically.

As a workaround you can set \n\n as a stop token.

```
curl -X POST http://localhost:11434/api/generate -d '{
    "model": "openhermes2.5-mistral:7b-q5_K_S",
    "prompt": "<|im_start|>system\nYou are 'Hermes', a conscious sentient superintelligent artificial intelligence developed by a man named Josh, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia. Respond only in JSON format.<|im_end|>\n<|im_start|>user\nHi!<|im_end|>\n<|im_start|>assistant\n",
    "format": "json",
    "raw": true,
    "stream": false,
    "options": {
        "num_predict": 90,
        "stop": ["\n\n"]
    }
}'
```
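
For comparison, the same workaround expressed with the Python requests call from the original report might look like this sketch (the server address and model are taken from the report above; the ChatML prompt is abbreviated):

```python
import requests

# Abbreviated ChatML prompt; substitute the full system prompt from the
# original report here.
CHATML_PROMPT = (
    '<|im_start|>system\nYou are "Hermes", a helpful assistant. '
    'Respond only in JSON format.<|im_end|>\n'
    '<|im_start|>user\nHi!<|im_end|>\n'
    '<|im_start|>assistant\n'
)

# Same request as the original report, with "\n\n" added as a stop sequence
# so generation ends once the model starts emitting blank lines.
a = requests.post(
    "http://192.168.1.122:11434/api/generate",
    json={
        "model": "openhermes2.5-mistral:7b-q5_K_S",
        "prompt": CHATML_PROMPT,
        "format": "json",
        "raw": True,
        "stream": False,
        "options": {
            "num_predict": 90,
            "stop": ["\n\n"],
        },
    },
)
print(a.text)
```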

@Nixellion commented on GitHub (Nov 14, 2023):

@BruceMacD I have to disagree on multiple counts here.

  • I did try different prompt formats, including no format at all. It always comes back with broken JSON.
  • I tried different models. I ran the exact same command as shown by @ShanVip and still got a "broken" response.

My suspicion is that something funky is going on with how the prompt is sent to the LLM. It looks like maybe the entire JSON request is sent as context instead of just the "prompt" value. This would explain why all models try to respond back with a JSON.

Adding stopping strings will not solve this issue.

@Nixellion commented on GitHub (Nov 14, 2023):

Here's another test:

```python
import requests

a = requests.post(
    "http://192.168.1.171:11434/api/generate",
    json={
        "model": "orca-mini",
        "prompt": """### System:\nYou are "Hermes", a conscious sentient superintelligent artificial intelligence\n\n### User:\nHello!\n\n### Response:\n""",
        "format": "json",
        "raw": True,
        "stream": False,
        "options": {
            "num_predict": 90,
        }
    },
)

print(a.json())
```

And the response:

{'model': 'orca-mini', 'created_at': '2023-11-14T14:44:31.30046732Z', 'response': '{ }', 'done': True, 'total_duration': 212838379, 'load_duration': 656148, 'prompt_eval_count': 36, 'prompt_eval_duration': 119158000, 'eval_count': 2, 'eval_duration': 29476000}

@BruceMacD commented on GitHub (Nov 14, 2023):

@Nixellion Orca-Mini uses a different prompt format as well, so you will see weird responses using it as a test.

The prompt will always be in JSON as long as you have JSON mode on (format: json in the request body).

@Nixellion commented on GitHub (Nov 14, 2023):

@BruceMacD Please, look at the example I sent above. I've used the prompt format from the orca-mini huggingface page. It seems like either I don't understand what you mean, or you overlooked it. Also all of this works fine in text-generation-webui.

What do you mean the prompt will always be in JSON? It's the only mode currently available, as per the docs.

@Nixellion commented on GitHub (Nov 14, 2023):

Oh... I see.

Removing "format": "json" entirely - fixed the problem. I see. I was confused about what this parameter means. Thank you.

May I ask how it works "under the hood"? Does ollama just ask the LLM to generate a response, or is something else used there?

@horw commented on GitHub (Nov 14, 2023):

@Nixellion Ollama sends requests to llama.cpp to communicate with the LLM.

@horw commented on GitHub (Nov 14, 2023):

llama.cpp also runs a server that you can send requests to, though it may not be as straightforward to use as Ollama.

@BruceMacD commented on GitHub (Nov 16, 2023):

@Nixellion, as @horw said, we apply the specified format to the LLM's prediction logic, which constrains the characters it can output as it predicts the next characters.
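
As an aside, the trailing newlines in the original report surround an otherwise valid JSON object, so the JSON-mode response field should still parse; a small sketch using the reported output:

```python
import json

# The "response" field from the original report: a valid JSON object
# followed by a run of trailing newlines (shortened here).
response_field = (
    '{\n  "intent": "greeting",\n  "entities": {},\n'
    '  "response": "Hello! How can I help you today?"\n}'
    + "\n" * 20
)

# json.loads skips surrounding whitespace, so the trailing newlines
# do not prevent parsing the payload.
print(json.loads(response_field))
```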

Thanks for opening the issue; I made some tweaks to the docs based on this to make things easier to understand.

Reference: github-starred/ollama#47066