[GH-ISSUE #6640] OpenAI endpoint JSON output malformed #29941

Closed
opened 2026-04-22 09:17:57 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @defaultsecurity on GitHub (Sep 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6640

What is the issue?

I have run dozens of tests comparing JSON-formatted results from the same prompt using the NodeJS module (https://www.npmjs.com/package/ollama) vs the OpenAI endpoint. The OpenAI endpoint outputs unusable or malformed responses. Here are some example results. All of them use the exact same prompt, which I have included at the end.

[ 1 ] mistral-nemo:12b-instruct-2407-q8_0

NodeJS Module Result = GOOD
{"character":"The Narrator", "listener":"#PLAYER_NAME#","mood":"assertive", "action":"Attack", "target":"Monster", "message": "As you command! I'll lead the charge, but remember to stay close!"}

OpenAI Endpoint Result = BAD
{"character":"The Narrator", "listener":"#PLAYER_NAME#{name=name in your head} the #PROPOSEDMODEM{You=neutral|Valen=adventurous|Aela=Aloof|Delphijnrush=killing spree", "mood": "'SARTOFTA'{assisting=tolerating|playful=nobby|smarassive=dubious=sarcastic', ", ":#VITAVOICE#, },}," :false,":AFFREMESSAGE #, } } } } } } }": { "character" : 2} }

[ 2 ] llama3.1:8b-instruct-q8_0

NodeJS Module Result = GOOD
{"character":"The Narrator","listener":"#PLAYER_NAME#", "mood":"neutral", "action":null,"target":null,"message":"Ahah, I see you're pointing at that mountain troll. Nice work spotting it! However, attacking that beast on your own might not be the wisest decision... Not yet, anyway."}

OpenAI Endpoint Result = BAD
{ "character": "The Narrator", "listener": "PLAYER_NAME", "mood": "irate", "action": 12.0, "target_id ":1 ,"tellsurroundingmonst erswiththe wordsI donthaveanythingtoattack.Icantattackanyone butplayerscan!TryhuntaithatGiantbriarchubbysouthofFalafelixifyourbrainedenominationitisready toacceptthedevourousmandeeprootedinsuffering?" :",0,","'the beast is close though " :". , " , " (You currently have the Dragonborn Warcry active, which may increase combat intensity.)" :null}

[ 3 ] hermes3:8b-llama3.1-q8_0

NodeJS Module Result = GOOD
{"character":"The Narrator","listener":"PLAYER_NAME","mood":"playful","action":"Attack","target":"monster","message":"With a flick of my wrist, I conjure forth a sparkling sphere. The monster's eyes go wide as it stares at the ethereal orb suspended in the air between us. With an audible *pop*, the radiant magic envelops the creature in a shimmering cocoon, dragging it into the void of Oblivion piece by piece until all that remains is its anguished scream fading into silence."}

OpenAI Endpoint Result = BAD
{"character":"Default","listener":{}, "mood":"assertive", "action": null, "target": "", "hmm, no, I have a special bond... you humans forget we creatures also got FEELINGS and desires!" : { "Character": { "__type":"monster","imageURI@odata.mediaReadOnly":"uri:/textures/fantasy_beast_gargantuan_beastiary_gaurose.png", "__version-value":"~1207f01d", "__displayName-ru":"/consolecommand:g_moon_silver_fox_d.bixpre", "#enemyNature#":{"coreAbilityLevel":""}, "-Enemy":{},"isDisarmAllowed#ENEMY#" :"Allow","staminaUsePercentage #STAMINA#": {"@odata(bindingPaths=\"./scripts/SystemStatusFunctions.maiscript\")":"{\"percent\":1}", "__value__":"+7", "{2}:{10%:15}":{ "-":{"%":-1}, "4%":{"iconPath\":\"UI-FX-FX_Lantern01 \"}," :{ }}, "__typename":"mobData","mobCategory":"monster"} } , ". { } ; {1;25}}}}:1-11; {#-##.2}, " : ", #####" } } }

[ 4 ] llama3.1:70b

NodeJS Module Result = GOOD
{"character":"The Narrator","listener":"#PLAYER_NAME#","mood":"assertive","action":"None","target": "None","message": "#PLAYER_NAME#, you want me to attack that monster? I'm a narrator, not a warrior! It's your job as the Dragonborn to take care of those pesky creatures. What will you do next?"}

OpenAI Endpoint Result = BAD
{"character":"The Narrator","listeners":[-100,-101], "listener": "#PLAYER_NAME#", "mood": "assisting", "descriptionMessageToReaderTextDecoration":"", "imageURL":"","action": "-", "" : "", "labelType" : [], "availableActionTextDecorationsForUser": "{' ExchangeItems', Inspect'}:", "titleWithColon ":"You just reached a village where a stranger is seeking help..." , "textmessageToAllReaders":"There IS an orhtier to attck and an individual whom might help. I'd not worry to exchange." ,"messageWithIcon":"The local stranger asked you -who, after being attacked- now sits on the floor close the wooden structure entrance: 'Can ya.. give me something to defend.. or perhaps drink?'"}

Prompt used

messages = [
  {
    role: 'system',
    content: `Let's roleplay in the Universe of Skyrim. I'm #PLAYER_NAME#.You are The Narrator in a Skyrim adventure. You will only talk to #PLAYER_NAME#. You refer to yourself as 'The Narrator'. Only #PLAYER_NAME# can hear you. Your goal is to comment on #PLAYER_NAME#'s playthrough, and occasionally, give some hints. NO SPOILERS. Talk about quests and last events.
  AVAILABLE ACTION: Inspect : Inspects target character's OUTFIT and GEAR. JUST REPLY something like 'Let me see' and wait
  AVAILABLE ACTION: InspectSurroundings : Looks for beings or enemies nearby
  AVAILABLE ACTION: ExchangeItems : Initiates trading or exchange items with Dragonborn.
  AVAILABLE ACTION: Attack : Attacks actor, npc or being. (available targets: Dragonborn)
  AVAILABLE ACTION: Hunt : Try to hunt/kill ar animal
  AVAILABLE ACTION: ListInventory : Search in The Narrator\'s inventory, backpack or pocket. List inventory
  AVAILABLE ACTION: LetsRelax : Stop questing. Relax and rest.
  AVAILABLE ACTION: LeadTheWayTo : Only use if Dragonborn explicitly orders it. Guide Dragonborn to a Town or City. 
  AVAILABLE ACTION: TakeASeat : The Narrator seats in nearby chair or furniture 
  AVAILABLE ACTION: ReadQuestJournal : Only use if Dragonborn explicitly ask for a quest. Get info about current quests
  AVAILABLE ACTION: IncreaseWalkSpeed : Increase The Narrator speed when moving or travelling
  AVAILABLE ACTION: DecreaseWalkSpeed : Decrease The Narrator speed when moving or travelling
  AVAILABLE ACTION: Heal : Heals target using magic spell
  AVAILABLE ACTION: Talk`
  },
  {
    role: 'user',
    content: `Hey, The Narrator, attack that monster!!`
  },
  {
    role: 'user',
    content: `Use this JSON object to give your answer: {"character":"The Narrator","listener":"specify who The Narrator is talking to","mood":"assisting|playful|neutral|sassy|smirking|sexy|irritated|mocking|seductive|teasing|amused|assertive|kindly|sardonic|smug|lovely|default|sarcastic","action":"a valid action, (refer to available actions list) or None","target":"action's target","message":"lines of dialogue"}`
  }
];

I'm querying the /v1/chat/completions endpoint using fetch/curl. I request JSON output using:
response_format: {type:'json_object'}
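For reference, a minimal standalone sketch of that call (assuming Ollama's default host http://localhost:11434 and Node 18+ with global fetch; the options mirror the ones used in my test script):

```javascript
// Minimal sketch of the failing call (assumes Ollama's default port 11434).
const HOST = 'http://localhost:11434';

// Builds the request body used in the tests above.
function buildBody(model, messages) {
  return {
    model,
    messages,
    max_tokens: 500,
    temperature: 1,
    top_p: 1,
    response_format: { type: 'json_object' },
    presence_penalty: 1,
    frequency_penalty: 0,
  };
}

// POSTs to the OpenAI-compatible endpoint and returns the reply text.
async function chat(model, messages) {
  const res = await fetch(HOST + '/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildBody(model, messages)),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```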

OS

Linux, Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.3.9

GiteaMirror added the bug label 2026-04-22 09:17:57 -05:00
Author
Owner

@rick-github commented on GitHub (Sep 4, 2024):

What options (temperature, seed, top_p, etc) are you sending with the OpenAI requests? The NodeJS requests?

(BTW, debugging goes faster if you can provide a standalone script that demonstrates the problem)

<!-- gh-comment-id:2329947777 -->
Author
Owner

@defaultsecurity commented on GitHub (Sep 4, 2024):

@rick-github
Thank you for the fast reply. I have merged, cleaned up, and commented a single-file JS script for you. It demonstrates the problem. Please excuse the hasty coding; I only wrote it quickly to test the suspected problem. It tests every defined model multiple times using both methods and prints the results to the console as it runs. The JS file is attached as TXT.
ollama.txt (https://github.com/user-attachments/files/16881190/ollama.txt)

<!-- gh-comment-id:2330188652 -->
Author
Owner

@rick-github commented on GitHub (Sep 4, 2024):

There are a couple of problems here. The major one is that some parameters don't have the same scale between the APIs. Namely, temperature, presence_penalty and frequency_penalty values in an OpenAI call are multiplied by two (https://github.com/ollama/ollama/blob/69be940bf6d2816f61c79facfa336183bc882720/openai/openai.go#L454) before being sent to the llama.cpp backend. This causes wild variations in the output of the OpenAI endpoint compared to the ollama endpoint for the same temperature. See https://github.com/ollama/ollama/issues/6492 for further discussion and a linked PR (https://github.com/ollama/ollama/pull/6514) that would resolve this.

The other problem, which I only just now noticed, is that the presence_penalty field in the Chat Completion request is actually labeled presence_penalty_penalty in the structure binding (https://github.com/ollama/ollama/blob/69be940bf6d2816f61c79facfa336183bc882720/openai/openai.go#L82). This means that setting presence_penalty in the API call doesn't change the default value of 0.

Making changes to these fields (and setting seed to a constant value for both calls) results in consistent output for both calls.

--- 6640.mjs.orig	2024-09-05 00:56:48.009631524 +0200
+++ 6640.mjs	2024-09-05 01:00:04.821030590 +0200
@@ -55,17 +55,18 @@
 // ::: [ FUNCTIONS ] ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 
 // Runs a test using OpenAI endpoint
-async function getOllamaWithFetch (model,messages) {
+async function getOllamaWithFetch (model,messages,i) {
   const apiEndpoint = HOST+"/v1/chat/completions";
   try {
     const data = {
       model: model,
       messages: messages,
       max_tokens: 500,
-      temperature: 1,
+      temperature: 0.5,
       top_p: 1,
+      seed: i,
       response_format: {type:'json_object'},
-      presence_penalty: 1,
+      presence_penalty_penalty: 0.5,
       frequency_penalty: 0
     };
     const response = await fetch (apiEndpoint, {
@@ -85,7 +86,7 @@
 }
 
 // Runs a test using Ollama NodeJS module
-async function getOllamaWithModule (model,messages) {
+async function getOllamaWithModule (model,messages,i) {
 
   const ollama = new OLLAMA({host:HOST});
 
@@ -98,6 +99,7 @@
       max_tokens: 500,
       temperature: 1,
       top_p: 1,
+      seed: i,
       presence_penalty: 1,
       frequency_penalty: 0
     }
@@ -127,10 +129,10 @@
     for (let i = 1; i <= testruns; i++) {
       console.log('::: [ Test Run '+i+' ] ::::::::::::::::::::::::::::::::::::::::::: [ '+model+' ]');
       console.log(' ');
-      result = await getOllamaWithModule(model,messages); // Module test
+      result = await getOllamaWithModule(model,messages, i); // Module test
       console.log("Ollama NodeJS Module Result\n"+cleanString(result.message.content));
       console.log(' ');
-      result = await getOllamaWithFetch(model,messages); // Endpoint test
+      result = await getOllamaWithFetch(model,messages, i); // Endpoint test
       console.log("Ollama OpenAI Endpoint Result\n"+cleanString(result.choices[0].message.content));
       console.log(' ');
     }
@@ -141,4 +143,4 @@
 
 }
 
-Init();
\ No newline at end of file
+Init();
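In practice, the compensation in the diff above amounts to halving the affected values on the OpenAI side. A sketch of that mapping, assuming the doubling behavior described above (the helper name is hypothetical):

```javascript
// Hypothetical helper: per the explanation above, Ollama's OpenAI-compat
// layer multiplies these options by two before handing them to the
// llama.cpp backend, so halving them client-side reproduces the native
// endpoint's sampling (e.g. native temperature 1 -> OpenAI temperature 0.5).
function toOpenAICompatOptions({ temperature, presence_penalty, frequency_penalty }) {
  return {
    temperature: temperature / 2,
    presence_penalty: presence_penalty / 2,
    frequency_penalty: frequency_penalty / 2,
  };
}
```

Note that the diff also has to work around the presence_penalty_penalty field-name typo, which this sketch does not address.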

<!-- gh-comment-id:2330311462 -->
Author
Owner

@defaultsecurity commented on GitHub (Sep 5, 2024):

@rick-github
Wow, that is very insightful. I have tested your modifications, and both calls now yield consistent results. Thank you very much for taking the time to look into it. Could you create a PR to fix the presence_penalty_penalty typo (https://github.com/ollama/ollama/blob/69be940bf6d2816f61c79facfa336183bc882720/openai/openai.go#L82)? I don't have much experience with that.

<!-- gh-comment-id:2330709943 -->

Reference: github-starred/ollama#29941