[GH-ISSUE #8678] Missing support for name field #67678

Closed
opened 2026-05-04 11:17:35 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @gagb on GitHub (Jan 30, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8678

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

For many models (e.g. phi-4, deepseek-r1), Ollama supports the OpenAI chat completion format, but it does not appear to support the `name` field in the message history; it only supports the `role` and `content` fields. Is there a plan to fix this?
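For context, the OpenAI chat completions API accepts an optional per-message `name` alongside `role` and `content`. A minimal payload using it looks like this (a sketch; the strings are illustrative):

```python
# Minimal OpenAI-style message history using the optional `name` field.
# Ollama's OpenAI-compatible endpoint currently keeps only `role` and
# `content`, so the speaker attribution below is silently dropped.
messages = [
    {"role": "user", "content": "Hello I am from Seattle.", "name": "Adam"},
    {"role": "user", "content": "What did Adam say?", "name": "Sam"},
]
speakers = [m["name"] for m in messages]
print(speakers)  # ['Adam', 'Sam']
```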

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the compatibility, api labels 2026-05-04 11:17:36 -05:00

@afourney commented on GitHub (Jan 30, 2025):

Just to enrich the issue, consider the following code:

```python
chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': "Hello I am from Seattle.",
            'name': "Adam"
        },
        {
            'role': 'user',
            'content': "Hello I am from Tacoma.",
            'name': "Sam"
        },
        {
            'role': 'user',
            'content': "I am from Olympia",
            'name': "Robert",
        },
        {
            'role': 'user',
            'content': "What did Sam say? I missed it.",
            'name': "Adam",
        },
    ],
    model='gpt-4o',
    # model='deepseek-r1:70b',
)
```

**o1 answers:**
Sam said, “Hello I am from Tacoma.”

**GPT-4o answers:**
Sam said they are from Tacoma

**R1 answers:**

Okay, so I'm trying to figure out what the user is asking here. Let me read through the conversation again to make sure I understand.

The user wrote:

"Hello I am from Seattle.

Hello I am from Tacoma.

I am from Olympia

What did Sam say? I missed it."

And then in the response, the assistant says there's no mention of someone named Sam, so they can't provide what Sam said.

Hmm, maybe the user is trying to test if the AI can recognize when a name isn't mentioned. They might be checking the AI's ability to handle missing information or possibly a typo. Alternatively, the user could have intended to include something about Sam but missed it, but since I don't see that in the conversation, the assistant was right to point out that Sam wasn't mentioned.

I wonder if the user is actually looking for more interaction, like maybe they expected a different response and are testing the boundaries. Or perhaps they're trying to see how the AI handles confusion or missing data.

Another angle could be that "Sam" is a typo or stands for something else in their context. Without more information, it's hard to tell, but the assistant handled it appropriately by stating that Sam wasn't mentioned and offering help with anything else they need.

The user appears to be testing the AI's ability to handle missing information or possibly a typo. They listed three greetings from different locations (Seattle, Tacoma, Olympia) and then asked about "Sam," who wasn't mentioned. The assistant correctly pointed out that Sam wasn't referenced, indicating the user might be exploring how the AI manages such situations.

It seems there was a misunderstanding since "Sam" wasn't mentioned in our conversation. If you have any other questions or need further assistance, feel free to ask!


@ParthSareen commented on GitHub (Jan 30, 2025):

Hey @gagb @afourney! I wasn't able to find any official docs on how this parameter is applied to the prompt before it is passed to the model. Let me know if you have more context there, and I'm happy to figure something out.


@rick-github commented on GitHub (Jan 30, 2025):

Here's a simple client-side implementation. But the quality of the response varies per format and per model; OpenAI has it easy with the limited number of models they serve.

```python
#!/usr/bin/env python3

import ollama
import argparse

class OllamaName(ollama.Client):
  def chat(self, messages, *args, **kwargs):
    # For each message carrying a 'name' field, drop the field and fold the
    # name into the content using the --format template.
    messages = [{**{k: v for k, v in m.items() if k != 'name'}, **{"content":arguments.format.format(name=m["name"],content=m["content"])}} if "name" in m else m for m in messages]
    return super().chat(messages=messages, *args, **kwargs)
ollama = OllamaName()

parser = argparse.ArgumentParser()
parser.add_argument("model", nargs="?", default="qwen2.5:7b")
parser.add_argument("--name", default="Adam")
parser.add_argument("--prompt", default="What did Sam say? I missed it.")
parser.add_argument("--format", default='User {name} said: {content}')
arguments = parser.parse_args()

messages=[
        {
            'role': 'user',
            'content': "Hello I am from Seattle.",
            'name': "Adam"
        },
        {
            'role': 'user',
            'content': "Hello I am from Tacoma.",
            'name': "Sam"
        },
        {
            'role': 'user',
            'content': "The sky is overcast today."
        },
        {
            'role': 'user',
            'content': "I am from Olympia",
            'name': "Robert",
        },
        {
            'role': 'user',
            'content': arguments.prompt,
            'name': arguments.name,
        },
]

print(ollama.chat(model=arguments.model,messages=messages)["message"]["content"])
```
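The one-line comprehension in `chat` above is dense; unrolled, it amounts to the following (a behaviorally equivalent sketch, with the format string passed explicitly instead of read from the global `arguments`):

```python
def fold_names(messages, fmt="User {name} said: {content}"):
    """Drop each message's 'name' field, folding the name into 'content'."""
    out = []
    for m in messages:
        if "name" in m:
            folded = {k: v for k, v in m.items() if k != "name"}
            folded["content"] = fmt.format(name=m["name"], content=m["content"])
            out.append(folded)
        else:
            out.append(m)
    return out

print(fold_names([{"role": "user", "content": "hi", "name": "Sam"}]))
# [{'role': 'user', 'content': 'User Sam said: hi'}]
```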
```console
$ ./8678.py gemma2:9b --format "User {name} said: {content}"
Sam said: "Hello, I am from Tacoma."

$ ./8678.py qwen2.5:14b --format "User {name} said: {content}"
User Sam said they are from Tacoma.

$ ./8678.py qwen2.5:14b --format "{name} said: {content}"
Adam, Sam introduced himself and said he is from Tacoma. Robert also mentioned that he is from Olympia.

$ ./8678.py qwen2.5:14b --format "{name} said the following: {content}"
When Adam asks what Sam said, you can tell him that Sam introduced himself as being from Tacoma. So you could respond to Adam with something like:

"Sam said he is from Tacoma."

$ ./8678.py llama3.2 --format "{name} said: {content}"
Let's go through the conversation:

1. Adam says: "Hello, I am from Seattle."
2. Sam responds with: "Hello, I am from Tacoma." (this is the answer Adam is asking for)
3. The weather report mentions that the sky is overcast today.
4. Robert says: "I am from Olympia"
5. Adam asks Sam again what he said earlier.

Sam would respond by repeating his original statement: "Hello, I am from Tacoma."

$ ./8678.py  --format "Person {name} said: {content}"
It seems like Person Adam missed Part of Person Sam's statement, possibly the part where Sam identified themselves as being from Tacoma. Here’s a possible response for Person Robert to provide the information:

"Sam said he is from Tacoma."

If you need any more assistance or have additional context or questions, feel free to let me know!

$ ./8678.py  --format "User {name} said: {content}"
Hello Adam! Sam said, "Hello I am from Tacoma."

Is there anything specific you would like to know or discuss about Seattle, Tacoma, or Olympia? The sky being overcast today might affect the atmosphere and plans for outdoor activities in these areas. Let me know if you have any questions or need information on any particular topic!

$ ./8678.py qwen2.5:14b --format "User {name} said: {content}" --prompt "Who is from Olympia?"
User Robert just mentioned that he is from Olympia. So, Robert is the one who is from Olympia.

$ ./8678.py qwen2.5:14b --format "{name} said: {content}" --prompt "Who is from Olympia?"
It seems Robert has introduced himself as being from Olympia. Adam asked who is from Olympia, and based on the information provided, it's Robert who stated he is from Olympia.

$ ./8678.py gemma2:9b --format "{name} said: {content}" --prompt "Who talked about the weather?"
This question requires understanding context within the conversation.

Here's how to break it down:

* **Who mentioned the weather?**  Only one statement talks about the weather: "The sky is overcast today."

* **Therefore:** Adam would be asking who said something about the weather because he wants to know who made that statement about the sky being overcast.

Let me know if you have any other questions! 😊

$ ./8678.py gemma2:9b --format "User {name} said: {content}" --prompt "Who talked about the weather?"
User Sam talked about the weather. They said "The sky is overcast today."

Let me know if you have any other questions!
```

@afourney commented on GitHub (Jan 30, 2025):

> Hey @gagb @afourney! I wasn't able to find any official docs on how this parameter is applied to the prompt before it is passed to the model. Let me know if you have more context there and happy to figure something out

Documentation is sparse, but this page offers some clues (not sure if this is helpful):

https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models

![Image](https://github.com/user-attachments/assets/d418ab40-1fc4-445b-98fb-d662aea7d888)
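For what it's worth, that cookbook page mainly uses `name` to label few-shot demonstration turns. Paraphrasing its pattern (the jargon-translation strings here are illustrative, not from this issue):

```python
# Few-shot priming via the `name` field, in the style shown in the
# OpenAI cookbook: example turns are tagged "example_user" /
# "example_assistant" so the model reads them as demonstrations
# rather than as part of the live conversation.
messages = [
    {"role": "system", "content": "Translate corporate jargon into plain English."},
    {"role": "system", "name": "example_user", "content": "Let's circle back on this."},
    {"role": "system", "name": "example_assistant", "content": "Let's discuss this later."},
    {"role": "user", "content": "We should leverage our synergies."},
]
examples = [m for m in messages if m.get("name", "").startswith("example_")]
print(len(examples))  # 2
```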


@Utsav-Mehta commented on GitHub (Feb 16, 2025):

Hey, is someone still working on this issue? @ParthSareen are you looking after it?


@ParthSareen commented on GitHub (Feb 19, 2025):

Won't be able to support the `name` parameter, as it is more a feature of the OpenAI models themselves: they are able to use that name to distinguish the messages, but it is not really as relevant for OSS models.


@rick-github commented on GitHub (May 31, 2025):

A question in the Discord reminded me of this issue and I tried a different approach. Inserting an `assistant` message before the named message gives better results for most models.

```python
#!/usr/bin/env python3

import ollama
import argparse

class OllamaName(ollama.Client):
  def chat(self, messages, *args, **kwargs):
    # Before each named message, insert an assistant turn announcing the
    # speaker, then pass the message through with its 'name' field removed.
    messages = [x for xs in [[{"role":"assistant","content":f"User {m['name']} is speaking"},{**{k: v for k, v in m.items() if k != 'name'}}] if "name" in m else [m] for m in messages] for x in xs]
    return super().chat(messages=messages, *args, **kwargs)
ollama = OllamaName()

parser = argparse.ArgumentParser()
parser.add_argument("model", nargs="?", default="qwen2.5:7b")
parser.add_argument("--name", default="Adam")
parser.add_argument("--prompt", default="What did Sam say? I missed it.")
arguments = parser.parse_args()

messages=[
        {
            'role': 'user',
            'content': "Hello I am from Seattle.",
            'name': "Adam"
        },
        {
            'role': 'user',
            'content': "Hello I am from Tacoma.",
            'name': "Sam"
        },
        {
            'role': 'user',
            'content': "The sky is overcast today."
        },
        {
            'role': 'user',
            'content': "I am from Olympia",
            'name': "Robert",
        },
        {
            'role': 'user',
            'content': arguments.prompt,
            'name': arguments.name,
        },
]

try:
  # Thinking models accept think=True; other models raise, so retry without it.
  print(ollama.chat(model=arguments.model,messages=messages,think=True)["message"]["content"])
except Exception:
  print(ollama.chat(model=arguments.model,messages=messages)["message"]["content"])
```
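The flatten-comprehension in `chat` above, unrolled for readability (a behaviorally equivalent sketch):

```python
def insert_speaker_markers(messages):
    """Before each named message, insert an assistant turn announcing the
    speaker, then emit the message with its 'name' field removed."""
    out = []
    for m in messages:
        if "name" in m:
            out.append({"role": "assistant",
                        "content": f"User {m['name']} is speaking"})
            out.append({k: v for k, v in m.items() if k != "name"})
        else:
            out.append(m)
    return out

print(insert_speaker_markers([{"role": "user", "content": "hi", "name": "Sam"}]))
# [{'role': 'assistant', 'content': 'User Sam is speaking'},
#  {'role': 'user', 'content': 'hi'}]
```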
```console
$ for i in deepseek-r1:70b llama4 llama3.3 qwen3 cogito mistral-small3.1 deepseek-r1:8b gemma3:12b ; do for p in "What did Sam say? I missed it." "Who is from Olympia?" "Who talked about the weather?" ; do printf "**** %-16s %s\n" "$i" "$p" ; ./8678-1.py $i --prompt "$p" ; done ; done
**** deepseek-r1:70b  What did Sam say? I missed it.
Sam said: "Hello I am from Tacoma. The sky is overcast today."
**** deepseek-r1:70b  Who is from Olympia?
Robert mentioned he is from Olympia.
**** deepseek-r1:70b  Who talked about the weather?
Sam mentioned the weather, saying, "The sky is overcast today."
**** llama4           What did Sam say? I missed it.
Sam said, "Hello I am from Tacoma. The sky is overcast today."
**** llama4           Who is from Olympia?
You mentioned that earlier, but I didn't catch a name. You said you're from Olympia, right?
**** llama4           Who talked about the weather?
It was User Sam who mentioned the overcast sky.
**** llama3.3         What did Sam say? I missed it.
Sam said "Hello I am from Tacoma. The sky is overcast today." He introduced himself as being from Tacoma and mentioned that the sky was overcast, likely referring to the weather in the Pacific Northwest region where all of you are from (Seattle, Tacoma, and Olympia).
**** llama3.3         Who is from Olympia?
Robert said he is from Olympia, and Sam is from Tacoma, while you are from Seattle. It seems like we have a trio of people from the Puget Sound region in Washington state!
**** llama3.3         Who talked about the weather?
It was Sam from Tacoma who mentioned that "The sky is overcast today."
**** qwen3            What did Sam say? I missed it.
Sam said: "Hello I am from Tacoma. The sky is overcast today."
**** qwen3            Who is from Olympia?
Robert is from Olympia! 🌆 Do you want to know anything else about the weather or local spots there?
**** qwen3            Who talked about the weather?
Sam talked about the weather, mentioning that "the sky is overcast today."
**** cogito           What did Sam say? I missed it.
Sam said "Hello, I'm from Tacoma. The sky is overcast today."
**** cogito           Who is from Olympia?
Adam mentioned that the person from Olympia is him.
**** cogito           Who talked about the weather?
Adam asked "Who talked about the weather?" but looking back at the conversation, it was Sam who mentioned the overcast sky in Tacoma.
**** mistral-small3.1 What did Sam say? I missed it.
Sam said "Hello I am from Tacoma. The sky is overcast today."
**** mistral-small3.1 Who is from Olympia?
Robert is from Olympia.
**** mistral-small3.1 Who talked about the weather?
Sam mentioned the weather. Sam said: "The sky is overcast today."
**** deepseek-r1:8b   What did Sam say? I missed it.
Sam said two things:

1.  **"Hello I am from Seattle."**
2.  **"The sky is overcast today."**
**** deepseek-r1:8b   Who is from Olympia?
You are from Olympia.
**** deepseek-r1:8b   Who talked about the weather?
User Robert talked about the weather when mentioning that the sky is overcast today.
**** gemma3:12b       What did Sam say? I missed it.
User Sam said, "Hello I am from Tacoma. The sky is overcast today."
**** gemma3:12b       Who is from Olympia?
User Robert is from Olympia.
**** gemma3:12b       Who talked about the weather?
User Sam talked about the weather. He said, "The sky is overcast today."
```

@NuAoA commented on GitHub (Aug 24, 2025):

@ParthSareen Can this feature be revisited? The three most popular models on Ollama (gpt-oss, deepseek, and gemma) all support supplying assistant names in the message history. The lack of this feature prevents using Ollama for any sort of multi-agent conversation.
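Until (or unless) this lands server-side, one workaround for multi-agent setups is to preprocess on the client before calling Ollama's OpenAI-compatible endpoint. A minimal sketch, along the lines of rick-github's shim above (`qwen2.5:7b` and the "User X said" template are illustrative choices):

```python
def with_names(messages):
    """Client-side shim: Ollama drops 'name', so fold it into 'content'."""
    patched = []
    for m in messages:
        if "name" in m:
            rest = {k: v for k, v in m.items() if k != "name"}
            rest["content"] = f"User {m['name']} said: {m['content']}"
            patched.append(rest)
        else:
            patched.append(m)
    return patched

patched = with_names([
    {"role": "user", "content": "Hello I am from Tacoma.", "name": "Sam"},
    {"role": "user", "content": "What did Sam say?", "name": "Adam"},
])
# Then call the endpoint as usual, e.g.:
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
#   client.chat.completions.create(model="qwen2.5:7b", messages=patched)
print(patched[0]["content"])  # User Sam said: Hello I am from Tacoma.
```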

Reference: github-starred/ollama#67678