[GH-ISSUE #6351] ollama tool input: string with newline character (\n) is cutoff #29744

Open
opened 2026-04-22 08:55:34 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @remon-rakibul on GitHub (Aug 14, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6351

I am using Ollama with the llama3.1 model, and a tool to draft email responses. The input passed to the tool is a string containing a \n character, which is a valid string in Python, but the string arrives at the tool cut off: I only receive the characters before the \n.

Here is the output from the console:

Finished chain.
[2024-08-14 11:48:57][DEBUG]: == [Email Action Specialist] Task output: {
"message_id": "1914ee60fc1515fa",
"summary": Dear Md. Rakibul Haque,\r\n\r\nI hope you're doing well. I wanted to reach out to schedule a meeting to discuss the upcoming project we've been planning. There are a few key details and timelines that I'd like to review with you, so please let me know if this is something you can fit into your schedule.\r\n\r\nBest regards,\r\nSolaiman Ovi",
"main_points": [
"Schedule a meeting to discuss the upcoming project",
"Review key details and timelines"
],
"user": "Solaiman Ovi",
"communication_style": "Formal",
"sender": "solaiman.arisaftech@gmail.com"
}

[2024-08-14 11:48:57][DEBUG]: == Working Agent: Email Response Writer
[2024-08-14 11:48:57][INFO]: == Starting Task: Based on the action-required emails identified, draft responses for each...
....

Entering new CrewAgentExecutor chain...
Action: Create Draft
Action Input: {"email": "solaiman.arisaftech@gmail.com", "message": "Dear Solaiman Ovi,\n\nThank you for reaching out to schedule a meeting. I'd be happy to discuss the upcoming project with you. I'll send over some availability options and we can schedule something that works best for both of us.\n\nBest regards,\nMd. Rakibul Haque", "subject": "Re: Meeting to Discuss Upcoming Project

This is what the tool prints to the console:

email: solaiman.arisaftech@gmail.com
subject: Re: Meeting to Discuss Upcoming Project
message: Dear Solaiman Ovi

As you can see, the string is cut off at the first \n; I am not getting the full string.

The odd thing is that when I use the Groq API instead of Ollama, with the same llama3.1 model, the string is not cut off and I get the full string.
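One hypothesis consistent with this difference (a sketch, not taken from the poster's stack): if the serving path emits a literal newline character inside the JSON string rather than the escaped two-character sequence \n, the payload is no longer valid JSON, and any consumer that falls back to line-oriented parsing keeps only the text before the break. The strings below are illustrative:

```python
import json

# A tool call whose newlines are properly escaped inside the JSON
# string parses cleanly and preserves the full message.
escaped = '{"message": "Dear Solaiman Ovi,\\n\\nThank you for reaching out."}'
print(json.loads(escaped)["message"])

# If a *literal* newline character appears inside the string instead,
# the payload is rejected by a strict JSON parser (control characters
# are not allowed unescaped inside JSON strings)...
raw = '{"message": "Dear Solaiman Ovi,\n\nThank you for reaching out."}'
try:
    json.loads(raw)
except json.JSONDecodeError as e:
    print("invalid JSON:", e.msg)

# ...and a consumer that falls back to reading line by line would keep
# only the text before the first newline:
print(raw.splitlines()[0])
```

Whether the escaping is lost in the model output, the server, or the framework is exactly what needs to be pinned down.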

Below are the agent, task, and tool code:

def email_response_writer(self):
    return Agent(
        role='Email Response Writer',
        goal='Draft responses to action-required emails',
        backstory=dedent("""\
            You are a skilled writer, adept at crafting clear, concise, and effective email responses.
            Your strength lies in your ability to communicate effectively, ensuring that each response is
            tailored to address the specific needs and context of the email."""),
        tools=[
            CreateGmailMessageTool.get_gmail_message,
            CreateDraftTool.create_draft,
        ],
        verbose=True,
        allow_delegation=False,
        llm=llm
    )

def draft_responses_task(self, agent):
    return Task(
        description=dedent("""\
            Based on the action-required emails identified, draft responses for each.
            Ensure that each response is tailored to address the specific needs
            and context outlined in the email.

            - Assume the persona of the user and mimic the communication style in the message.
            - Feel free to do research on the topic to provide a more detailed response, IF NECESSARY.
            - IF research is necessary, do it BEFORE drafting the response.
            - If you need to pull the message again, do it using only the message ID.

            Use the tool provided to draft each of the responses.
            When using the tool, pass the following input in JSON format:

                - email: the email address of the sender
                - message: the reply you generate for the email (format it as a raw string)
                - subject: subject of the email

            You MUST create all drafts before sending your final answer.
            """),
        expected_output=dedent("""\
            - confirmation: [Confirmation that all responses have been drafted]
        """),
        agent=agent
    )


class CreateDraftTool():
    @tool("Create Draft")
    def create_draft(email, message, subject):
        """
        Useful to create an email draft.
        """
        print('email: ', email.strip('"'))
        print('subject: ', subject.strip('"'))
        print('message: ', message.strip('"'))
        gmail = GmailToolkit()
        draft = GmailCreateDraft(api_resource=gmail.api_resource)
        result = draft.run({
            'to': [email.strip('"')],
            'subject': subject.strip('"'),
            'message': message.strip('"')
        })
        return f"\nDraft created: {result}\n"
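A side note on the tool itself: str.strip('"') removes every leading and trailing quote character, not a single matched pair. That is harmless with the inputs shown here, but worth knowing:

```python
# str.strip('"') trims *all* leading and trailing quote characters,
# not one matched pair, and leaves interior quotes alone.
print('"hello"'.strip('"'))    # hello
print('""hello""'.strip('"'))  # hello
print('say "hi"'.strip('"'))   # say "hi
```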

OS

macOS

GPU

Intel

CPU

Intel

Ollama version

0.3.6

GiteaMirror added the bug, api labels 2026-04-22 08:55:34 -05:00
Author
Owner

@mxyng commented on GitHub (Aug 15, 2024):

Can you describe the inputs and outputs of the functions above? As I read it, this is the output from the LLM and the input to create_draft:

Entering new CrewAgentExecutor chain...
Action: Create Draft
Action Input: {"email": "[solaiman.arisaftech@gmail.com](mailto:solaiman.arisaftech@gmail.com)", "message": "Dear Solaiman Ovi,\n\nThank you for reaching out to schedule a meeting. I'd be happy to discuss the upcoming project with you. I'll send over some availability options and we can schedule something that works best for both of us.\n\nBest regards,\nMd. Rakibul Haque", "subject": "Re: Meeting to Discuss Upcoming Project

At this point, the LLM and Ollama are no longer in the loop. Instead, the framework you're using (crewAI?) is solely responsible for passing the LLM output to the tool, so the framework would be the party truncating the LLM output.
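To illustrate the framework-side hypothesis, here is a hypothetical Action Input extractor (not crewAI's actual code). Without re.DOTALL, the dot in a regex does not match newlines, so a payload containing a literal newline is captured only up to the first line break:

```python
import re

# Raw LLM output in which the JSON payload spans multiple lines because
# the model emitted literal newlines rather than escaped \n sequences.
llm_output = (
    'Action: Create Draft\n'
    'Action Input: {"email": "a@b.com", "message": "Dear Solaiman Ovi,\n'
    '\n'
    'Thank you for reaching out."}'
)

# Without re.DOTALL, "." stops at the end of the "Action Input:" line,
# producing exactly the kind of truncation reported here.
match = re.search(r'Action Input: (.*)', llm_output)
print(match.group(1))

# With re.DOTALL the full multi-line payload is captured instead.
match_all = re.search(r'Action Input: (.*)', llm_output, re.DOTALL)
print(match_all.group(1))
```

Comparing the raw responses from Ollama and Groq (e.g. whether one escapes newlines and the other does not) would show which side the truncation happens on.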

Author
Owner

@remon-rakibul commented on GitHub (Aug 16, 2024):

Yes, the output of the LLM is the input to the create_draft tool; the tool takes these inputs and drafts an email.

Action: Create Draft
Action Input: {"email": "solaiman.arisaftech@gmail.com", "message": "Dear Solaiman Ovi,\n\nThank you for reaching out to schedule a meeting. I'd be happy to discuss the upcoming project with you. I'll send over some availability options and we can schedule something that works best for both of us.\n\nBest regards,\nMd. Rakibul Haque", "subject": "Re: Meeting to Discuss Upcoming Project"}

Here, email, message, and subject are passed to this function:

class CreateDraftTool():
    @tool("Create Draft")
    def create_draft(email, message, subject):
        """
        Useful to create an email draft.
        """
        print('email: ', email.strip('"'))
        print('subject: ', subject.strip('"'))
        print('message: ', message.strip('"'))
        gmail = GmailToolkit()
        draft = GmailCreateDraft(api_resource=gmail.api_resource)
        result = draft.run({
            'to': [email.strip('"')],
            'subject': subject.strip('"'),
            'message': message.strip('"')
        })
        return f"\nDraft created: {result}\n"

Yes, I am using crewAI, but when I use the Groq API instead of Ollama with the same llama3.1 model, the string is not cut off and I get the full string. If crewAI were truncating the string passed to the tool, it should also be truncated when using the Groq API with the same model, yet it only happens when I use llama3.1 through Ollama.

Can you please explain why this happens only with Ollama?

Thanks in advance.


Reference: github-starred/ollama#29744