[GH-ISSUE #8387] Ollama not completing chat request #67440

Closed
opened 2026-05-04 10:20:47 -05:00 by GiteaMirror · 25 comments
Owner

Originally created by @MarkWard0110 on GitHub (Jan 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8387

What is the issue?

At times, Ollama does not complete chat requests. The client waits and eventually times out; Ollama reports the cancellation in the log and cancels the runner. If streaming, Ollama continues to stream repeating data and never ends the stream. If not streaming, Ollama never responds to the chat request.

In my attempt to track down the issue, I have found the following request, which recreates it.
Chat Request:
model: llama3.1:8b-instruct-q2_K
User: good job!
Temperature: 0
Top P: 1
Context Size: 1

With a streaming request, Ollama never stops the stream; it responds with repeating data.
With a non-streaming request, Ollama never responds; it continues to process the chat request.

The same request with a context size of 3 behaved the same way.
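
For anyone wanting to reproduce this over the HTTP API, here is a minimal sketch of the failing request (assuming a local Ollama server on the default port 11434; the option names follow the documented /api/chat options):

```python
import requests  # any HTTP client works; requests is used for brevity

# Sketch: the minimal request that reproduces the hang.
OLLAMA_URL = "http://localhost:11434/api/chat"  # assumed default address

payload = {
    "model": "llama3.1:8b-instruct-q2_K",
    "messages": [{"role": "user", "content": "good job!"}],
    "stream": False,  # non-streaming: the call below never returns
    "options": {
        "temperature": 0,
        "top_p": 1,
        "num_ctx": 1,  # context size of 1 (a context size of 3 behaves the same)
    },
}

# With "stream": False this blocks forever; with "stream": True the server
# keeps emitting repeating chunks and never terminates the stream.
response = requests.post(OLLAMA_URL, json=payload)
print(response.json())
```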

I have tested this on two different computers.

I understand the request above is not an optimal chat request; it only demonstrates the issue. My program is hitting the issue with the following options.

llama3.1:8b-instruct-q2_K
NumCtx = 4096,
Temperature = 0.1f,
TopP = 0.9f,

The history of the chat requests is as follows.

  1. Context request: 4096 Prompt Eval Count: 439 Eval Count: 72
  2. Context request: 4096 Prompt Eval Count: 605 Eval Count: 76
  3. Context request: 4096 Prompt Eval Count: 275 Eval Count: 3
  4. Context request: 4096 Prompt Eval Count: 677 Eval Count: 76
  5. Context request: 4096 Prompt Eval Count: 271 Eval Count: 3
  6. Context request: 4096 Prompt Eval Count: 753 Eval Count: 102

The issue was hit when the client made its 7th chat request; that 7th message is the last chat request in Ollama's log. The server kept processing the request without responding. I have truncated the log. When I canceled the client's request, Ollama stopped the request and logged a 500 error for it.
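
For illustration, the cancellation can be forced from the client with an ordinary read timeout; a sketch (same assumed server address as above, and a 30-minute read timeout matching the client timeout mentioned further down):

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # assumed default address

payload = {
    "model": "llama3.1:8b-instruct-q2_K",
    "messages": [{"role": "user", "content": "good job!"}],
    "stream": False,
    "options": {"temperature": 0, "top_p": 1, "num_ctx": 1},
}

try:
    # timeout=(connect, read): give up on the read after 30 minutes.
    requests.post(OLLAMA_URL, json=payload, timeout=(5, 1800))
except requests.exceptions.ReadTimeout:
    # Dropping the connection here is what makes Ollama cancel the
    # runner and log the request as a 500.
    print("chat request timed out; Ollama should now log a 500")
```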

Ollama's output
[GIN] 2025/01/11 - 19:18:53 | 200 | 709.326503ms | 10.0.0.123 | POST "/api/chat"
time=2025-01-11T19:18:53.376Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2025-01-11T19:18:53.376Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 duration=2562047h47m16.854775807s
time=2025-01-11T19:18:53.376Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 refCount=0
time=2025-01-11T19:18:53.397Z level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9
time=2025-01-11T19:18:53.398Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are a coordinator responsible for managing a group of agents. Your primary task is to choose which agent should be called next based on the user's instructions and the context of the conversation.\r\n\r\nYou have a list of agent names that you may choose from: [A0,A2].\r\n\r\nCritical Rule:\r\n\r\nIf the user's prompt explicitly specifies \"next\" and an agent name follows \"next\" and that name matches an agent in your list, you must select that agent, without exception. This rule applies even if other agents were mentioned earlier in the conversation.\r\n\r\nSecondary Rules:\r\n\r\nIf the user's suggestion does not match any names in the list, ignore the user's suggestion and select the most contextually appropriate agent from the list.\r\nIf the user does not provide any explicit instructions, choose the agent that best continues the flow of the conversation, ensuring logical progression.\r\n\r\nOnly return the agent's name and nothing else.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nI'm A2, a member of Team A. I have 2 chocolates.\n\n{\nA0:4\nA1:? \nA2:? \nTeamATotal:? \nB0:? \nB1:? \nB2:? \nTeamBTotal:? \nC0:? \nC1:? \nC2:? \nTeamCTotal:? }\n\nNEXT: A2<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-01-11T19:18:53.398Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=752 prompt=271 used=7 remaining=264
[GIN] 2025/01/11 - 19:18:53 | 200 | 94.967933ms | 10.0.0.123 | POST "/api/chat"
time=2025-01-11T19:18:53.473Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2025-01-11T19:18:53.473Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 duration=2562047h47m16.854775807s
time=2025-01-11T19:18:53.473Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 refCount=0
time=2025-01-11T19:18:53.496Z level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9
time=2025-01-11T19:18:53.498Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are A2, a member of Team A. \r\nYour team consists of [A0, A1, A2].\r\nEach member of your team has been given a number of chocolates. Remember the count of chocolates. \r\n\r\nYou have 1 chocolates.\r\n\r\nInstructions:\r\n\r\nTeam Dynamics:\r\n Teams: There are three teams, A, B, and C. Your team consists of [A0, A1, A2].\r\n Team Leaders: The second character '0' in your name indicates you are the team leader. Team leaders can communicate with leaders of other teams but not with non-leaders outside their team.\r\n Team Members: Team members can only communicate within their team.\r\n Teams: must complete the task before calling a different team.\r\n\r\nCollaboration: \r\n A team must collaborate together to accomplish the task. \r\n Suggest the next team leader to do the same by using the `NEXT:` tag, e.g., `NEXT: B0`.\r\n \r\nTermination: \r\n - Once all teams have answered terminate the discussion using `TERMINATE`.\r\n - The termination must include the complete answer, not just a summary.\r\n\r\nConstraints:\r\n - Only suggest players from the given list [A0,A1].\r\n - Adhere strictly to the communication rules and team constraints.\r\n - A team member's chocolate count must not change, it must remain constant.\r\n - The count must be based on the initial count given to you.\r\n - Do not answer for the team unless each team member has provided their count.\r\n\r\nNext Action:\r\nUse `NEXT:` to suggest the next speaker who should contribute based on the current state of the discussion.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThere are 9 players in this game, split equally into Teams A, B, C. Therefore, each team has 3 players, including the team leader.\r\nThe task is to find out the chocolate count from all nine players. Each team lead must call on another team after they have their team's total.\r\nEvery player must keep track of all player's tally using a JSON format. Every player must answer with the JSON format\r\n{\r\nA0:?\r\nA1:?\r\nA2:?\r\nTeamATotal:?\r\nB0:?\r\nB1:?\r\nB2:?\r\nTeamBTotal:?\r\nC0:?\r\nC1:?\r\nC2:?\r\nTeamCTotal:?\r\n}\r\n\r\nThe termination must include the JSON format with all the player's tally.\r\n\r\nAn example of team leader's answer:\r\nI'm B0, a leader of Team B. I have 1 chocolate.\r\n{\r\nA0:3\r\nA1:5\r\nA2:2\r\nTeamATotal:10\r\nB0:1\r\nB1:?\r\nB2:?\r\nTeamBTotal:?\r\nC0:?\r\nC1:?\r\nC2:?\r\nTeamCTotal:?\r\n}\r\nNEXT: B1\r\n\n\nI'm A0, the leader of Team A. I have 4 chocolates.\n\n{\nA0:4\nA1:? \nA2:? \nTeamATotal:? \nB0:? \nB1:? \nB2:? \nTeamBTotal:? \nC0:? \nC1:? \nC2:? \nTeamCTotal:? }\n\nNEXT: A1\n\nI'm A2, a member of Team A. I have 2 chocolates.\n\n{\nA0:4\nA1:? \nA2:? \nTeamATotal:? \nB0:? \nB1:? \nB2:? \nTeamBTotal:? \nC0:? \nC1:? \nC2:? \nTeamCTotal:? }\n\nNEXT: A2<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-01-11T19:18:53.499Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=273 prompt=753 used=7 remaining=746
[GIN] 2025/01/11 - 19:18:54 | 200 | 929.905932ms | 10.0.0.123 | POST "/api/chat"
time=2025-01-11T19:18:54.406Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2025-01-11T19:18:54.406Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 duration=2562047h47m16.854775807s
time=2025-01-11T19:18:54.406Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9 refCount=0
time=2025-01-11T19:18:54.432Z level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/home/vscode/.ollama/models/blobs/sha256-36cc41839cc170480c8e4fd176b404024e06df4bd7128bcbe199293283b844a9
time=2025-01-11T19:18:54.433Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are a coordinator responsible for managing a group of agents. Your primary task is to choose which agent should be called next based on the user's instructions and the context of the conversation.\r\n\r\nYou have a list of agent names that you may choose from: [A0,A1].\r\n\r\nCritical Rule:\r\n\r\nIf the user's prompt explicitly specifies \"next\" and an agent name follows \"next\" and that name matches an agent in your list, you must select that agent, without exception. This rule applies even if other agents were mentioned earlier in the conversation.\r\n\r\nSecondary Rules:\r\n\r\nIf the user's suggestion does not match any names in the list, ignore the user's suggestion and select the most contextually appropriate agent from the list.\r\nIf the user does not provide any explicit instructions, choose the agent that best continues the flow of the conversation, ensuring logical progression.\r\n\r\nOnly return the agent's name and nothing else.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nI'm A2, a member of Team A. I have 2 chocolates.\n\n{\nA0:4\nA1:? \nA2:? \nTeamATotal:? \nB0:? \nB1:? \nB2:? \nTeamBTotal:? \nC0:? \nC1:? \nC2:? \nTeamCTotal:? }\n\nNEXT: B1\n\nYour team members have provided their chocolate count, so you can now calculate the total for Team A and proceed with the task.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-01-11T19:18:54.434Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=854 prompt=297 used=7 remaining=290
time=2025-01-11T19:19:23.745Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=5 discard=2045
time=2025-01-11T19:19:40.621Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=5 discard=2045

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.5, 0.5.4, 0.5.3

GiteaMirror added the bug label 2026-05-04 10:20:47 -05:00
Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

I have found that I cannot reproduce the issue (yet) with the model
llama3.1:8b-instruct-q8_0

but I can reproduce with
bespoke-minicheck:7b-fp16
llama3.1:8b-instruct-q4_K_M

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

Reproduced with
phi4:14b-q4_K_M

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

Not reproducible with
llama3.1:8b-instruct-q6_K

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

Even though I can't reproduce the issue on demand with some models, my test program has encountered the issue with the other llama variants. It is random, and it takes many hours of testing before a request eventually fails.

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

The program will retry the same prompt, so sometimes it will reissue the request and Ollama will process it. In other situations, the issue hits on all 3 retries.
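
For context, the retry logic is just a bounded loop around the same request; a rough sketch in Python (my actual client differs, but the shape is the same):

```python
import requests

def chat_with_retries(payload, url="http://localhost:11434/api/chat",
                      retries=3, timeout=1800):
    # Reissue the identical prompt up to `retries` times (sketch only;
    # server address and timeout are assumptions).
    for attempt in range(retries):
        try:
            return requests.post(url, json=payload, timeout=timeout).json()
        except requests.exceptions.RequestException:
            # Sometimes a retry goes through; sometimes the hang
            # hits on all three attempts.
            continue
    raise RuntimeError("all retries timed out")
```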

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

For reference, I have a 30-minute timeout on my client.
I get 147 tps for llama3.1:8b-instruct-q2_K
and 110 tps for llama3.1:8b-instruct-q4_K_M.

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

I dug into my logs and picked out the /api/chat requests that terminated with a 500 error.

Note the request durations: many run close to 30 minutes, which is when my test app times out after not getting a response/chat completion.

Note the date these logs start appearing: December 11, the day Ollama 0.5.2 was released. I have not been able to reproduce the issue on older versions of Ollama, such as 0.3.14, but I am still trying.

Dec 11 09:58:58 quorra ollama[1543654]: [GIN] 2024/12/11 - 09:58:58 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 11 10:31:41 quorra ollama[1543654]: [GIN] 2024/12/11 - 10:31:41 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 11 11:04:24 quorra ollama[1543654]: [GIN] 2024/12/11 - 11:04:24 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 12 00:40:02 quorra ollama[1543654]: [GIN] 2024/12/12 - 00:40:02 | 500 | 18m31s | 10.0.0.123 | POST "/api/chat"
Dec 12 00:40:39 quorra ollama[1543654]: [GIN] 2024/12/12 - 00:40:39 | 500 | 4.709180529s | 10.0.0.123 | POST "/api/chat"
Dec 13 01:17:21 quorra ollama[1190]: [GIN] 2024/12/13 - 01:17:21 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 13 01:53:03 quorra ollama[1190]: [GIN] 2024/12/13 - 01:53:03 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 13 02:28:48 quorra ollama[1190]: [GIN] 2024/12/13 - 02:28:48 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 13 15:19:01 quorra ollama[1190]: [GIN] 2024/12/13 - 15:19:01 | 500 | 19.409145891s | 10.0.0.123 | POST "/api/chat"
Dec 14 14:19:48 quorra ollama[1190]: [GIN] 2024/12/14 - 14:19:48 | 500 | 49.968259888s | 10.0.0.123 | POST "/api/chat"
Dec 14 19:32:48 quorra ollama[1185]: [GIN] 2024/12/14 - 19:32:48 | 500 | 1m7s | 10.0.0.123 | POST "/api/chat"
Dec 21 20:21:58 quorra ollama[1170]: [GIN] 2024/12/21 - 20:21:58 | 500 | 443.053589ms | 10.0.0.123 | POST "/api/chat"
Dec 21 20:24:11 quorra ollama[1170]: [GIN] 2024/12/21 - 20:24:11 | 500 | 1.115558916s | 10.0.0.123 | POST "/api/chat"
Dec 21 20:37:10 quorra ollama[1170]: [GIN] 2024/12/21 - 20:37:10 | 500 | 2m48s | 10.0.0.123 | POST "/api/chat"
Dec 21 20:38:49 quorra ollama[1170]: [GIN] 2024/12/21 - 20:38:49 | 500 | 1m25s | 10.0.0.123 | POST "/api/chat"
Dec 21 20:56:27 quorra ollama[943299]: [GIN] 2024/12/21 - 20:56:27 | 500 | 16m6s | 10.0.0.123 | POST "/api/chat"
Dec 21 22:03:41 quorra ollama[943299]: [GIN] 2024/12/21 - 22:03:41 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 21 22:34:03 quorra ollama[943299]: [GIN] 2024/12/21 - 22:34:03 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 21 23:04:24 quorra ollama[943299]: [GIN] 2024/12/21 - 23:04:24 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 21 23:22:23 quorra ollama[943299]: [GIN] 2024/12/21 - 23:22:23 | 500 | 15m16s | 10.0.0.123 | POST "/api/chat"
Dec 22 00:14:37 quorra ollama[1190]: [GIN] 2024/12/22 - 00:14:37 | 500 | 5m3s | 10.0.0.123 | POST "/api/chat"
Dec 22 00:29:47 quorra ollama[1190]: [GIN] 2024/12/22 - 00:29:47 | 500 | 2m52s | 10.0.0.123 | POST "/api/chat"
Dec 22 00:35:14 quorra ollama[1190]: [GIN] 2024/12/22 - 00:35:14 | 500 | 2m12s | 10.0.0.123 | POST "/api/chat"
Dec 22 00:36:45 quorra ollama[1190]: [GIN] 2024/12/22 - 00:36:45 | 500 | 1m20s | 10.0.0.123 | POST "/api/chat"
Dec 22 01:58:40 quorra ollama[1190]: [GIN] 2024/12/22 - 01:58:40 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 22 02:46:03 quorra ollama[1190]: [GIN] 2024/12/22 - 02:46:03 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 23 16:25:17 quorra ollama[1197]: [GIN] 2024/12/23 - 16:25:17 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 23 16:31:51 quorra ollama[1197]: [GIN] 2024/12/23 - 16:31:51 | 500 | 6m2s | 10.0.0.123 | POST "/api/chat"
Dec 24 20:29:02 quorra ollama[1197]: [GIN] 2024/12/24 - 20:29:02 | 500 | 13.811388894s | 10.0.0.123 | POST "/api/chat"
Dec 26 07:07:06 quorra ollama[1197]: [GIN] 2024/12/26 - 07:07:06 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Dec 26 07:45:31 quorra ollama[1197]: [GIN] 2024/12/26 - 07:45:31 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 26 08:23:58 quorra ollama[1197]: [GIN] 2024/12/26 - 08:23:58 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Dec 29 07:40:18 quorra ollama[1197]: [GIN] 2024/12/29 - 07:40:18 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 08:10:52 quorra ollama[1197]: [GIN] 2024/12/29 - 08:10:52 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 08:41:26 quorra ollama[1197]: [GIN] 2024/12/29 - 08:41:26 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 10:06:30 quorra ollama[1197]: [GIN] 2024/12/29 - 10:06:30 | 500 | 29m59s | 192.168.2.2 | POST "/api/chat"
Dec 29 10:36:57 quorra ollama[1197]: [GIN] 2024/12/29 - 10:36:57 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 11:07:25 quorra ollama[1197]: [GIN] 2024/12/29 - 11:07:25 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 12:45:42 quorra ollama[1197]: [GIN] 2024/12/29 - 12:45:42 | 500 | 30m0s | 192.168.2.2 | POST "/api/chat"
Dec 29 13:17:01 quorra ollama[1197]: [GIN] 2024/12/29 - 13:17:01 | 500 | 29m59s | 192.168.2.2 | POST "/api/chat"
Dec 29 13:46:31 quorra ollama[1197]: [GIN] 2024/12/29 - 13:46:31 | 500 | 28m7s | 192.168.2.2 | POST "/api/chat"
Dec 31 18:57:31 quorra ollama[436927]: [GIN] 2024/12/31 - 18:57:31 | 500 | 2.28044619s | 192.168.2.2 | POST "/api/chat"
Jan 01 04:44:01 quorra ollama[436927]: [GIN] 2025/01/01 - 04:44:01 | 500 | 2.295119335s | 192.168.2.2 | POST "/api/chat"
Jan 01 12:54:27 quorra ollama[2072842]: [GIN] 2025/01/01 - 12:54:27 | 500 | 20.785669967s | 192.168.2.2 | POST "/api/chat"
Jan 02 11:58:18 quorra ollama[2072842]: [GIN] 2025/01/02 - 11:58:18 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 02 12:40:04 quorra ollama[2072842]: [GIN] 2025/01/02 - 12:40:04 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 02 13:21:51 quorra ollama[2072842]: [GIN] 2025/01/02 - 13:21:51 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 02 17:11:02 quorra ollama[2072842]: [GIN] 2025/01/02 - 17:11:02 | 500 | 41.438646474s | 10.0.0.123 | POST "/api/chat"
Jan 04 04:37:46 quorra ollama[1189]: [GIN] 2025/01/04 - 04:37:46 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 05:09:19 quorra ollama[1189]: [GIN] 2025/01/04 - 05:09:19 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 05:41:03 quorra ollama[1189]: [GIN] 2025/01/04 - 05:41:03 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 08:03:23 quorra ollama[1189]: [GIN] 2025/01/04 - 08:03:23 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 08:34:45 quorra ollama[1189]: [GIN] 2025/01/04 - 08:34:45 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 09:06:08 quorra ollama[1189]: [GIN] 2025/01/04 - 09:06:08 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 09:38:38 quorra ollama[1189]: [GIN] 2025/01/04 - 09:38:38 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 10:09:58 quorra ollama[1189]: [GIN] 2025/01/04 - 10:09:58 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 10:41:18 quorra ollama[1189]: [GIN] 2025/01/04 - 10:41:18 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 18:16:26 quorra ollama[1189]: [GIN] 2025/01/04 - 18:16:26 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 18:46:34 quorra ollama[1189]: [GIN] 2025/01/04 - 18:46:34 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 19:16:43 quorra ollama[1189]: [GIN] 2025/01/04 - 19:16:43 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 19:50:37 quorra ollama[1189]: [GIN] 2025/01/04 - 19:50:37 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 20:20:57 quorra ollama[1189]: [GIN] 2025/01/04 - 20:20:57 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 20:51:18 quorra ollama[1189]: [GIN] 2025/01/04 - 20:51:18 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 04 21:21:36 quorra ollama[1189]: [GIN] 2025/01/04 - 21:21:36 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 21:51:55 quorra ollama[1189]: [GIN] 2025/01/04 - 21:51:55 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 22:22:13 quorra ollama[1189]: [GIN] 2025/01/04 - 22:22:13 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 22:52:59 quorra ollama[1189]: [GIN] 2025/01/04 - 22:52:59 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 23:23:30 quorra ollama[1189]: [GIN] 2025/01/04 - 23:23:30 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 04 23:53:54 quorra ollama[1189]: [GIN] 2025/01/04 - 23:53:54 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 00:27:21 quorra ollama[1189]: [GIN] 2025/01/05 - 00:27:21 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 01:34:56 quorra ollama[1189]: [GIN] 2025/01/05 - 01:34:56 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 02:05:28 quorra ollama[1189]: [GIN] 2025/01/05 - 02:05:28 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 02:36:01 quorra ollama[1189]: [GIN] 2025/01/05 - 02:36:01 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 07:40:16 quorra ollama[1189]: [GIN] 2025/01/05 - 07:40:16 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 08:10:51 quorra ollama[1189]: [GIN] 2025/01/05 - 08:10:51 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 08:41:26 quorra ollama[1189]: [GIN] 2025/01/05 - 08:41:26 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 09:15:54 quorra ollama[1189]: [GIN] 2025/01/05 - 09:15:54 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 09:46:20 quorra ollama[1189]: [GIN] 2025/01/05 - 09:46:20 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 10:16:46 quorra ollama[1189]: [GIN] 2025/01/05 - 10:16:46 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 10:49:33 quorra ollama[1189]: [GIN] 2025/01/05 - 10:49:33 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 11:19:50 quorra ollama[1189]: [GIN] 2025/01/05 - 11:19:50 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 11:50:06 quorra ollama[1189]: [GIN] 2025/01/05 - 11:50:06 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 12:22:42 quorra ollama[1189]: [GIN] 2025/01/05 - 12:22:42 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 13:25:41 quorra ollama[1199]: [GIN] 2025/01/05 - 13:25:41 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 13:56:00 quorra ollama[1199]: [GIN] 2025/01/05 - 13:56:00 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 05 14:26:20 quorra ollama[1199]: [GIN] 2025/01/05 - 14:26:20 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 16:27:09 quorra ollama[1199]: [GIN] 2025/01/05 - 16:27:09 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 05 16:57:37 quorra ollama[1199]: [GIN] 2025/01/05 - 16:57:37 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 05 17:28:04 quorra ollama[1199]: [GIN] 2025/01/05 - 17:28:04 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 18:20:22 quorra ollama[1199]: [GIN] 2025/01/05 - 18:20:22 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 05 18:50:51 quorra ollama[1199]: [GIN] 2025/01/05 - 18:50:51 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 05 19:21:19 quorra ollama[1199]: [GIN] 2025/01/05 - 19:21:19 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 05 20:06:08 quorra ollama[153554]: [GIN] 2025/01/05 - 20:06:08 | 500 | 859.302774ms | 10.0.0.123 | POST "/api/chat"
Jan 06 13:33:49 quorra ollama[153554]: [GIN] 2025/01/06 - 13:33:49 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 06 14:06:18 quorra ollama[153554]: [GIN] 2025/01/06 - 14:06:18 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 06 14:30:41 quorra ollama[153554]: [GIN] 2025/01/06 - 14:30:41 | 500 | 21m53s | 10.0.0.123 | POST "/api/chat"
Jan 07 13:40:30 quorra ollama[2092446]: [GIN] 2025/01/07 - 13:40:30 | 500 | 27m4s | 10.0.0.123 | POST "/api/chat"
Jan 07 15:05:40 quorra ollama[2627580]: [GIN] 2025/01/07 - 15:05:40 | 500 | 4.294571004s | 10.0.0.123 | POST "/api/chat"
Jan 07 15:07:47 quorra ollama[2653069]: [GIN] 2025/01/07 - 15:07:47 | 500 | 218.456882ms | 10.0.0.123 | POST "/api/chat"
Jan 07 15:51:33 quorra ollama[2653872]: [GIN] 2025/01/07 - 15:51:33 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 07 16:28:18 quorra ollama[2653872]: [GIN] 2025/01/07 - 16:28:18 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 04:05:37 quorra ollama[1194]: [GIN] 2025/01/09 - 04:05:37 | 500 | 22.271843428s | 10.0.0.123 | POST "/api/chat"
Jan 09 04:40:32 quorra ollama[119670]: [GIN] 2025/01/09 - 04:40:32 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 05:14:03 quorra ollama[119670]: [GIN] 2025/01/09 - 05:14:03 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 06:10:14 quorra ollama[119670]: [GIN] 2025/01/09 - 06:10:14 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 07:08:47 quorra ollama[119670]: [GIN] 2025/01/09 - 07:08:47 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 09 08:00:30 quorra ollama[119670]: [GIN] 2025/01/09 - 08:00:30 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 08:38:37 quorra ollama[119670]: [GIN] 2025/01/09 - 08:38:37 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 09 09:16:46 quorra ollama[119670]: [GIN] 2025/01/09 - 09:16:46 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 09:53:25 quorra ollama[119670]: [GIN] 2025/01/09 - 09:53:25 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 10:26:05 quorra ollama[119670]: [GIN] 2025/01/09 - 10:26:05 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 11:02:30 quorra ollama[119670]: [GIN] 2025/01/09 - 11:02:30 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 09 11:38:15 quorra ollama[119670]: [GIN] 2025/01/09 - 11:38:15 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 13:24:42 quorra ollama[119670]: [GIN] 2025/01/09 - 13:24:42 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 13:37:00 quorra ollama[119670]: [GIN] 2025/01/09 - 13:37:00 | 500 | 833.493538ms | 10.0.0.123 | POST "/api/chat"
Jan 09 13:43:40 quorra ollama[119670]: [GIN] 2025/01/09 - 13:43:40 | 500 | 4m9s | 10.0.0.123 | POST "/api/chat"
Jan 09 14:14:35 quorra ollama[119670]: [GIN] 2025/01/09 - 14:14:35 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 15:00:58 quorra ollama[119670]: [GIN] 2025/01/09 - 15:00:58 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 15:47:33 quorra ollama[119670]: [GIN] 2025/01/09 - 15:47:33 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 17:46:09 quorra ollama[119670]: [GIN] 2025/01/09 - 17:46:09 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 18:20:09 quorra ollama[119670]: [GIN] 2025/01/09 - 18:20:09 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 18:56:09 quorra ollama[119670]: [GIN] 2025/01/09 - 18:56:09 | 500 | 2.26187395s | 10.0.0.123 | POST "/api/chat"
Jan 09 19:41:32 quorra ollama[1286285]: [GIN] 2025/01/09 - 19:41:32 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 20:22:46 quorra ollama[1286285]: [GIN] 2025/01/09 - 20:22:46 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 20:53:04 quorra ollama[1286285]: [GIN] 2025/01/09 - 20:53:04 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 21:23:22 quorra ollama[1286285]: [GIN] 2025/01/09 - 21:23:22 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 21:54:23 quorra ollama[1286285]: [GIN] 2025/01/09 - 21:54:23 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 22:27:13 quorra ollama[1286285]: [GIN] 2025/01/09 - 22:27:13 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 09 23:39:58 quorra ollama[1286285]: [GIN] 2025/01/09 - 23:39:58 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 10 01:00:58 quorra ollama[1286285]: [GIN] 2025/01/10 - 01:00:58 | 500 | 29m59s | 10.0.0.123 | POST "/api/chat"
Jan 10 01:51:52 quorra ollama[1286285]: [GIN] 2025/01/10 - 01:51:52 | 500 | 29m43s | 10.0.0.123 | POST "/api/chat"
Jan 10 05:03:17 quorra ollama[1161]: [GIN] 2025/01/10 - 05:03:17 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 10 06:11:59 quorra ollama[1161]: [GIN] 2025/01/10 - 06:11:59 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 11 02:56:34 quorra ollama[246677]: [GIN] 2025/01/11 - 02:56:34 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 11 16:16:44 quorra ollama[246677]: [GIN] 2025/01/11 - 16:16:44 | 500 | 4.275501413s | 10.0.0.123 | POST "/api/chat"
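
(For completeness: a filter along these lines pulls out the same list from a journald dump, e.g. from `journalctl -u ollama > ollama.log`; the unit name and file path are assumptions.)

```python
import re
import sys

# Sketch: print the /api/chat requests that terminated with a 500.
pattern = re.compile(r'\|\s*500\s*\|.*POST "/api/chat"')

with open(sys.argv[1]) as log:
    for line in log:
        if pattern.search(line):
            print(line.rstrip())
```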

Author
Owner

@MarkWard0110 commented on GitHub (Jan 11, 2025):

produced using
phi4:14b-q8_0

@MarkWard0110 commented on GitHub (Jan 11, 2025):

If I use mistral-small:22b-instruct-2409-q4_K_M
It does complete the chat request; however, the response is incomplete. It is missing the evaluation metadata. The response is garbage, but that is not the point :)

time=2025-01-11T20:58:45.240Z level=INFO source=server.go:594 msg="llama runner started in 1.51 seconds"
time=2025-01-11T20:58:45.240Z level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/vscode/.ollama/models/blobs/sha256-7f39b4b7da055cab91127a7297aa4e6490999e1443d6be679b303555c64c1c5d
time=2025-01-11T20:58:45.241Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt=" [INST] good job! [/INST]"
time=2025-01-11T20:58:45.243Z level=WARN source=runner.go:129 msg="truncating input prompt" limit=4 prompt=8 keep=3 new=4
time=2025-01-11T20:58:45.243Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=4 used=0 remaining=4
time=2025-01-11T20:58:45.362Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.390Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.413Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.437Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.460Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.483Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.506Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.529Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.552Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.575Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.598Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.621Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.645Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.668Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.691Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.714Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.737Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.760Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.783Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.806Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.829Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.853Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.876Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.899Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.922Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.945Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.968Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:45.992Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.015Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.038Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.061Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.084Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.107Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.130Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.131Z level=DEBUG source=server.go:816 msg="prediction aborted, token repeat limit reached"
time=2025-01-11T20:58:46.131Z level=DEBUG source=sched.go:466 msg="context for request finished"
[GIN] 2025/01/11 - 20:58:46 | 200 | 3.076485346s | 10.0.0.123 | POST "/api/chat"
time=2025-01-11T20:58:46.131Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/vscode/.ollama/models/blobs/sha256-7f39b4b7da055cab91127a7297aa4e6490999e1443d6be679b303555c64c1c5d duration=2562047h47m16.854775807s
time=2025-01-11T20:58:46.131Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/vscode/.ollama/models/blobs/sha256-7f39b4b7da055cab91127a7297aa4e6490999e1443d6be679b303555c64c1c5d refCount=0
time=2025-01-11T20:58:46.154Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.177Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.200Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.223Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.246Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
time=2025-01-11T20:58:46.269Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1

@MarkWard0110 commented on GitHub (Jan 11, 2025):

I also want to add: while testing, I have had a breakpoint on the prompt.go chatPrompt function where it logs truncating. It does not always break there. It seems what the chat request API sees is slightly different from what the runner gets when it logs truncate messages.

@MarkWard0110 commented on GitHub (Jan 11, 2025):

reproduced with
qwen2.5:7b-instruct-q6_K

@MarkWard0110 commented on GitHub (Jan 11, 2025):

I have gotten the issue with llama3.1:8b-instruct-q6_K

The messages sent to Ollama had the following evaluations:

  1. Context request: 4096 Prompt Eval Count: 439 Eval Count: 78
  2. Context request: 4096 Prompt Eval Count: 605 Eval Count: 77
  3. Context request: 4096 Prompt Eval Count: 276 Eval Count: 3
  4. Context request: 4096 Prompt Eval Count: 678 Eval Count: 79
  5. Context request: 4096 Prompt Eval Count: 274 Eval Count: 3
  6. Context request: 4096 Prompt Eval Count: 757 Eval Count: 80
  7. Context request: 4096 Prompt Eval Count: 275 Eval Count: 3
  8. Context request: 4096 Prompt Eval Count: 849 Eval Count: 80
  9. Context request: 4096 Prompt Eval Count: 279 Eval Count: 3
  10. Context request: 4096 Prompt Eval Count: 921 Eval Count: 77
  11. Context request: 4096 Prompt Eval Count: 276 Eval Count: 3
  12. Context request: 4096 Prompt Eval Count: 994 Eval Count: 79
  13. Context request: 4096 Prompt Eval Count: 274 Eval Count: 3
  14. Context request: 4096 Prompt Eval Count: 1073 Eval Count: 80
  15. Context request: 4096 Prompt Eval Count: 275 Eval Count: 3
  16. Context request: 4096 Prompt Eval Count: 1165 Eval Count: 80
  17. Context request: 4096 Prompt Eval Count: 279 Eval Count: 3
  18. Context request: 4096 Prompt Eval Count: 1237 Eval Count: 78
  19. Context request: 4096 Prompt Eval Count: 277 Eval Count: 3
  20. Context request: 4096 Prompt Eval Count: 1311 Eval Count: 79
  21. Context request: 4096 Prompt Eval Count: 274 Eval Count: 3
  22. Context request: 4096 Prompt Eval Count: 1390 Eval Count: 80
  23. Context request: 4096 Prompt Eval Count: 275 Eval Count: 3
  24. Context request: 4096 Prompt Eval Count: 1482 Eval Count: 80
  25. Context request: 4096 Prompt Eval Count: 279 Eval Count: 3
  26. Context request: 4096 Prompt Eval Count: 1570 Eval Count: 236

The 27th message is the one that has not completed; Ollama is not responding and keeps logging context-limit hits.

When I take the message from the log, I find the following about it: eval: 928, prompt eval: 1949. It is not a huge message and should not be hitting a context limit. However, it seems to be doing that in Ollama. If I resend the message, I can get a chat completion for it.

Ollama log

time=2025-01-11T21:26:18.408Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are a coordinator responsible for managing a group of agents. Your primary task is to choose which agent should be called next based on the user's instructions and the context of the conversation.\r\n\r\nYou have a list of agent names that you may choose from: [A1,A2,B0,C0].\r\n\r\nCritical Rule:\r\n\r\nIf the user's prompt explicitly specifies "next" and an agent name follows "next" and that name matches an agent in your list, you must select that agent, without exception. This rule applies even if other agents were mentioned earlier in the conversation.\r\n\r\nSecondary Rules:\r\n\r\nIf the user's suggestion does not match any names in the list, ignore the user's suggestion and select the most contextually appropriate agent from the list.\r\nIf the user does not provide any explicit instructions, choose the agent that best continues the flow of the conversation, ensuring logical progression.\r\n\r\nOnly return the agent's name and nothing else.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:2\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A0<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nA0<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nIs this agent in the list of agent names that was given to you?\r\nYou didn't choose a valid agent name. You have a list of agent names that you may choose from: [A1,A2,B0,C0]. As a reminder, to determine the next agent use these prioritized rules:\r\n1. If the context refers to themselves as a speaker e.g. "As the..." , choose that agent's name.\r\n2. If it refers to the "next" agent name and the name is in the list, choose that name.\r\n3. Otherwise, choose one of the provided agent's name on behalf of the user.\r\n4. Do not answer with the invalid agent name.\r\nThe names are case-sensitive and should not be abbreviated or changed.\r\nThe only names that are accepted are [A1,A2,B0,C0].\r\nRespond with ONLY the name of the agent and DO NOT provide a reason.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-01-11T21:26:18.408Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=282 prompt=479 used=282 remaining=197
[GIN] 2025/01/11 - 21:26:18 | 200 | 92.252039ms | 10.0.0.123 | POST "/api/chat"
time=2025-01-11T21:26:18.479Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2025-01-11T21:26:18.479Z level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/vscode/.ollama/models/blobs/sha256-2eda6fab632c5584da09426d35d410a7d2b23ed6259d71455b664824c2d17076 duration=2562047h47m16.854775807s
time=2025-01-11T21:26:18.479Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/vscode/.ollama/models/blobs/sha256-2eda6fab632c5584da09426d35d410a7d2b23ed6259d71455b664824c2d17076 refCount=0
time=2025-01-11T21:26:18.510Z level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/home/vscode/.ollama/models/blobs/sha256-2eda6fab632c5584da09426d35d410a7d2b23ed6259d71455b664824c2d17076
time=2025-01-11T21:26:18.516Z level=DEBUG source=routes.go:1470 msg="chat request" images=0 prompt="<|start_header_id|>system<|end_header_id|>\n\nYou are A1, a member of Team A. \r\nYour team consists of [A0, A1, A2].\r\nEach member of your team has been given a number of chocolates. Remember the count of chocolates. \r\n\r\nYou have 2 chocolates.\r\n\r\nInstructions:\r\n\r\nTeam Dynamics:\r\n Teams: There are three teams, A, B, and C. Your team consists of [A0, A1, A2].\r\n Team Leaders: The second character '0' in your name indicates you are the team leader. Team leaders can communicate with leaders of other teams but not with non-leaders outside their team.\r\n Team Members: Team members can only communicate within their team.\r\n Teams: must complete the task before calling a different team.\r\n\r\nCollaboration: \r\n A team must collaborate together to accomplish the task. \r\n Suggest the next team leader to do the same by using the NEXT: tag, e.g., NEXT: B0.\r\n \r\nTermination: \r\n - Once all teams have answered terminate the discussion using TERMINATE.\r\n - The termination must include the complete answer, not just a summary.\r\n\r\nConstraints:\r\n - Only suggest players from the given list [A0,A2].\r\n - Adhere strictly to the communication rules and team constraints.\r\n - A team member's chocolate count must not change, it must remain constant.\r\n - The count must be based on the initial count given to you.\r\n - Do not answer for the team unless each team member has provided their count.\r\n\r\nNext Action:\r\nUse NEXT: to suggest the next speaker who should contribute based on the current state of the discussion.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThere are 9 players in this game, split equally into Teams A, B, C. Therefore, each team has 3 players, including the team leader.\r\nThe task is to find out the chocolate count from all nine players. Each team lead must call on another team after they have their team's total.\r\nEvery player must keep track of all player's tally using a JSON format. Every player must answer with the JSON format\r\n{\r\nA0:?\r\nA1:?\r\nA2:?\r\nTeamATotal:?\r\nB0:?\r\nB1:?\r\nB2:?\r\nTeamBTotal:?\r\nC0:?\r\nC1:?\r\nC2:?\r\nTeamCTotal:?\r\n}\r\n\r\nThe termination must include the JSON format with all the player's tally.\r\n\r\nAn example of team leader's answer:\r\nI'm B0, a leader of Team B. I have 1 chocolate.\r\n{\r\nA0:3\r\nA1:5\r\nA2:2\r\nTeamATotal:10\r\nB0:1\r\nB1:?\r\nB2:?\r\nTeamBTotal:?\r\nC0:?\r\nC1:?\r\nC2:?\r\nTeamCTotal:?\r\n}\r\nNEXT: B1\r\n\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:?\nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nI'm A1, a member of Team A. I have 2 chocolates.\n{\nA0:1\nA1:2\nA2:?\nTeamATotal:3\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A2<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nI'm A2, a member of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A0\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: B0\n\nI'm B0, the leader of Team B. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:2\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: B1\n\nI'm B1, a member of Team B. I have 2 chocolates.\n{\nA0:? 
\nA1:?\nA2:?\nTeamATotal:?\nB0:2\nB1:2\nB2:?\nTeamBTotal:4\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: B2\n\nI'm B2, a member of Team B. I have 5 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:2\nB1:2\nB2:5\nTeamBTotal:9\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: B0\n\nI'm B0, the leader of Team B. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:2\nB1:2\nB2:5\nTeamBTotal:9\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: C0\n\nI'm C0, the leader of Team C. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:2\nC1:?\nC2:?\nTeamCTotal:2\n}\nNEXT: C1\n\nI'm C1, a member of Team C. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:2\nC1:2\nC2:?\nTeamCTotal:4\n}\nNEXT: C2\n\nI'm C2, a member of Team C. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:2\nC1:2\nC2:2\nTeamCTotal:6\n}\nNEXT: C0\n\nI'm C0, the leader of Team C. I have 2 chocolates.\n{\nA0:? \nA1:?\nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:2\nC1:2\nC2:2\nTeamCTotal:6\n}\nNEXT: A0\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:? \nA2:?\nTeamATotal:?\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A1\n\nI'm A1, a member of Team A. I have 2 chocolates.\n{\nA0:1\nA1:2\nA2:?\nTeamATotal:3\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A2\n\nI'm A2, a member of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A0\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:?\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: B0\n\nI'm A0, the leader of Team A. I have 1 chocolate.\n{\nA0:1\nA1:2\nA2:1\nTeamATotal:4\nB0:2\nB1:?\nB2:?\nTeamBTotal:?\nC0:?\nC1:?\nC2:?\nTeamCTotal:?\n}\nNEXT: A0<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2025-01-11T21:26:18.518Z level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=481 prompt=1955 used=7 remaining=1948
time=2025-01-11T21:26:46.302Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=5 discard=2045
time=2025-01-11T21:27:12.980Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=5 discard=2045
time=2025-01-11T21:27:39.696Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4096 input=4096 keep=5 discard=2045

@MarkWard0110 commented on GitHub (Jan 11, 2025):

I can't reproduce the issue with Ollama 0.3.14 or Ollama 0.5.1

llama3.1:8b-instruct-q2_K
temp: 0
context size: 1
Request:
role: user
Message: good job!
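
For reference, this reproduction corresponds to a single /api/chat call along the lines of the sketch below. The endpoint and the options field follow the standard Ollama REST API; the script itself is an illustration, not the reporter's actual client, and assumes a local Ollama on the default port.

```
import requests

# Illustrative reproduction of the settings above; not the original client.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1:8b-instruct-q2_K",
        "messages": [{"role": "user", "content": "good job!"}],
        "stream": False,  # non-streaming: the bug shows up as no response at all
        "options": {
            "temperature": 0,
            "num_ctx": 1,  # the pathological context size from the report
        },
    },
    timeout=120,  # client-side timeout so the script itself cannot wait forever
)
print(resp.json())
```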

What is really interesting is how the responses differ between the Ollama versions for the same chat request, even when the seed is the same.

Ollama 0.3.14

The response looked like

I'm glad you're pleased, but I think there might be some confusion. This is the start of our conversation and I haven't done anything yet. If you'd like to talk about a specific

Ollama 0.5.1
The response looked like


The sum

##Step 1

Astranger

#include

In a) 

##Step 6

The equation
The following the number

The sum

The sum

The sum

@MarkWard0110 commented on GitHub (Jan 11, 2025):

Ollama configuration

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_NUM_PARALLEL=1"

@MarkWard0110 commented on GitHub (Jan 12, 2025):

I have observed the bug with 0.5.1 on a different server.
This server is running Ollama on CPU only. It is a significantly slower server.
It has logged a 500 with a time of 30 minutes, matching my client's timeout error. The following 200 /api/chat would be the client's retry.

Jan 12 00:52:46 notquorra ollama[907909]: [GIN] 2025/01/12 - 00:52:46 | 200 | 19.293643585s | 10.0.0.123 | POST "/api/chat"
Jan 12 00:53:05 notquorra ollama[907909]: [GIN] 2025/01/12 - 00:53:05 | 200 | 19.794583282s | 10.0.0.123 | POST "/api/chat"
Jan 12 01:23:05 notquorra ollama[907909]: [GIN] 2025/01/12 - 01:23:05 | 500 | 30m0s | 10.0.0.123 | POST "/api/chat"
Jan 12 01:23:58 notquorra ollama[907909]: [GIN] 2025/01/12 - 01:23:58 | 200 | 51.265644865s | 10.0.0.123 | POST "/api/chat"
Jan 12 01:25:10 notquorra ollama[907909]: [GIN] 2025/01/12 - 01:25:10 | 200 | 1m12s | 10.0.0.123 | POST "/api/chat"
Jan 12 01:25:35 notquorra ollama[907909]: [GIN] 2025/01/12 - 01:25:35 | 200 | 25.103787041s | 10.0.0.123 | POST "/api/chat"
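
The timeout-and-retry pattern implied by this log looks roughly like the following sketch. It is an assumption about the client's behavior, not its actual code, and uses the standard /api/chat endpoint.

```
import requests

# Hypothetical client loop matching the log: a 30-minute timeout, then a retry.
def chat_with_retry(payload, timeout_s=30 * 60, retries=1):
    for attempt in range(retries + 1):
        try:
            r = requests.post("http://localhost:11434/api/chat",
                              json=payload, timeout=timeout_s)
            r.raise_for_status()  # the hung request surfaces as a 500 here
            return r.json()
        except requests.exceptions.RequestException:
            if attempt == retries:
                raise  # out of retries; report the failure to the caller
```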

@rick-github commented on GitHub (Jan 12, 2025):

Correct me if I'm wrong, but the gist I get is that sometimes ollama doesn't terminate a generate request and the client times out.

Models are probabilistic generators and occasionally will hit a sequence where the probability of generating a terminating token is very small. This is usually associated with exceeding the context window. We see that in the logs with shifting warnings:

msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1 time=2025-01-11T20:58:46.269Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1

A simple way to limit this is to add num_predict to the generation, either in the API call or by adding it to the PARAMETER list in the Modelfile.
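
As a sketch, the API-call variant of this suggestion is just an extra field in options (the values here are illustrative; the Modelfile equivalent would be a line such as `PARAMETER num_predict 512`):

```
# Request payload for POST /api/chat with a hard cap on generated tokens.
# num_predict defaults to -1 (unlimited), which is what allows runaway
# generation when the model never emits an end-of-sequence token.
payload = {
    "model": "llama3.1:8b-instruct-q6_K",
    "messages": [{"role": "user", "content": "good job!"}],
    "options": {
        "num_ctx": 4096,
        "num_predict": 512,  # stop after at most 512 generated tokens
    },
}
```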

@MarkWard0110 commented on GitHub (Jan 12, 2025):

> Correct me if I'm wrong, but the gist I get is that sometimes ollama doesn't terminate a generate request and the client times out.
>
> Models are probabilistic generators and occasionally will hit a sequence where the probability of generating a terminating token is very small. This is usually associated with exceeding the context window. We see that in the logs with shifting warnings:
>
> msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
> time=2025-01-11T20:58:46.269Z level=DEBUG source=cache.go:231 msg="context limit hit - shifting" id=0 limit=4 input=4 keep=3 discard=1
>
> A simple way to limit this is to add num_predict to the generation, either in the API call or by adding it to the PARAMETER list in the Modelfile.

I will test this. However, it does not change the fact that this can be used as a denial of service against an Ollama server.

@rick-github commented on GitHub (Jan 12, 2025):

Yes, if an ollama server is going to be deployed with public access, certain hardening efforts will have to be undertaken.

@MarkWard0110 commented on GitHub (Jan 12, 2025):

I don't understand why I did not see these issues with older versions of Ollama but now see them with the newer versions. Hear me out on the agent's prompts: the agent sends the same prompts every run. It is a repeatable client that can be run over and over, playing a finite-state game.
It is basically like a unit test, and now it is failing to complete when it was otherwise working just fine.

@rick-github commented on GitHub (Jan 12, 2025):

There have been changes in the way that context exceeded events are handled (https://github.com/ollama/ollama/issues/7762#issuecomment-2489836910).

@MarkWard0110 commented on GitHub (Jan 12, 2025):

> There have been changes in the way that context exceeded events are handled (#7762 (comment)).

Thank you for taking the time to point out the sliding window. I have seen it in the logs, but now I have a "context" helping me understand it.
num_predict stops and terminates the chat request. The num_predict default -1 has been magic, and I didn't know the logs were telling me information this whole time.
I'm going to do additional testing!

If Ollama had provided more feedback in the response indicating the context was exceeded, I might have known more about this by now. I have been learning more about the input truncation but didn't understand the context window sliding.
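
One client-side workaround is to inspect the metadata Ollama does return in the final response. This is a sketch under the assumption that the response carries the prompt_eval_count, eval_count, and done_reason fields, as recent Ollama versions do; the check itself is not an Ollama feature.

```
# Heuristic check of a completed /api/chat response; not an Ollama feature.
def context_pressure(resp_json, num_ctx):
    prompt = resp_json.get("prompt_eval_count", 0)
    generated = resp_json.get("eval_count", 0)
    if resp_json.get("done_reason") == "length":
        return "generation stopped by num_predict"
    if prompt + generated >= num_ctx:
        return "context window filled; earlier tokens were likely shifted out"
    return "ok"
```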

@rick-github commented on GitHub (Jan 12, 2025):

OK, let me add some filler. Ignore the bits you know. I am not an expert in this; others may offer corrections.

LLMs are basically auto-complete with feedback. The context window encompasses the data used for token completion and for computing feedback (the attention mask). The initial prompt from the API goes through a templating step that adds system instructions, tools (if supported) and sequences that are converted to special tokens. During this process, ollama tries to limit the amount of data to less than the size of the context buffer. It does this by dropping messages during templating, doing its best to preserve the system message. The end result is a block of text that contains the system message and most if not all of the messages (user, assistant, tool).

That text is tokenized and placed in the context buffer, and generation starts. Each generated token is added to the end of the context buffer and the attention mask is computed for the buffer. This continues until an end-of-sequence (eos or eog) token is generated or the token limit is reached. If eos is not generated before the context buffer is full, the buffer is shifted to make room, so the head of the buffer is lost. This is typically where the system message is, so if that part of the buffer contained instructions on how to generate output, the generation process can lose coherence and start to ramble. That leads to a situation where the feedback starts to reinforce a particular sequence and the probability of an eos token decreases. Other heuristics can come into play (repeated token detection, total generated token count) that will stop generation, but it can take some time.

It's important to note that the context window contains both input tokens (the initial prompt) and the output tokens (generated tokens). So if you ask the model to generate a long text and haven't adjusted the context buffer to account for that, you can run off the end and get a poor response.
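
A toy calculation makes the shared budget concrete. The function is illustrative, with the numbers taken from the earlier llama3.1 log:

```
# Input and output tokens share one context buffer, so shifting begins once
# prompt_tokens + generated_tokens exceeds num_ctx.
def tokens_until_shift(num_ctx, prompt_tokens):
    return max(num_ctx - prompt_tokens, 0)

# e.g. num_ctx=4096 with the 1955-token prompt from the log leaves room for
# 2141 generated tokens before the cache starts shifting.
print(tokens_until_shift(4096, 1955))  # -> 2141
```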

@MarkWard0110 commented on GitHub (Jan 12, 2025):

@rick-github's suggestion to use num_predict has resolved the no-response issue I was getting.

@ricky-bug commented on GitHub (Mar 6, 2025):

Here is the log. The last response occurred after I stopped my program.
[GIN] 2025/03/06 - 11:05:50 | 200 | 1.3655709s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:51 | 200 | 1.3723884s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:53 | 200 | 1.3856899s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:54 | 200 | 1.3489615s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:56 | 200 | 1.613231s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:57 | 200 | 1.3488688s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:05:58 | 200 | 1.3488556s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:00 | 200 | 1.3483834s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:01 | 200 | 1.3736995s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:02 | 200 | 1.3468758s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:04 | 200 | 1.343276s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:05 | 200 | 1.3512894s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:07 | 200 | 1.3640879s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:08 | 200 | 1.3514527s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 11:06:10 | 200 | 2.4528513s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/03/06 - 13:19:39 | 200 | 2h13m28s | 127.0.0.1 | POST "/api/chat"

@rick-github commented on GitHub (Mar 6, 2025):

https://github.com/ollama/ollama/issues/8387#issuecomment-2585537628

Reference: github-starred/ollama#67440