[GH-ISSUE #13092] logprobs does not contain tool call information #70723

Closed
opened 2026-05-04 22:45:21 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @MarkWard0110 on GitHub (Nov 15, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13092

Originally assigned to: @jmorganca, @ParthSareen on GitHub.

What is the issue?

I don't know if this is a bug or a feature request.

I have just started exploring the logprobs feature and have noticed that it contains thinking and content but I don't see tool calls.

For example, modify one of the ollama-python tool-calling examples and enable logprobs:


import random
from typing import Iterable
from ollama import Client
from ollama._types import ChatResponse


def get_weather(city: str) -> str:
  """
  Get the current temperature for a city

  Args:
      city (str): The name of the city

  Returns:
      str: The current temperature
  """
  temperatures = list(range(-10, 35))

  temp = random.choice(temperatures)

  return f'The temperature in {city} is {temp}°C'


def get_weather_conditions(city: str) -> str:
  """
  Get the weather conditions for a city

  Args:
      city (str): The name of the city

  Returns:
      str: The current weather conditions
  """
  conditions = ['sunny', 'cloudy', 'rainy', 'snowy', 'foggy']
  return random.choice(conditions)


def print_logprobs(logprobs: Iterable[dict], label: str) -> None:
  print(f'\n{label}:')
  if not logprobs:
    print('  (no logprobs returned)')
    return
  for entry in logprobs:
    token = entry.get('token', '')
    logprob = entry.get('logprob')
    print(f'  token={token!r:<12} logprob={logprob:.3f}')
    for alt in entry.get('top_logprobs', []):
      if alt['token'] != token:
        print(f'    alt -> {alt["token"]!r:<12} ({alt["logprob"]:.3f})')


available_tools = {'get_weather': get_weather, 'get_weather_conditions': get_weather_conditions}

messages = [{'role': 'user', 'content': 'What is the weather like in London? What are the conditions in Toronto?'}]



client = Client(
  # Ollama Turbo
  host="http://localhost:11434",
)
model = 'gpt-oss:20b'
# gpt-oss can call tools while "thinking"
# a loop is needed to call the tools and get the results
running_agent = True
turn_count = 0
while running_agent:
  turn_count += 1
  response: ChatResponse = client.chat(
    model=model,
    messages=messages,
    tools=[get_weather, get_weather_conditions],
    logprobs=True,
    top_logprobs=5,
  )

  print('################################################')

  print(f"(response] -------------------------------- (Turn {turn_count})\n")
  if response.message.content:
    print('Content: ')
    print(response.message.content + '\n')
  if response.message.thinking:
    print('Thinking: ')
    print(response.message.thinking + '\n')

  messages.append(response.message)

  print("(tools] --------------------------------\n")
  if response.message.tool_calls:
    for tool_call in response.message.tool_calls:
      function_to_call = available_tools.get(tool_call.function.name)
      if function_to_call:
        result = function_to_call(**tool_call.function.arguments)
        print("[tool execute]\ncall name:\n", tool_call.function.name, "\narguments:\n", tool_call.function.arguments, "result:\n", result + "\n")
        messages.append({'role': 'tool', 'content': result, 'tool_name': tool_call.function.name})
      else:
        print(f'Tool {tool_call.function.name} not found')
        messages.append({'role': 'tool', 'content': f'Tool {tool_call.function.name} not found', 'tool_name': tool_call.function.name})
  else:
    # no more tool calls, we can stop the loop
    running_agent = False

  print("(logprobs] --------------------------------\n")
  print_logprobs(response.get('logprobs', []), 'chat logprobs')

When I run it the logprobs does not contain the tool call.

(response] -------------------------------- (Turn 1)

Thinking: 
The user asks: "What is the weather like in London? What are the conditions in Toronto?" We need to call get_weather for London, and get_weather_conditions for Toronto. Likely we should provide both. The user didn't specify which temperature unit. We'll use get_weather for London. And get_weather_conditions for Toronto. Let's do that.

(tools] --------------------------------

[tool execute]
call name:
 get_weather 
arguments:
 {'city': 'London'} result:
 The temperature in London is -7°C

(logprobs] --------------------------------


chat logprobs:
  token='The'        logprob=-0.846
    alt -> 'We'         (-1.156)
    alt -> 'User'       (-1.405)
    alt -> 'Need'       (-4.950)
    alt -> 'I'          (-6.232)
  token=' user'      logprob=-0.001
    alt -> ' question'  (-8.254)
    alt -> " user's"    (-8.620)
    alt -> ' task'      (-9.554)
    alt -> ' prompt'    (-9.588)
  token=' asks'      logprob=-0.201
    alt -> ' wants'     (-2.618)
    alt -> ' asked'     (-3.085)
    alt -> ' is'        (-3.276)
    alt -> ':'          (-4.017)
  token=':'          logprob=-0.722
    alt -> ' two'       (-0.988)
    alt -> ' for'       (-2.324)
    alt -> ' about'     (-3.943)
    alt -> ' "'         (-4.078)
  token=' "'         logprob=-0.021
    alt -> ' What'      (-4.620)
    alt -> ' weather'   (-5.443)
    alt -> ' “'         (-6.224)
    alt -> ' what'      (-6.266)
  token='What'       logprob=-0.000
    alt -> "What's"     (-11.070)
    alt -> 'what'       (-11.650)
    alt -> ' What'      (-12.809)
    alt -> 'Weather'    (-14.045)
  token=' is'        logprob=-0.000
    alt -> ' are'       (-12.209)
    alt -> ' Is'        (-13.360)
    alt -> ' does'      (-13.596)
    alt -> '’s'         (-14.168)
  token=' the'       logprob=-0.000
    alt -> ' weather'   (-9.903)
    alt -> ' ...'       (-13.487)
    alt -> ' '          (-14.947)
    alt -> ' your'      (-14.980)
  token=' weather'   logprob=-0.000
    alt -> ' current'   (-16.945)
    alt -> ' temperature' (-17.378)
    alt -> ' Weather'   (-17.917)
    alt -> 'weather'    (-17.964)
  token=' like'      logprob=-0.000
    alt -> ' in'        (-11.463)
    alt -> '-like'      (-13.506)
    alt -> ' likely'    (-15.606)
    alt -> ' ...'       (-15.923)
  token=' in'        logprob=-0.000
    alt -> ' ('         (-14.371)
    alt -> '"'          (-14.663)
    alt -> '?'          (-14.838)
    alt -> ' ['         (-15.070)
  token=' London'    logprob=-0.000
    alt -> " London's"  (-14.862)
    alt -> ' london'    (-16.875)
    alt -> 'London'     (-17.147)
    alt -> ' ['         (-17.446)
  token='?'          logprob=-0.046
    alt -> '?"'         (-3.097)
    alt -> '?",'        (-9.678)
    alt -> '?".'        (-11.265)
    alt -> '"'          (-12.764)
  token=' What'      logprob=-0.000
    alt -> ' And'       (-13.393)
    alt -> ' '          (-14.160)
    alt -> ' what'      (-14.335)
    alt -> ' ...'       (-15.305)
  token=' are'       logprob=-0.000
    alt -> ' about'     (-15.922)
    alt -> ' the'       (-16.224)
    alt -> ' were'      (-16.243)
    alt -> 'are'        (-16.546)
  token=' the'       logprob=-0.000
    alt -> ' conditions' (-12.831)
    alt -> ' ('         (-14.783)
    alt -> ' ...'       (-15.191)
    alt -> ' its'       (-15.491)
  token=' conditions' logprob=-0.000
    alt -> 'conditions' (-13.146)
    alt -> '.conditions' (-15.275)
    alt -> ' condition' (-15.330)
    alt -> ' CONDITIONS' (-17.096)
  token=' in'        logprob=-0.000
    alt -> '?'          (-14.124)
    alt -> ' ('         (-14.266)
    alt -> ' for'       (-15.836)
    alt -> '...'        (-15.912)
  token=' Toronto'   logprob=-0.000
    alt -> 'Toronto'    (-15.043)
    alt -> ' Tokyo'     (-16.086)
    alt -> ' London'    (-18.319)
    alt -> ' ...'       (-18.376)
  token='?"'         logprob=-0.039
    alt -> '?"\n\n'     (-3.389)
    alt -> '?".'        (-5.605)
    alt -> '?"\n'       (-8.413)
    alt -> '?'          (-8.545)
  token=' We'        logprob=-0.609
    alt -> ' They'      (-1.490)
    alt -> ' The'       (-2.543)
    alt -> ' So'        (-2.824)
    alt -> ' This'      (-3.787)
  token=' need'      logprob=-0.794
    alt -> ' have'      (-0.994)
    alt -> ' should'    (-2.279)
    alt -> ' can'       (-3.212)
    alt -> ' must'      (-4.272)
  token=' to'        logprob=-0.009
    alt -> ' weather'   (-5.996)
    alt -> ' two'       (-6.338)
    alt -> ' current'   (-6.454)
    alt -> ' the'       (-6.600)
  token=' call'      logprob=-1.841
    alt -> ' use'       (-1.144)
    alt -> ' provide'   (-1.471)
    alt -> ' fetch'     (-2.701)
    alt -> ' respond'   (-2.701)
  token=' get'       logprob=-1.995
    alt -> ' the'       (-0.809)
    alt -> ' functions' (-1.766)
    alt -> ' appropriate' (-2.144)
    alt -> ' two'       (-2.895)
  token='_weather'   logprob=-0.000
    alt -> ' weather'   (-9.419)
    alt -> '_temperature' (-10.624)
    alt -> 'Weather'    (-11.635)
    alt -> '_current'   (-12.605)
  token=' for'       logprob=-0.021
    alt -> ' and'       (-4.961)
    alt -> ' function'  (-5.748)
    alt -> ' or'        (-5.920)
    alt -> ' ('         (-6.050)
  token=' London'    logprob=-0.004
    alt -> ' "'         (-6.489)
    alt -> ' city'      (-7.188)
    alt -> " London's"  (-7.622)
    alt -> ' the'       (-8.570)
  token=','          logprob=-1.155
    alt -> ' and'       (-0.719)
    alt -> ' to'        (-2.879)
    alt -> ' ('         (-2.934)
    alt -> '?'          (-3.233)
  token=' and'       logprob=-0.252
    alt -> ' get'       (-1.655)
    alt -> ' presumably' (-5.192)
    alt -> ' maybe'     (-5.192)
    alt -> ' likely'    (-5.435)
  token=' get'       logprob=-0.003
    alt -> ' maybe'     (-7.037)
    alt -> ' call'      (-7.243)
    alt -> ' possibly'  (-8.369)
    alt -> ' then'      (-8.612)
  token='_weather'   logprob=-0.000
    alt -> ' weather'   (-9.473)
    alt -> ' Weather'   (-13.018)
    alt -> 'Weather'    (-13.413)
    alt -> '_conditions' (-13.681)
  token='_conditions' logprob=-0.000
    alt -> ' for'       (-10.433)
    alt -> '_condition' (-10.654)
    alt -> ' conditions' (-11.640)
    alt -> '_cond'      (-12.087)
  token=' for'       logprob=-0.011
    alt -> ' ('         (-5.199)
    alt -> ' or'        (-5.807)
    alt -> '?'          (-7.120)
    alt -> ' maybe'     (-7.212)
  token=' Toronto'   logprob=-0.000
    alt -> ' maybe'     (-10.298)
    alt -> '...'        (-11.411)
    alt -> '?'          (-11.664)
    alt -> ' presumably' (-11.701)
  token='.'          logprob=-0.419
    alt -> '?'          (-1.613)
    alt -> ' ('         (-3.178)
    alt -> ','          (-3.234)
    alt -> '.\n\n'      (-3.878)
  token=' Lik'       logprob=-3.753
    alt -> ' The'       (-1.266)
    alt -> ' We'        (-2.241)
    alt -> ' According' (-2.602)
    alt -> ' But'       (-2.909)
    alt -> ' Use'       (-2.932)
  token='ely'        logprob=-0.000
    alt -> 'elihood'    (-9.822)
    alt -> 'ert'        (-12.566)
    alt -> 'ewise'      (-14.725)
    alt -> 'ew'         (-14.982)
  token=' we'        logprob=-1.200
    alt -> ' get'       (-1.913)
    alt -> ' the'       (-2.671)
    alt -> ' separate'  (-2.702)
    alt -> ' need'      (-2.746)
  token=' should'    logprob=-1.319
    alt -> ' need'      (-0.753)
    alt -> ' can'       (-2.474)
    alt -> ' call'      (-2.997)
    alt -> ' use'       (-3.385)
  token=' provide'   logprob=-2.374
    alt -> ' call'      (-0.875)
    alt -> ' use'       (-1.495)
    alt -> ' do'        (-3.542)
    alt -> ' return'    (-3.612)
  token=' both'      logprob=-1.059
    alt -> ' the'       (-2.041)
    alt -> ' separate'  (-2.330)
    alt -> ' a'         (-2.582)
    alt -> ' two'       (-2.622)
  token='.'          logprob=-0.525
    alt -> ' results'   (-2.018)
    alt -> ' answers'   (-2.759)
    alt -> ' responses' (-2.830)
    alt -> '.\n\n'      (-3.300)
  token=' The'       logprob=-1.290
    alt -> ' Use'       (-1.819)
    alt -> ' We'        (-2.274)
    alt -> ' According' (-2.357)
    alt -> " Let's"     (-2.627)
  token=' user'      logprob=-1.253
    alt -> ' instructions' (-2.068)
    alt -> ' tool'      (-2.515)
    alt -> ' system'    (-2.710)
    alt -> ' question'  (-2.938)
  token=" didn't"    logprob=-0.950
    alt -> ' wants'     (-1.489)
    alt -> ' might'     (-2.733)
    alt -> ' asked'     (-2.786)
    alt -> ' likely'    (-2.833)
  token=' specify'   logprob=-0.073
    alt -> ' ask'       (-3.282)
    alt -> ' request'   (-4.970)
    alt -> ' mention'   (-5.159)
    alt -> ' explicitly' (-5.249)
  token=' which'     logprob=-1.101
    alt -> ' whether'   (-2.422)
    alt -> ' if'        (-2.731)
    alt -> ' temperature' (-2.925)
    alt -> ' format'    (-2.995)
  token=' temperature' logprob=-4.370
    alt -> ' function'  (-0.861)
    alt -> ' type'      (-2.255)
    alt -> ' functions' (-2.331)
    alt -> ' city'      (-2.524)
    alt -> ' London'    (-3.170)
  token=' unit'      logprob=-1.041
    alt -> ' or'        (-1.796)
    alt -> ' units'     (-2.070)
    alt -> ' format'    (-2.651)
    alt -> ' type'      (-2.676)
  token='.'          logprob=-1.002
    alt -> ','          (-1.494)
    alt -> ';'          (-1.748)
    alt -> ' or'        (-1.762)
    alt -> ' but'       (-3.600)
  token=" We'll"     logprob=-0.920
    alt -> ' We'        (-1.614)
    alt -> ' The'       (-2.714)
    alt -> ' Just'      (-3.027)
    alt -> ' Probably'  (-3.043)
  token=' use'       logprob=-2.571
    alt -> ' just'      (-1.101)
    alt -> ' call'      (-1.734)
    alt -> ' assume'    (-1.919)
    alt -> ' provide'   (-3.111)
  token=' get'       logprob=-2.313
    alt -> ' default'   (-1.120)
    alt -> ' the'       (-1.128)
    alt -> ' function'  (-3.225)
    alt -> ' standard'  (-3.332)
  token='_weather'   logprob=-0.000
    alt -> '_temperature' (-8.856)
    alt -> ' weather'   (-9.985)
    alt -> 'Weather'    (-12.044)
    alt -> '-weather'   (-12.769)
  token=' for'       logprob=-0.959
    alt -> '.'          (-1.982)
    alt -> ' to'        (-2.238)
    alt -> ' ('         (-2.409)
    alt -> ' which'     (-2.792)
  token=' London'    logprob=-0.274
    alt -> ' temperature' (-2.295)
    alt -> ' city'      (-3.202)
    alt -> ' current'   (-3.473)
    alt -> ' the'       (-4.292)
  token='.'          logprob=-0.989
    alt -> ','          (-1.813)
    alt -> ' to'        (-2.035)
    alt -> ' ('         (-2.158)
    alt -> ':'          (-2.777)
  token=' And'       logprob=-2.638
    alt -> ' Then'      (-1.353)
    alt -> ' For'       (-1.425)
    alt -> ' The'       (-2.543)
    alt -> " We'll"     (-2.896)
  token=' get'       logprob=-0.058
    alt -> ' for'       (-3.548)
    alt -> ' conditions' (-5.364)
    alt -> ' use'       (-5.432)
    alt -> ' then'      (-5.448)
  token='_weather'   logprob=-0.000
    alt -> ' weather'   (-8.305)
    alt -> 'Weather'    (-11.579)
    alt -> '_conditions' (-11.634)
    alt -> '_temperature' (-11.796)
  token='_conditions' logprob=-0.000
    alt -> '_condition' (-9.139)
    alt -> ' for'       (-10.025)
    alt -> ' conditions' (-10.461)
    alt -> '_cond'      (-11.623)
  token=' for'       logprob=-0.015
    alt -> ' returns'   (-5.905)
    alt -> ' ('         (-6.384)
    alt -> ' Toronto'   (-6.437)
    alt -> ' likely'    (-6.664)
  token=' Toronto'   logprob=-0.000
    alt -> ' conditions' (-10.965)
    alt -> ' both'      (-11.997)
    alt -> ' the'       (-12.319)
    alt -> ' "'         (-12.353)
  token='.'          logprob=-0.160
    alt -> '.\n\n'      (-2.018)
    alt -> ','          (-5.705)
    alt -> ' ('         (-5.899)
    alt -> ' to'        (-6.054)
  token=" Let's"     logprob=-2.127
    alt -> " We'll"     (-1.234)
    alt -> ' Then'      (-2.077)
    alt -> ' We'        (-2.339)
    alt -> ' Use'       (-2.560)
  token=' do'        logprob=-1.261
    alt -> ' call'      (-0.829)
    alt -> ' produce'   (-2.937)
    alt -> ' use'       (-3.098)
    alt -> ' proceed'   (-3.669)
  token=' that'      logprob=-0.428
    alt -> ' it'        (-2.506)
    alt -> ' two'       (-2.904)
    alt -> ' calls'     (-3.020)
    alt -> ' function'  (-3.207)
  token='.'          logprob=-0.154
    alt -> '.\n\n'      (-2.075)
    alt -> ' via'       (-5.444)
    alt -> ' with'      (-5.967)
    alt -> ' in'        (-6.036)

I'm also wondering why it does not split the logprobs output among "thinking", "content", and "tool_call".
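For reference, each `logprob` in the dump above is a natural-log probability, so `math.exp` recovers the plain probability. A minimal standalone sketch using values copied from the output:

```python
import math

def to_probability(logprob: float) -> float:
    """Convert a natural-log probability back into a plain probability."""
    return math.exp(logprob)

# First token in the dump: 'The' with logprob=-0.846 -> roughly 0.43
print(f"{to_probability(-0.846):.3f}")
# A logprob of -0.000 means the token was essentially certain
print(f"{to_probability(-0.000):.3f}")
```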

Relevant log output


OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.12.11

GiteaMirror added the bug label 2026-05-04 22:45:21 -05:00
Author
Owner

@MarkWard0110 commented on GitHub (Nov 15, 2025):

If my assisted assessment of what is happening is correct, the details are lost in the parser: it only surfaces what it parses, and its tool handling is different, which might be why we don't see tool tokens in the logprobs.

Perhaps Ollama should just provide the runner's logprobs output?

I don't know whether the logprobs implementation is modeled on OpenAI's. I am guessing OpenAI limits what is returned because it would expose information about their private models. Ollama is different, though, and could perhaps offer two configuration modes: a hosted mode, where a provider does not want to expose this level of data in responses, and a development mode, where having this information improves the development experience, especially when working with open models.

I'm hoping this information from logprobs will help investigate an LLM when using tools.
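One client-side way to check this theory (without changing the server) would be to scan the returned logprob entries for the template's special tokens. This is only a sketch: the marker names are gpt-oss specific and assumed here for illustration, and the entries use the `token`/`logprob` dict shape shown in the output above.

```python
# Hypothetical helper: look for template/tool special tokens in a logprobs list.
# The marker names below are gpt-oss specific and assumed for illustration.
SPECIAL_MARKERS = ('<|channel|>', '<|message|>', '<|constrain|>')

def find_special_tokens(logprobs):
    """Return (index, token) pairs whose token looks like a template marker."""
    return [(i, entry['token']) for i, entry in enumerate(logprobs)
            if any(marker in entry['token'] for marker in SPECIAL_MARKERS)]

# With the stock server this comes back empty, which matches the symptom in
# this issue: the tool-call portion of the stream never reaches logprobs.
sample = [{'token': 'The', 'logprob': -0.846}, {'token': ' user', 'logprob': -0.001}]
print(find_special_tokens(sample))  # []
```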

Author
Owner

@MarkWard0110 commented on GitHub (Nov 16, 2025):

Testing out the idea, I hacked Ollama to return everything:

################################################
(response] -------------------------------- (Turn 1)

Thinking:
The user wants two queries: weather in London (likely temperature) and conditions in Toronto. We can use get_weather for London, get_weather_conditions for Toronto. We'll call both functions.

(tools] --------------------------------

[tool execute]
call name:
 get_weather
arguments:
 {'city': 'London'} result:
 The temperature in London is 26°C

(logprobs] --------------------------------


chat logprobs:
  token='<|channel|>' logprob=-0.000
    alt -> '<|constrain|>' (-25.067)
    alt -> ' '          (-25.811)
    alt -> ' ('         (-27.414)
    alt -> 'comment'    (-27.902)
  token='analysis'   logprob=-0.000
    alt -> 'comment'    (-15.985)
    alt -> ' analysis'  (-26.875)
    alt -> 'Analysis'   (-28.176)
    alt -> 'analytic'   (-28.221)
  token='<|message|>' logprob=-0.000
    alt -> ' to'        (-15.360)
    alt -> '<|channel|>' (-16.278)
    alt -> ' The'       (-17.932)
    alt -> ':'          (-18.607)
  token='The'        logprob=-0.874
    alt -> 'We'         (-1.054)
    alt -> 'User'       (-1.503)
    alt -> 'Need'       (-4.758)
    alt -> 'I'          (-6.347)
  token=' user'      logprob=-0.001
    alt -> ' question'  (-8.119)
    alt -> " user's"    (-8.639)
    alt -> ' task'      (-9.284)
    alt -> ' prompt'    (-9.493)
  token=' wants'     logprob=-2.496
    alt -> ' asks'      (-0.202)
    alt -> ' asked'     (-3.286)
    alt -> ' is'        (-3.307)
    alt -> ':'          (-3.988)
  token=' two'       logprob=-2.630
    alt -> ' weather'   (-0.598)
    alt -> ' the'       (-1.851)
    alt -> ' current'   (-2.271)
    alt -> ':'          (-3.095)
  token=' queries'   logprob=-3.045
    alt -> ' pieces'    (-0.460)
    alt -> ' things'    (-1.665)
    alt -> ' separate'  (-2.960)
    alt -> ' different' (-3.679)
  token=':'          logprob=-0.100
    alt -> ':\n\n'      (-2.983)
    alt -> '.'          (-3.472)
    alt -> ':\n'        (-4.866)
    alt -> ' about'     (-5.832)
  token=' weather'   logprob=-0.439
    alt -> ' "'         (-1.980)
    alt -> ' current'   (-2.236)
    alt -> ' the'       (-3.597)
    alt -> ' what'      (-3.653)
  token=' in'        logprob=-0.070
    alt -> ' for'       (-3.577)
    alt -> ' ('         (-4.248)
    alt -> ' like'      (-4.315)
    alt -> ' and'       (-5.784)
  token=' London'    logprob=-0.000
    alt -> ' london'    (-11.080)
    alt -> ' "'         (-11.723)
    alt -> " London's"  (-12.183)
    alt -> 'London'     (-13.181)
  token=' ('         logprob=-1.251
    alt -> ','          (-1.008)
    alt -> ' and'       (-1.099)
    alt -> ';'          (-5.285)
    alt -> '?'          (-5.802)
  token='likely'     logprob=-0.554
    alt -> 'probably'   (-2.085)
    alt -> 'pres'       (-2.627)
    alt -> 'current'    (-2.718)
    alt -> 'temperature' (-2.746)
  token=' temperature' logprob=-0.537
    alt -> ' current'   (-1.249)
    alt -> ' general'   (-3.666)
    alt -> ' get'       (-4.276)
    alt -> ' the'       (-4.315)
  token=')'          logprob=-0.297
    alt -> '),'         (-2.082)
    alt -> '?)'         (-2.973)
    alt -> ' or'        (-3.581)
    alt -> '?),'        (-3.784)
  token=' and'       logprob=-0.001
    alt -> ' or'        (-8.145)
    alt -> ' using'     (-8.330)
    alt -> ' &'         (-9.280)
    alt -> ' via'       (-9.302)
  token=' conditions' logprob=-0.542
    alt -> ' weather'   (-0.879)
    alt -> ' the'       (-6.785)
    alt -> ' condition' (-7.434)
    alt -> ' "'         (-8.202)
  token=' in'        logprob=-0.006
    alt -> ' ('         (-5.241)
    alt -> ' for'       (-7.915)
    alt -> ' of'        (-10.534)
    alt -> '/'          (-11.204)
  token=' Toronto'   logprob=-0.000
    alt -> 'Toronto'    (-14.540)
    alt -> ' Tor'       (-15.215)
    alt -> ' TOR'       (-16.086)
    alt -> ' Tokyo'     (-16.154)
  token='.'          logprob=-0.487
    alt -> ' ('         (-0.967)
    alt -> '.\n\n'      (-5.419)
    alt -> ','          (-7.423)
    alt -> ' likely'    (-9.287)
  token=' We'        logprob=-0.761
    alt -> ' The'       (-1.579)
    alt -> ' According' (-2.438)
    alt -> " There's"   (-2.820)
    alt -> ' They'      (-3.042)
  token=' can'       logprob=-1.340
    alt -> ' have'      (-0.517)
    alt -> ' need'      (-2.740)
    alt -> ' should'    (-2.850)
    alt -> ' must'      (-4.844)
  token=' use'       logprob=-0.210
    alt -> ' call'      (-1.869)
    alt -> ' respond'   (-4.928)
    alt -> ' provide'   (-5.220)
    alt -> ' answer'    (-5.738)
  token=' get'       logprob=-1.912
    alt -> ' the'       (-0.815)
    alt -> ' functions' (-1.510)
    alt -> ' two'       (-2.648)
    alt -> ' function'  (-3.469)
  token='_weather'   logprob=-0.000
    alt -> '_temperature' (-9.982)
    alt -> ' weather'   (-11.683)
    alt -> 'Weather'    (-12.965)
    alt -> '_current'   (-14.121)
  token=' for'       logprob=-0.087
    alt -> ' and'       (-2.740)
    alt -> ' to'        (-5.322)
    alt -> ' function'  (-5.491)
    alt -> ' ('         (-5.627)
  token=' London'    logprob=-0.075
    alt -> ' temperature' (-3.090)
    alt -> " London's"  (-4.577)
    alt -> ' the'       (-5.344)
    alt -> ' first'     (-5.571)
  token=','          logprob=-0.958
    alt -> ' and'       (-0.863)
    alt -> ' ('         (-2.584)
    alt -> ' to'        (-2.748)
    alt -> '.'          (-3.929)
  token=' get'       logprob=-0.672
    alt -> ' and'       (-0.772)
    alt -> ' maybe'     (-5.476)
    alt -> ' which'     (-5.662)
    alt -> ' returning' (-5.718)
  token='_weather'   logprob=-0.000
    alt -> ' weather'   (-12.158)
    alt -> '_temperature' (-14.686)
    alt -> 'Weather'    (-15.731)
    alt -> ' Weather'   (-15.893)
  token='_conditions' logprob=-0.000
    alt -> '_condition' (-12.244)
    alt -> ' conditions' (-12.920)
    alt -> '_cond'      (-14.815)
    alt -> '_'          (-15.242)
  token=' for'       logprob=-0.000
    alt -> ' or'        (-8.482)
    alt -> ' ('         (-9.018)
    alt -> ' Toronto'   (-10.270)
    alt -> ' maybe'     (-10.421)
  token=' Toronto'   logprob=-0.000
    alt -> 'Toronto'    (-13.016)
    alt -> ' Tokyo'     (-14.254)
    alt -> ' both'      (-14.262)
    alt -> ' maybe'     (-14.575)
  token='.'          logprob=-0.077
    alt -> '.\n\n'      (-3.653)
    alt -> '?'          (-4.124)
    alt -> ','          (-4.227)
    alt -> ' or'        (-4.756)
  token=" We'll"     logprob=-2.018
    alt -> ' The'       (-1.924)
    alt -> ' We'        (-2.186)
    alt -> ' Use'       (-2.386)
    alt -> ' Probably'  (-2.398)
  token=' call'      logprob=-0.483
    alt -> ' need'      (-1.795)
    alt -> ' produce'   (-3.159)
    alt -> ' output'    (-3.523)
    alt -> ' respond'   (-3.764)
  token=' both'      logprob=-0.943
    alt -> ' functions' (-1.512)
    alt -> ' get'       (-1.919)
    alt -> ' the'       (-2.240)
    alt -> ' each'      (-3.126)
  token=' functions' logprob=-0.713
    alt -> '.'          (-0.829)
    alt -> '.\n\n'      (-3.405)
    alt -> ' via'       (-4.795)
    alt -> ' tools'     (-5.174)
  token='.'          logprob=-0.126
    alt -> '.\n\n'      (-2.757)
    alt -> ' and'       (-4.406)
    alt -> ' via'       (-4.652)
    alt -> ' accordingly' (-5.156)
  token='<|end|>'    logprob=-0.212
    alt -> ' Then'      (-3.456)
    alt -> ' Use'       (-3.637)
    alt -> " We'll"     (-3.781)
    alt -> ' The'       (-3.933)
  token='<|start|>'  logprob=-0.000
    alt -> '<|end|>'    (-24.487)
    alt -> 'assistant'  (-28.938)
    alt -> '\n\n'       (-29.180)
    alt -> '<div'       (-29.819)
  token='assistant'  logprob=-0.000
    alt -> 'Assistant'  (-21.501)
    alt -> 'assist'     (-23.223)
    alt -> ' assistant' (-23.236)
    alt -> '助手'         (-23.399)
  token='<|channel|>' logprob=-0.000
    alt -> ' to'        (-13.858)
    alt -> '<|constrain|>' (-19.771)
    alt -> 'comment'    (-21.158)
    alt -> '<|message|>' (-23.316)
  token='comment'    logprob=-0.000
    alt -> 'analysis'   (-15.901)
    alt -> 'final'      (-18.296)
    alt -> 'comments'   (-21.137)
    alt -> 'response'   (-21.226)
  token='ary'        logprob=-0.000
    alt -> 'ator'       (-26.468)
    alt -> 'ariat'      (-28.311)
    alt -> 'ry'         (-29.736)
    alt -> 'atory'      (-30.614)
  token=' to'        logprob=-0.002
    alt -> ' '          (-6.094)
    alt -> ' -'         (-10.887)
    alt -> '—to'        (-12.065)
    alt -> '<|message|>' (-12.695)
  token='='          logprob=-0.000
    alt -> '=function'  (-15.847)
    alt -> ' ='         (-19.186)
    alt -> '=f'         (-19.950)
    alt -> "='"         (-23.460)
  token='functions'  logprob=-0.000
    alt -> 'repo'       (-23.392)
    alt -> ' "'         (-23.971)
    alt -> 'python'     (-25.655)
    alt -> 'tools'      (-26.056)
  token='.get'       logprob=-0.000
    alt -> ':get'       (-13.000)
    alt -> '|get'       (-14.592)
    alt -> '.run'       (-15.401)
    alt -> '.send'      (-15.887)
  token='_weather'   logprob=-0.000
    alt -> '_temperature' (-12.854)
    alt -> 'Weather'    (-14.941)
    alt -> ' weather'   (-16.947)
    alt -> 'weather'    (-17.202)
  token=' '          logprob=-0.000
    alt -> ' code'      (-9.633)
    alt -> '<|constrain|>' (-9.895)
    alt -> ' ...'       (-10.815)
    alt -> '<|channel|>' (-11.103)
  token='<|constrain|>' logprob=-0.000
    alt -> '<|channel|>' (-18.171)
    alt -> '1'          (-20.417)
    alt -> ' arguments' (-21.424)
    alt -> ' '          (-21.643)
  token='json'       logprob=-0.001
    alt -> 'JSON'       (-7.972)
    alt -> 'response'   (-8.863)
    alt -> 'name'       (-11.921)
    alt -> 'arguments'  (-13.002)
  token='<|message|>' logprob=-0.000
    alt -> '<|channel|>' (-8.210)
    alt -> '<|constrain|>' (-12.685)
    alt -> '_output'    (-12.995)
    alt -> ' code'      (-13.335)
  token='{"'         logprob=-0.000
    alt -> '{\n'        (-8.440)
    alt -> '{'          (-9.649)
    alt -> ' {"'        (-15.690)
    alt -> '{}'         (-16.101)
  token='city'       logprob=-0.000
    alt -> 'location'   (-15.067)
    alt -> 'country'    (-15.634)
    alt -> 'name'       (-16.701)
    alt -> ' city'      (-17.518)
  token='":"'        logprob=-0.000
    alt -> '":'         (-7.923)
    alt -> '":"\''      (-16.406)
    alt -> '":["'       (-16.525)
    alt -> '"'          (-16.565)
  token='London'     logprob=-0.000
    alt -> ' London'    (-14.386)
    alt -> 'L'          (-17.357)
    alt -> 'l'          (-19.315)
    alt -> 'Paris'      (-19.461)
  token='"}'         logprob=-0.000
    alt -> '"}\n'       (-12.389)
    alt -> '"}\n\n'     (-14.323)
    alt -> '","'        (-14.773)
    alt -> '"'          (-16.011)
################################################
(response] -------------------------------- (Turn 2)

(tools] --------------------------------

[tool execute]
call name:
 get_weather_conditions
arguments:
 {'city': 'Toronto'} result:
 rainy

(logprobs] --------------------------------


chat logprobs:
  token='<|channel|>' logprob=-0.000
    alt -> ' to'        (-17.133)
    alt -> 'comment'    (-21.403)
    alt -> '<|constrain|>' (-23.135)
    alt -> '<|message|>' (-24.155)
  token='comment'    logprob=-0.014
    alt -> 'analysis'   (-4.306)
    alt -> 'callback'   (-15.911)
    alt -> 'response'   (-16.230)
    alt -> 'reply'      (-16.337)
  token='ary'        logprob=-0.000
    alt -> 'ery'        (-23.446)
    alt -> 'ry'         (-25.221)
    alt -> 'atory'      (-26.039)
    alt -> 'arya'       (-26.404)
  token=' to'        logprob=-0.000
    alt -> ' '          (-12.891)
    alt -> '<|message|>' (-12.973)
    alt -> ' for'       (-18.321)
    alt -> 'ary'        (-19.074)
  token='='          logprob=-0.000
    alt -> ' ='         (-24.367)
    alt -> '={'         (-28.253)
    alt -> '=get'       (-28.465)
    alt -> '=f'         (-28.618)
  token='functions'  logprob=-0.000
    alt -> 'Functions'  (-19.528)
    alt -> 'func'       (-20.281)
    alt -> ' functions' (-21.906)
    alt -> 'function'   (-22.466)
  token='.get'       logprob=-0.000
    alt -> '.Get'       (-19.769)
    alt -> '.'          (-19.972)
    alt -> ':get'       (-20.817)
    alt -> ' .'         (-21.177)
  token='_weather'   logprob=-0.000
    alt -> '_temperature' (-14.599)
    alt -> 'Weather'    (-17.160)
    alt -> ' weather'   (-17.230)
    alt -> '_work'      (-17.632)
  token='_conditions' logprob=-0.000
    alt -> '_condition' (-13.980)
    alt -> ' Conditions' (-15.455)
    alt -> '?'          (-15.915)
    alt -> 'Conditions' (-15.955)
  token=' '          logprob=-0.000
    alt -> ' json'      (-11.435)
    alt -> '<|constrain|>' (-11.720)
    alt -> '  '         (-14.957)
    alt -> 'json'       (-16.566)
  token='<|constrain|>' logprob=-0.000
    alt -> '<|channel|>' (-19.139)
    alt -> '<|call|>'   (-26.652)
    alt -> '<|message|>' (-29.388)
    alt -> 'ilden'      (-30.988)
  token='json'       logprob=-0.000
    alt -> 'JSON'       (-12.895)
    alt -> 'js'         (-13.112)
    alt -> 'ex'         (-14.760)
    alt -> 'int'        (-15.159)
  token='<|message|>' logprob=-0.000
    alt -> '<|channel|>' (-11.507)
    alt -> ' '          (-11.831)
    alt -> ' to'        (-14.637)
    alt -> '<|constrain|>' (-14.702)
  token='{"'         logprob=-0.000
    alt -> '{'          (-21.218)
    alt -> ' {"'        (-22.231)
    alt -> '{\\"'       (-22.333)
    alt -> '{\n'        (-22.802)
  token='city'       logprob=-0.000
    alt -> 'country'    (-19.217)
    alt -> 'City'       (-19.229)
    alt -> 'location'   (-19.817)
    alt -> 'name'       (-20.147)
  token='":"'        logprob=-0.000
    alt -> '":'         (-12.599)
    alt -> '"'          (-18.133)
    alt -> '":"+'       (-18.987)
    alt -> '","'        (-20.024)
  token='Toronto'    logprob=-0.000
    alt -> 'Tor'        (-17.030)
    alt -> ' Toronto'   (-17.192)
    alt -> 'TOR'        (-18.466)
    alt -> 'Tokyo'      (-18.577)
  token='"}'         logprob=-0.000
    alt -> '","'        (-14.549)
    alt -> '"}}'        (-16.166)
    alt -> "'}"         (-17.552)
    alt -> ' ('         (-18.935)
################################################
(response] -------------------------------- (Turn 3)

Content:
In London, it’s currently 26 °C.
In Toronto, the weather conditions are rainy.

(tools] --------------------------------

(logprobs] --------------------------------


chat logprobs:
  token='<|channel|>' logprob=-0.000
    alt -> '<|constrain|>' (-25.109)
    alt -> '<|message|>' (-27.560)
    alt -> 'comment'    (-29.345)
    alt -> 'final'      (-30.339)
  token='final'      logprob=-0.000
    alt -> 'comment'    (-19.106)
    alt -> 'analysis'   (-21.098)
    alt -> 'Final'      (-24.593)
    alt -> ' final'     (-24.998)
  token='<|message|>' logprob=-0.000
    alt -> '<|channel|>' (-13.922)
    alt -> ' '          (-14.157)
    alt -> ' I'         (-16.623)
    alt -> ' ('         (-16.997)
  token='In'         logprob=-2.295
    alt -> '**'         (-0.973)
    alt -> 'London'     (-1.484)
    alt -> 'Here'       (-2.053)
    alt -> '-'          (-2.180)
  token=' London'    logprob=-0.127
    alt -> ' **'        (-2.141)
    alt -> '\u202f'     (-7.299)
    alt -> ' the'       (-7.731)
    alt -> ' *'         (-8.031)
  token=','          logprob=-0.308
    alt -> ' the'       (-1.625)
    alt -> ' it'        (-2.901)
    alt -> " it's"      (-5.577)
    alt -> ' today'     (-5.788)
  token=' it'        logprob=-0.997
    alt -> ' the'       (-0.550)
    alt -> " it's"      (-2.944)
    alt -> ' you'       (-7.539)
    alt -> ' today'     (-7.900)
  token='’s'         logprob=-0.001
    alt -> ' is'        (-6.692)
    alt -> ' looks'     (-10.328)
    alt -> ' feels'     (-10.700)
    alt -> '\u202f'     (-11.118)
  token=' currently' logprob=-1.200
    alt -> ' a'         (-0.997)
    alt -> ' '          (-2.126)
    alt -> ' warm'      (-2.305)
    alt -> ' quite'     (-3.775)
  token=' '          logprob=-1.156
    alt -> ' **'        (-1.132)
    alt -> ' about'     (-2.040)
    alt -> ' warm'      (-2.584)
    alt -> ' around'    (-2.644)
  token='26'         logprob=-0.001
    alt -> '\u202f'     (-7.128)
    alt -> ' **'        (-8.392)
    alt -> '\xa0'       (-11.821)
    alt -> ' *'         (-12.476)
  token='\u202f'     logprob=-0.001
    alt -> '°C'         (-6.633)
    alt -> '\xa0'       (-9.306)
    alt -> '\u2009'     (-12.680)
    alt -> '&nbsp'      (-12.682)
  token='°C'         logprob=-0.000
    alt -> '°'          (-8.403)
    alt -> '℃'          (-11.869)
    alt -> 'º'          (-13.251)
    alt -> '�'          (-14.792)
  token='.'          logprob=-0.179
    alt -> '.\n\n'      (-2.042)
    alt -> '—'          (-4.778)
    alt -> ' –'         (-5.000)
    alt -> ','          (-5.048)
  token='  \n'       logprob=-0.076
    alt -> ' In'        (-3.305)
    alt -> '  \n\n'     (-3.670)
    alt -> ' \n\n'      (-5.942)
    alt -> '   \n'      (-5.953)
  token='In'         logprob=-0.006
    alt -> 'The'        (-5.851)
    alt -> 'Toronto'    (-6.482)
    alt -> 'Over'       (-6.785)
    alt -> 'Across'     (-8.881)
  token=' Toronto'   logprob=-0.000
    alt -> '\u202f'     (-16.036)
    alt -> 'Toronto'    (-17.344)
    alt -> ' nearby'    (-18.308)
    alt -> ' Tokyo'     (-18.689)
  token=','          logprob=-0.017
    alt -> ' the'       (-4.162)
    alt -> ' right'     (-7.886)
    alt -> ' it'        (-7.913)
    alt -> ' you'       (-8.791)
  token=' the'       logprob=-0.046
    alt -> ' it'        (-3.216)
    alt -> ' you'       (-5.584)
    alt -> ' today'     (-8.181)
    alt -> " it's"      (-8.259)
  token=' weather'   logprob=-0.513
    alt -> ' conditions' (-1.008)
    alt -> ' sky'       (-4.119)
    alt -> ' forecast'  (-4.181)
    alt -> ' skies'     (-6.220)
  token=' conditions' logprob=-0.653
    alt -> ' is'        (-0.895)
    alt -> ' condition' (-2.813)
    alt -> ' today'     (-4.625)
    alt -> ' right'     (-7.593)
  token=' are'       logprob=-0.008
    alt -> ' right'     (-5.591)
    alt -> ' today'     (-5.625)
    alt -> ' show'      (-8.752)
    alt -> ' now'       (-9.949)
  token=' rainy'     logprob=-0.066
    alt -> ' **'        (-3.124)
    alt -> ' *'         (-4.327)
    alt -> '\u202f'     (-5.568)
    alt -> ' “'         (-7.342)
  token='.'          logprob=-0.000
    alt -> ' right'     (-10.035)
    alt -> '.\\'        (-10.073)
    alt -> '.\n'        (-10.221)
    alt -> '.\n\n'      (-10.884)
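The dump above is handy but verbose; to compare confidence across turns it helps to parse the printed lines back into `(token, logprob)` pairs. A rough sketch, matching only the `token=... logprob=...` line format this hacked build prints (`parse_dump` is a hypothetical helper):

```python
import re

# Matches lines like:   token='<|channel|>' logprob=-0.000
# Tokens are printed with repr(), so they may use single or double quotes.
TOKEN_LINE = re.compile(
    r"token=(?P<tok>'.*?'|\".*?\")\s+logprob=(?P<lp>-?\d+\.\d+)"
)


def parse_dump(text):
    """Extract (token, logprob) pairs from the hacked debug output."""
    entries = []
    for m in TOKEN_LINE.finditer(text):
        # Strip the surrounding quote characters from the repr'd token.
        entries.append((m.group('tok')[1:-1], float(m.group('lp'))))
    return entries


sample = """
  token='<|channel|>' logprob=-0.000
  token='analysis'   logprob=-0.000
  token='The'        logprob=-0.874
"""
pairs = parse_dump(sample)
```

Feeding each turn's section through this would give a per-turn total logprob, including the special tokens (`<|channel|>`, `<|constrain|>`, ...) that the parser currently swallows.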
Reference: github-starred/ollama#70723