[GH-ISSUE #12034] Magistral 24.07 #33751

Closed
opened 2026-04-22 16:43:55 -05:00 by GiteaMirror · 9 comments

Originally created by @KarGeekrie on GitHub (Aug 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12034

Thank you for your great work.

Could you please add the latest version of Magistral? The latest version on HF is 2507 (https://huggingface.co/mistralai/Magistral-Small-2507), but the latest version on Ollama is 2506.

Thank you.

GiteaMirror added the model label 2026-04-22 16:43:55 -05:00

@rick-github commented on GitHub (Aug 22, 2025):

```console
$ ollama run hf.co/unsloth/Magistral-Small-2507-GGUF:Q4_K_M hello
Hello! 😊 How can I help you today?
```

@rick-github commented on GitHub (Aug 22, 2025):

```console
# keep the FROM line (the 2507 weights) from the HF pull, then append the
# template and parameters from the 2506 library model
ollama show --modelfile hf.co/unsloth/Magistral-Small-2507-GGUF:Q4_K_M | grep "^FROM" > Modelfile
ollama show --modelfile magistral:24b-small-2506-q4_K_M | grep -v "^FROM" >> Modelfile
# ollama create reads ./Modelfile by default
ollama create unsloth/magistral:24b-small-2507-q4_K_M
```

```console
$ ollama run unsloth/magistral:24b-small-2507-q4_K_M hello
Thinking...
The user has greeted me with "hello". I should respond in a friendly manner and ask how I can assist
them or what they need help with.
...done thinking.

Hello! How can I assist you today?
```
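
To confirm the splice picked up the 2506 template and parameters, a quick check against the new tag (a sketch, assuming the `ollama create` above succeeded):

```console
# should print the 2506 chat template and sampling parameters, not empty output
ollama show --template unsloth/magistral:24b-small-2507-q4_K_M
ollama show --parameters unsloth/magistral:24b-small-2507-q4_K_M
```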

@NicksonYap commented on GitHub (Aug 23, 2025):

Just to note that Magistral Small 2506 (the previous release, and the one currently in the official Ollama repo)
https://ollama.com/library/magistral:24b-small-2506-q4_K_M

tends to output `\boxed{}` at the end of the response.

Even Mistral's official documentation
https://docs.mistral.ai/capabilities/reasoning/#system-prompt

shows that for 2506 it adds `NEVER use \boxed{} in your response.` to the system prompt (which looks like a workaround), while for 2507 it just uses `Format your response using Markdown, and use LaTeX for any mathematical equations`.
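
A possible local workaround until the library model is updated, reusing the `ollama show --modelfile` approach from above (the tag name `magistral-noboxed` is made up):

```console
# export the library model's Modelfile, then hand-edit the SYSTEM block to add
# "NEVER use \boxed{} in your response." before rebuilding under a new tag
ollama show --modelfile magistral:24b-small-2506-q4_K_M > Modelfile
ollama create magistral-noboxed -f Modelfile
```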

@KarGeekrie commented on GitHub (Aug 23, 2025):

Thank you both for the tip 👍

Yes, you're right, the `\boxed{}` at the end of responses is a major issue when I use Magistral 2506 with Ollama. The system-prompt update `NEVER use \boxed{} in your response.` proposed in the Mistral docs has not been propagated to the HF or Ollama system prompts. Maybe opening an Ollama issue to ask for a system-prompt update would be a good thing?

Do you know if Ollama supports the system prompt below for Magistral 2507?

```json
{
  "role": "system",
  "content": [
    {
      "type": "text",
      "text": "First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.\n\nYour thinking process must follow the template below:"
    },
    {
      "type": "thinking",
      "thinking": [
        {
          "type": "text",
          "text": "Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response to the user."
        }
      ]
    },
    {
      "type": "text",
      "text": "Here, provide a self-contained response."
    }
  ]
}
```
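
Ollama's `/api/chat` takes each message's `content` as a plain string rather than a typed block list, so a structured prompt like the one above has to be flattened into a single text field; a minimal sketch (the model tag here is hypothetical):

```console
curl http://localhost:11434/api/chat -d '{
  "model": "magistral-small-2507",
  "messages": [
    {
      "role": "system",
      "content": "First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input."
    },
    { "role": "user", "content": "hello" }
  ],
  "stream": false
}'
```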

@NicksonYap commented on GitHub (Aug 24, 2025):

### The Modelfile

I've extracted and tested an Ollama Modelfile for `magistral-small-2507`

```
ollama pull hf.co/unsloth/Magistral-Small-2507-GGUF:Q4_K_M
```

will try the official GGUF soon: `hf.co/mistralai/Magistral-Small-2507-GGUF:Q4_K_M`

The Ollama Modelfile tested using Open WebUI (`think` parameter must be turned on) for `hf.co/unsloth/Magistral-Small-2507-GGUF:Q4_K_M`:

```
FROM hf.co/unsloth/Magistral-Small-2507-GGUF:Q4_K_M

SYSTEM """First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.

Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response. Use the same language as the input.[/THINK]Here, provide a self-contained response."""

TEMPLATE """
<s>
{{- range $index, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $index)) 1}}
{{- if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS] {{ json $.Tools }}[/AVAILABLE_TOOLS]
{{- end }} 
{{- if and (eq (len (slice $.Messages $index)) 1) $.System }}[SYSTEM_PROMPT] {{ $.System }}[/SYSTEM_PROMPT]
{{- end }}
[INST] {{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if and $.IsThinkSet (and $last .Thinking) -}}[THINK] {{ .Thinking }}[/THINK]{{ end }}
{{- if .Content }}{{ .Content }}
</s>
{{- else if .ToolCalls }}[TOOL_CALLS] [{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ json .Function.Arguments }}}{{- end }}]
</s>
{{- end }}
{{- else if eq .Role "tool" }}[TOOL_RESULTS] {"content": {{ .Content }}}[/TOOL_RESULTS]{{- end }}
{{- end }}
"""

# ref - explicitly stated: https://huggingface.co/mistralai/Magistral-Small-2507-GGUF
PARAMETER top_p 0.95
PARAMETER temperature 0.7
PARAMETER num_ctx 40960
```
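
A usage sketch, assuming the Modelfile above is saved as `Modelfile` in the current directory (the tag name is arbitrary):

```console
ollama create magistral-small-2507 -f Modelfile
ollama run magistral-small-2507 hello
```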

### References:

ref - prompt file: https://huggingface.co/mistralai/Magistral-Small-2507/raw/main/SYSTEM_PROMPT.txt
ref - prompt of tokenized output using code: https://huggingface.co/mistralai/Magistral-Small-2507#transformers

```
<s>[SYSTEM_PROMPT]First draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.

Your thinking process must follow the template below:[THINK]Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response. Use the same language as the input.[/THINK]Here, provide a self-contained response.[/SYSTEM_PROMPT][INST]Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133.[/INST]
```

ref - sample `TEMPLATE` for Mistral: https://github.com/ollama/ollama/blob/main/docs/template.md#mistral

TEMPLATE """
{{- range $index, $_ := .Messages }}
{{- if eq .Role "user" }}
{{- if and (le (len (slice $.Messages $index)) 2) $.Tools }}[AVAILABLE_TOOLS] {{ json $.Tools }}[/AVAILABLE_TOOLS]
{{- end }}[INST] {{ if and (eq (len (slice $.Messages $index)) 1) $.System }}{{ $.System }}

{{ end }}{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}
{{- if .Content }} {{ .Content }}</s>
{{- else if .ToolCalls }}[TOOL_CALLS] [
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ json .Function.Arguments }}}
{{- end }}]</s>
{{- end }}
{{- else if eq .Role "tool" }}[TOOL_RESULTS] {"content": {{ .Content }}}[/TOOL_RESULTS]
{{- end }}
{{- end }}
"""

ref - control tokens: https://docs.mistral.ai/guides/tokenization/#v3-tekken-tokenizer

```
<unk>
<s>
</s>
[INST]
[/INST]
[AVAILABLE_TOOLS]
[/AVAILABLE_TOOLS]
[TOOL_RESULTS]
[/TOOL_RESULTS]
[TOOL_CALLS]
<pad>
[PREFIX]
[MIDDLE]
[SUFFIX]
```

@NicksonYap commented on GitHub (Aug 24, 2025):

Although I'm getting some "bugs" where `[THINK]` is sometimes not parsed by Ollama.

Not sure if the template has issues.
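
One way to see whether the parse happened is to check where the reasoning lands in the API response (a sketch, assuming an Ollama build with the `think` chat parameter and the hypothetical tag from above):

```console
# if the template is parsed correctly, reasoning should arrive in message.thinking;
# a raw [THINK] tag inside message.content means the parse failed
curl -s http://localhost:11434/api/chat -d '{
  "model": "magistral-small-2507",
  "messages": [{ "role": "user", "content": "hello" }],
  "think": true,
  "stream": false
}' | jq '.message | {thinking, content}'
```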


@goerch commented on GitHub (Jan 7, 2026):

When running the documented [tools example](https://github.com/ollama/ollama/blob/626af2d80973270c4d59b8df7153ac47ad67ed7b/docs/api.md?plain=1#L703) I see:

```
curl http://localhost:11434/api/chat -d '{
  "model": "magistral",
  "messages": [
    {
      "role": "user",
      "content": "what is the weather in tokyo?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for"
            }
          },
          "required": ["city"]
        }
      }
    }
  ],
  "stream": false
}'
{"model":"magistral","created_at":"2026-01-07T08:08:39.412212Z","message":{"role":"assistant","content":"I would call the `get_weather` function with \"Tokyo\" as the argument to retrieve the current weather conditions in Tokyo. Once the data is obtained, it can be formatted and presented to the user in a clear manner.\n\nAssuming the function returns the weather information successfully, here's how the response might look:\n\nThe current weather in Tokyo is cloudy with a temperature of 20°C.\n\nHowever, since I don't have actual access to the `get_weather` function or real-time data, this is an illustrative example. In practice, calling the function would provide up-to-date information.\n\n**Summary:**\nTo find out the weather in Tokyo, I would call the provided `get_weather` function with \"Tokyo\" as the argument. This function would return current weather details which can then be formatted and presented to the user. Without real-time data or an actual API call, a hypothetical response is given for illustration purposes.\n\n**Final Answer:**\nThe weather in Tokyo is currently cloudy with a temperature of 20°C. (Note: This is a placeholder response. In practice, calling `get_weather(\"Tokyo\")` would provide the actual, up-to-date weather information.)\n\n\\boxed{20^\\circ\\text{C}, \\text{Cloudy}}","thinking":"Okay, I need to find out what the weather is like in Tokyo right now. Since I don't have direct access to external tools or APIs, I can't fetch real-time data. However, if this were a scenario where I could use an API or a similar function, I would make a request to get the current weather conditions for Tokyo.\n\nBut since I'm simulating a situation where I have access to such functions, let's assume that I can call a hypothetical \"get_weather\" function with the city name as a parameter. The function might return data like temperature, humidity, precipitation, etc., but since I don't know the exact details or format, I'll just imagine it returns a general description.\n\nSo, if I were to use this function, it would look something like this:\n\n```python\nweather_info = get_weather(city=\"Tokyo\")\n```\n\nBut since I can't actually execute that, I need to think about what the weather in Tokyo is typically like or perhaps make an educated guess based on common knowledge. However, without real-time data, any answer would be speculative.\n\nBut wait, perhaps the user expects me to use a predefined function called `get_weather` which they've provided access to. Looking back at the problem statement, there's indeed mention of such a function:\n\n```json\n{\n  \"type\": \"function\",\n  \"function\": {\n    \"name\": \"get_weather\",\n    \"description\": \"Get the weather in a given city\",\n    \"parameters\": {\n      \"type\": \"object\",\n      \"required\": [\"city\"],\n      \"properties\": {\n        \"city\": {\n          \"type\": \"string\",\n          \"description\": \"The city to get the weather for\"\n        }\n      }\n    }\n  }\n}\n```\n\nGreat! So, I can use this function to fetch the weather in Tokyo. The function requires a single parameter: `city`, which should be set to `\"Tokyo\"`.\n\nLet me draft how I would call this function:\n\n1. Call `get_weather` with `city=\"Tokyo\"`.\n2. Wait for the response.\n3. 
Format the response in a user-friendly way.\n\nAssuming the function returns something like \"Rainy, 70°F\", but since we don't know the exact format, I'll assume it's a JSON object or a string with the weather information.\n\nBut since this is just a simulation and I can't actually call external functions, I need to think about how I would structure the response based on typical function outputs.\n\nFor now, let's say the function returns:\n```json\n{\n  \"temperature\": 20,\n  \"conditions\": \"Cloudy\"\n}\n```\n\nBut since I don't have actual data, this is hypothetical. In a real scenario, I'd make the API call and get live data.\n\nHowever, for the sake of completing the task as if I had access to this function, I'll proceed by saying that calling `get_weather(\"Tokyo\")` returns some weather information, which I can then present to the user.\n"},"done":true,"done_reason":"stop","total_duration":63498910542,"load_duration":6587957459,"prompt_eval_count":239,"prompt_eval_duration":2225156500,"eval_count":887,"eval_duration":54472007584}%                                                                                       
(LlmExperiments) jw@Mac LlmExperiments %
```

So tool calling does not work with Magistral?


@rick-github commented on GitHub (Jan 7, 2026):

The model needs to be encouraged to use a tool.

```console
$ ollama-run.py magistral what is the weather in tokyo? --tools get_weather --system "Use a tool if supplied"

calling get_weather({'city': 'tokyo', 'country': 'japan', 'unit': 'metric'})

The weather in Tokyo, Japan is partly sunny with a temperature of 7°C and it feels like 7°C. The forecast
for today is highs around 11°C and lows around 0°C. There is light wind coming from the north at 4 km/h.

For tomorrow, the weather is expected to be passing clouds with highs around 11°C and lows around 6°C.
```

@rick-github commented on GitHub (Jan 7, 2026):

```console
$ curl -s http://localhost:11434/api/chat -d '{
  "model": "magistral",
  "messages": [
    {
      "role": "system",
      "content": "Use a tool if supplied"
    },
    {
      "role": "user",
      "content": "what is the weather in tokyo?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the weather in a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for"
            }
          },
          "required": ["city"]
        }
      }
    }
  ],
  "stream": false
}' | jq
{
  "model": "magistral",
  "created_at": "2026-01-07T09:42:16.964561411Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "id": "call_gxf3vs4l",
        "function": {
          "index": 0,
          "name": "get_weather",
          "arguments": {
            "city": "tokyo"
          }
        }
      }
    ]
  },
  "done": true,
  "done_reason": "stop",
  "total_duration": 482693988,
  "load_duration": 320381578,
  "prompt_eval_count": 77,
  "prompt_eval_duration": 10922513,
  "eval_count": 14,
  "eval_duration": 142864037
}
```
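
For completeness, the tool's output then goes back to the model as a `tool` role message; a sketch with a made-up weather payload:

```console
curl -s http://localhost:11434/api/chat -d '{
  "model": "magistral",
  "messages": [
    { "role": "system", "content": "Use a tool if supplied" },
    { "role": "user", "content": "what is the weather in tokyo?" },
    { "role": "assistant", "content": "", "tool_calls": [
      { "function": { "name": "get_weather", "arguments": { "city": "tokyo" } } }
    ] },
    { "role": "tool", "content": "{\"temperature\": 7, \"conditions\": \"partly sunny\"}" }
  ],
  "stream": false
}' | jq -r '.message.content'
```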