[GH-ISSUE #13637] Model Request: LFM2.5 & LFM2.5-VL #71029

Open
opened 2026-05-04 23:48:12 -05:00 by GiteaMirror · 10 comments

Originally created by @chllei on GitHub (Jan 7, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13637

LiquidAI has just released the LFM2.5 and LFM2.5-VL series, which are excellent models suitable for local deployment.

GiteaMirror added the model label 2026-05-04 23:48:12 -05:00

@rick-github commented on GitHub (Jan 7, 2026):

The non-vision model is supported, with the caveat that 0.13.5 can't load the model because of a missing tensor, so use 0.13.4, or 0.13.6 when it is released.

```console
$ ollama -v
ollama version is 0.13.4
$ ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q4_K_M hello
Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
```

The vision model has a similar problem with a missing tensor, so may be supported in 0.13.6.

<!-- gh-comment-id:3717262684 -->

@chllei commented on GitHub (Jan 7, 2026):

> The non-vision model is supported, with the caveat that 0.13.5 can't load the model because of a missing tensor, so 0.13.4 or 0.13.6 when it is released.
>
> ```console
> $ ollama -v
> ollama version is 0.13.4
> $ ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q4_K_M hello
> Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
> ```
>
> The vision model has a similar problem with a missing tensor, so may be supported in 0.13.6.

Thanks. I find it surprising that this model, with fewer than 2 billion parameters, can handle both text and images at once. I believe models that support both vision and tool calling are well suited for everyday local use. Looking forward to version 0.13.6.

<!-- gh-comment-id:3717279493 -->

@duckida commented on GitHub (Jan 9, 2026):

When will 0.13.6 be released?

<!-- gh-comment-id:3729847930 -->

@maternion commented on GitHub (Jan 11, 2026):

@duckida 0.14.0-rc2 has been released, but I don't think it includes the lfm2 update; maybe it will land in the next rc or major release.

<!-- gh-comment-id:3734342642 -->

@lazycodeman commented on GitHub (Jan 15, 2026):

> @duckida 0.14.0-rc2 has been released but I don't think they have included the lfm2 update in this one, maybe they will do it in the next rc or major release.

Even in 0.14.1, the error still persists.

<!-- gh-comment-id:3752433777 -->

@maternion commented on GitHub (Jan 15, 2026):

> > @duckida 0.14.0-rc2 has been released but I don't think they have included the lfm2 update in this one, maybe they will do it in the next rc or major release.
>
> Even in 0.14.1, the error still persists.

They haven't merged the fixes sadly.

<!-- gh-comment-id:3752867117 -->

@chllei commented on GitHub (Jan 15, 2026):

> > @duckida 0.14.0-rc2 has been released but I don't think they have included the lfm2 update in this one, maybe they will do it in the next rc or major release.
>
> Even in 0.14.1, the error still persists.

I hope a tool + vision version of the model will be supported soon; LFM2.5-VL is currently the best model for running locally on everyday laptops.

<!-- gh-comment-id:3753266601 -->

@logxdx commented on GitHub (Jan 22, 2026):

These models don't support tool calling with ollama.
Has anyone gotten them to work with tool calling?

<!-- gh-comment-id:3782628335 -->

@joe-speedboat commented on GitHub (Jan 22, 2026):

Just installed the current version; these are the results.
Impressive, we are getting there: pretty fast and accurate.

```bash
chris@spark:~$ ollama -v
ollama version is 0.14.3

chris@spark:~$ ollama ls | grep -i lfm2
hf.co/LiquidAI/LFM2.5-VL-1.6B-GGUF:BF16            18d35a907932    3.2 GB    8 minutes ago
lfm2.5-thinking:latest                             95bd9d45385f    731 MB    21 hours ago
hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q4_K_M    b71ac99619af    730 MB    6 days ago

chris@spark:~$ for llm in $(ollama ls | grep -i lfm2 | awk '{print $1}' ); do echo "----- $llm ------" ; ollama run $llm "choose a number between 1 and 10"; done
----- hf.co/LiquidAI/LFM2.5-VL-1.6B-GGUF:BF16 ------
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
llama_model_load_from_file_impl: failed to load model
----- lfm2.5-thinking:latest ------
Thinking...
Okay, let's see. The user wants me to choose a number between 1 and 10. They specified "between 1 and 10," but I need to make sure I pick one. [...removed...] . Alternatively, since the user might expect a random selection, I'll go with 7. Let me just pick 7. So my final answer is 7.
...done thinking.

The number chosen is **7**.

Let me know if you'd like a different choice! 😊

----- hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q4_K_M ------
myNumber: 7
```
<!-- gh-comment-id:3782672689 -->

@chllei commented on GitHub (Apr 23, 2026):

I successfully designed a chat template that supports tool calls. The Modelfile:

```txt
FROM <path_to_gguf_file>

TEMPLATE """<|startoftext|>{{- if or .System .Tools }}<|im_start|>system
You are a helpful and precise assistant.

{{- if .System }}
The following system configuration is your fundamental guideline.
- Instruction Compliance: Follow the user's instructions.
- Response Style: Be clear and direct. Use simple language. Always respond in Simplified Chinese. Answer what you know. Say "I don't know" if you're uncertain.
- Expression: Keep it concise and practical.
- Math: Use simple LaTeX math notation when needed.
- Tool Use: Call a tool ONLY when necessary. For web search tasks, query in English by default, and respond in Chinese.

{{ .System }}{{ end }}

{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within the following list:
[TOOL_DEFINITIONS]
{{- range .Tools }}
{"type": "function", "function": {
    "name": "{{ .Function.Name }}",
    "description": "{{ .Function.Description }}",
    "parameters": {{ .Function.Parameters }}
}}
{{- end }}
[/TOOL_DEFINITIONS]

For each function call, return a JSON object with "name" and "arguments" within <tool_call></tool_call> tags.
Example:
<tool_call>
{"name": "get_weather", "arguments": {"location": "Beijing"}}
</tool_call>

{{- end }}<|im_end|>
{{ end }}{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}<|im_start|>assistant
{{- if .Content }}{{ .Content }}{{- end }}
{{- if .ToolCalls }}<tool_call>
{{- range .ToolCalls }}
{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{ .Response }}"""

PARAMETER top_k 50
PARAMETER repeat_penalty 1.05
PARAMETER temperature 0.1

PARAMETER num_ctx 128000
```
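For reference, the `<tool_call>` blocks this template instructs the model to emit can be extracted on the client side with a small parser. This is a minimal sketch, assuming the model follows the template's output format; the `get_weather` sample output is illustrative, not part of Ollama's API:

```python
import json
import re

def parse_tool_calls(text: str) -> list[dict]:
    """Extract the JSON objects emitted between <tool_call></tool_call> tags."""
    blocks = re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL)
    return [json.loads(b) for b in blocks]

# Hypothetical model output following the template's example format.
sample = '<tool_call>\n{"name": "get_weather", "arguments": {"location": "Beijing"}}\n</tool_call>'
print(parse_tool_calls(sample))
# → [{'name': 'get_weather', 'arguments': {'location': 'Beijing'}}]
```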
<!-- gh-comment-id:4301652622 -->
Reference: github-starred/ollama#71029