[GH-ISSUE #13565] add LFM2-2.6B-Exp from LiquidAI #34692

Open
opened 2026-04-22 18:27:15 -05:00 by GiteaMirror · 6 comments
Owner

Originally created by @scorpion7slayer on GitHub (Dec 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/13565

https://huggingface.co/LiquidAI/LFM2-2.6B-Exp

GiteaMirror added the model label 2026-04-22 18:27:15 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 26, 2025):

```console
$ ollama run hf.co/LiquidAI/LFM2-2.6B-Exp-GGUF:Q4_K_M
>>> hello
Hello! How can I assist you today?
```

Note there is a [bug](https://github.com/ollama/ollama/issues/13529) in the llama.cpp backend that breaks LFM2 models, so these models currently can't be run with ollama 0.13.5; an earlier version must be used. The example above was run with 0.13.4.

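As a stopgap until the llama.cpp bug is fixed, an older release can be installed on Linux by pinning the version in the official install script (a sketch based on the `OLLAMA_VERSION` variable documented in ollama's FAQ; the version number matches the workaround above):

```console
$ # Install a specific older release instead of the latest
$ curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh
$ ollama --version
$ ollama run hf.co/LiquidAI/LFM2-2.6B-Exp-GGUF:Q4_K_M
```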

@scorpion7slayer commented on GitHub (Dec 26, 2025):

oh okay but I think it can still be added to the ollama site because it has good benchmarks


@duraiyuva commented on GitHub (Dec 31, 2025):

Not able to run on Android aarch64. Version 0.13.5. Below is output from Android. Any immediate fix?

```console
$ ollama run hf.co/liquidAI/LFM2-2.6B-GGUF
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
```

@rick-github commented on GitHub (Dec 31, 2025):

> Note there is a bug in the llama.cpp backend that breaks LFM2 models and currently these models can't be run with ollama 0.13.5, an earlier version must be used. The above example was 0.13.4.


@duraiyuva commented on GitHub (Dec 31, 2025):

Yes, it works in ollama 0.13.4. It is strange: for Android phones, the Termux repo contains only the latest ollama, 0.13.5. It seems the only possibility is to work with proot-distro. The benchmarks created interest in the LFM models, and it would be great if an immediate fix were made.

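The proot-distro route mentioned above can be sketched as follows (assumptions: a Debian guest, and that the pinned install-script approach works inside the guest; not verified on Android in this thread):

```console
$ # Inside Termux: set up a Linux guest, since the Termux repo only ships the latest ollama
$ pkg install proot-distro
$ proot-distro install debian
$ proot-distro login debian
# apt update && apt install -y curl
# curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh
```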

@Jobians commented on GitHub (Feb 22, 2026):

@duraiyuva any update on Termux?


Reference: github-starred/ollama#34692