[PR #14368] [MERGED] model: improvements to LFM2 architectures #14642

Closed
opened 2026-04-13 00:59:47 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14368
Author: @jmorganca
Created: 2/23/2026
Status: Merged
Merged: 2/23/2026
Merged by: @jmorganca

Base: main ← Head: jmorganca/lfm


📝 Commits (10+)

4e9d388 fix repeat messages causing crash
317cfb5 wip
a8c1ca2 vision support
7469d65 models: improved support for LiquidAI LFM2 and LFM2.5 models
906b046 cleanup
a52a739 lint
0c82c90 minimize changes
97b2a92 lint
764c68f lfm2: fix dense layer-type fallback and MoE routing scale parity
8aa5491 renderer/parser fixes

📊 Changes

21 files changed (+3278 additions, -1335 deletions)

View changed files

📝 convert/convert.go (+3 -1)
📝 convert/convert_lfm2.go (+152 -17)
➕ convert/convert_lfm2_test.go (+271 -0)
➕ convert/convert_lfm2_vl.go (+417 -0)
➕ convert/convert_lfm2_vl_test.go (+249 -0)
📝 convert/tokenizer.go (+7 -2)
📝 convert/tokenizer_test.go (+78 -0)
📝 fs/ggml/ggml.go (+2 -0)
📝 model/models/lfm2/cache.go (+20 -386)
📝 model/models/lfm2/cache_test.go (+17 -419)
📝 model/models/lfm2/model.go (+502 -8)
➕ model/models/lfm2/model_multimodal_test.go (+160 -0)
➕ model/models/lfm2/model_vision.go (+184 -0)
➕ model/models/lfm2/process_image.go (+260 -0)
➕ model/models/lfm2/process_image_test.go (+105 -0)
📝 model/parsers/lfm2.go (+260 -74)
📝 model/parsers/lfm2_test.go (+182 -52)
📝 model/parsers/parsers_test.go (+2 -0)
📝 model/renderers/lfm2.go (+239 -74)
📝 model/renderers/lfm2_test.go (+166 -300)

...and 1 more file

📄 Description

This includes improvements to the LiquidAI LFM2 and LFM2.5 architectures, including support for vision models. It uses the new shared recurrent KV cache code introduced by #14356.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:59:47 -05:00

Reference: github-starred/ollama#14642