[PR #7667] [MERGED] Support Multiple LoRa Adapters, Closes #7627 #43728

Closed
opened 2026-04-24 23:19:18 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7667
Author: @ItzCrazyKns
Created: 11/14/2024
Status: Merged
Merged: 11/27/2024
Merged by: @jessegross

Base: main ← Head: main


📝 Commits (3)

  • 8608300 feat(llm-server): handle multiple LoRas
  • d72d903 feat(runner): handle multiple lora flags
  • 700534b feat(runner): move utils down, closes #7627

📊 Changes

2 files changed (+26 additions, -14 deletions)

View changed files

📝 llama/runner/runner.go (+23 -8)
📝 llm/server.go (+3 -6)

📄 Description

Hi, I've updated the Llama server to handle multiple LoRA adapters. Previously, the server supported only one LoRA adapter, which limited users who needed to apply several adapters for advanced fine-tuning.

Changes Made:

  • Command-Line Parsing:

    • Updated to accept multiple --lora flags.
    • Introduced multiLPath to collect multiple LoRA adapter paths.
  • Model Loading:

    • Modified the loadModel function to loop through and apply each specified LoRA adapter.
    • Removed the restriction in llm/server.go that only one adapter can be used at a time.
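The two changes above can be sketched together in Go: a repeatable command-line flag is modeled as a type implementing the standard library's flag.Value interface, and model loading becomes a loop over the collected paths. The multiLPath name and the --lora flag come from the PR description; the flag-set name and the placeholder "apply adapter" step are illustrative stand-ins, not the actual ollama runner code.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// multiLPath accumulates every occurrence of the --lora flag.
// Implementing flag.Value (String + Set) lets flag.Var register
// a flag that may be passed more than once.
type multiLPath []string

func (m *multiLPath) String() string { return strings.Join(*m, ", ") }

func (m *multiLPath) Set(value string) error {
	*m = append(*m, value)
	return nil
}

func main() {
	var loraPaths multiLPath
	fs := flag.NewFlagSet("runner", flag.ExitOnError)
	fs.Var(&loraPaths, "lora", "path to a LoRA adapter (may be repeated)")

	// Simulate launching the runner with two --lora flags.
	fs.Parse([]string{"--lora", "adapter1.gguf", "--lora", "adapter2.gguf"})

	// Mirror the loadModel change: apply each adapter in order
	// instead of assuming a single path.
	for _, path := range loraPaths {
		fmt.Println("applying adapter:", path)
	}
}
```

Each Set call appends rather than overwrites, which is what removes the one-adapter limit at the parsing layer; the loop then applies adapters in the order they were given on the command line.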

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 23:19:18 -05:00
Reference: github-starred/ollama#43728