[PR #14032] llm: support multiple LoRA adapters and hot-swapping #76771

Open
opened 2026-05-05 09:26:12 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14032
Author: @crasyK
Created: 2/2/2026
Status: 🔄 Open

Base: main ← Head: feature/multi-lora-hotswap


📝 Commits (1)

  • 244cdf2 llm: support multiple LoRA adapters and hot-swapping

📊 Changes

18 files changed (+667 additions, -20 deletions)

View changed files

📝 api/types.go (+58 -0)
📝 docs/api.md (+79 -0)
📝 docs/modelfile.mdx (+14 -0)
📝 llama/llama.cpp/src/llama-context.cpp (+9 -2)
📝 llama/llama.go (+69 -0)
📝 llm/server.go (+72 -1)
📝 parser/parser.go (+7 -1)
📝 parser/parser_test.go (+22 -0)
📝 runner/llamarunner/runner.go (+92 -2)
📝 server/create.go (+2 -3)
📝 server/routes.go (+67 -0)
➕ server/routes_lora_test.go (+131 -0)
📝 server/sched.go (+17 -0)
📝 server/sched_test.go (+7 -0)
📝 types/model/capability.go (+7 -7)
📝 x/imagegen/safetensors/safetensors.go (+0 -1)
📝 x/imagegen/server.go (+11 -0)
📝 x/imagegen/weights.go (+3 -3)

📄 Description

llm: support multiple LoRA adapters and hot-swapping

This PR removes the single-adapter limitation and exposes llama.cpp's LoRA hot-swap API through Ollama, enabling dynamic multi-adapter workflows.

Background

llama.cpp added multi-LoRA support in August 2024 (PRs #8332, #8857),
but Ollama still had a hardcoded single-adapter limit. This PR bridges that gap.

Closes #9548

Closes #7627

Why This Should Be Accepted

  1. Parity, not novelty — llama.cpp already supports this; we're exposing it
  2. Minimal API surface — Only 2 endpoints, mirroring upstream's design
  3. Backward compatible — Single-adapter Modelfiles work unchanged
  4. Community requested — Issues #9548 and #7627 show user demand

Features

  • Multi-Adapter Model Creation: Removes the single-adapter limitation. You can now specify multiple ADAPTER instructions in a Modelfile.
  • Runtime Hot-Swapping: New API endpoints to list loaded adapters and adjust their scales dynamically (0.0 to 1.0) without reloading the model.
  • Stability Improvements: Patched the bundled llama.cpp to significantly increase graph node capacity, resolving crashes when using multiple concurrent adapters.

Usage Guide

1. Create a Multi-Adapter Model

Create a Modelfile with multiple adapters (using Canis.teach adapters as an example):

FROM mistralai/Ministral-3-3B-Instruct-2512.gguf

TEMPLATE """<s>{{- if .System }}[SYSTEM_PROMPT]{{ .System }}[/SYSTEM_PROMPT]{{ end }}
{{- range .Messages }}
{{- if eq .Role "user" }}[INST]{{ .Content }}[/INST]
{{- else if eq .Role "assistant" }}{{ .Content }}</s>
{{- end }}
{{- end }}"""

PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
PARAMETER stop "</s>"
PARAMETER stop "[SYSTEM_PROMPT]"
PARAMETER stop "[/SYSTEM_PROMPT]"

ADAPTER CanisAI/teach-generalist-ministral-3b-r2.gguf
ADAPTER CanisAI/teach-humanities-ministral-3b-r2.gguf
ADAPTER CanisAI/teach-language-ministral-3b-r2.gguf
ADAPTER CanisAI/teach-math-ministral-3b-r2.gguf
ADAPTER CanisAI/teach-science-ministral-3b-r2.gguf

Create the model:

ollama create my-multi-lora -f Modelfile
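
To confirm the model was created, you can query the existing /api/tags endpoint (part of the standard Ollama API, not this PR). A minimal sketch in Go:

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    resp, err := http.Get("http://localhost:11434/api/tags")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // the list should include "my-multi-lora"
}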

2. List Loaded Adapters

Use the new API endpoint to see which adapters are available and their current status:

curl "http://localhost:11434/api/lora-adapters?model=my-multi-lora"

Response:

{
  "model": "my-multi-lora",
  "adapters": [
    {"id": 0, "path": ".../math.gguf", "scale": 0.0},
    {"id": 1, "path": ".../science.gguf", "scale": 0.0}
  ]
}

Note: Adapters load with scale 0.0 (disabled) unless otherwise specified.
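
For reference, a minimal Go sketch of types mirroring the JSON above. The names here are illustrative assumptions; the actual definitions added by this PR live in api/types.go and may differ.

package api

// LoraAdapter mirrors one entry of the "adapters" array above.
type LoraAdapter struct {
    ID    int     `json:"id"`
    Path  string  `json:"path"`
    Scale float32 `json:"scale"`
}

// LoraAdaptersResponse mirrors the full GET response.
type LoraAdaptersResponse struct {
    Model    string        `json:"model"`
    Adapters []LoraAdapter `json:"adapters"`
}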

3. Hot-Swap Adapters

Activate specific adapters by sending a POST request. Each adapter's scale can be set to any value from 0.0 (off) to 1.0 (full strength).

Enable Math Adapter:

curl -X POST http://localhost:11434/api/lora-adapters -d '{
  "model": "my-multi-lora",
  "adapters": [
    {"id": 0, "scale": 1.0}, 
    {"id": 1, "scale": 0.0}
  ]
}'

Now, any subsequent inference requests will use the active adapter(s). This change is instantaneous and does not require model reloading.
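
The same round trip in Go, using only the standard library: enable the math adapter, then issue a generation against the existing /api/generate endpoint. The /api/lora-adapters payload follows this PR; the model name and prompt are illustrative.

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

// post sends a JSON body and returns the raw response text.
func post(url, body string) string {
    resp, err := http.Post(url, "application/json", bytes.NewBufferString(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    out, _ := io.ReadAll(resp.Body)
    return string(out)
}

func main() {
    // Hot-swap: math adapter on, science adapter off.
    post("http://localhost:11434/api/lora-adapters",
        `{"model":"my-multi-lora","adapters":[{"id":0,"scale":1.0},{"id":1,"scale":0.0}]}`)

    // Subsequent generations pick up the new scales without a reload.
    fmt.Println(post("http://localhost:11434/api/generate",
        `{"model":"my-multi-lora","prompt":"What is 7*8?","stream":false}`))
}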

Testing

Manual Testing:

  • Base Model: Ministral-3B-Instruct (Q4_K_M)
  • Adapters: 5 TEACH adapters
  • Memory stable with 5 concurrent adapters

Unit Tests: To be added based on reviewer feedback.

Known Limitations

  • Engine Bypass: Uses the legacy runner when adapters are present because ollama-engine does not yet support the LoRA API. Future work could add native engine support.

Implementation Details

  • server/create.go & parser/parser.go: Logic updated to parse and merge multiple ADAPTER commands correctly.
  • llama.cpp Patch: Increased graph_max_nodes buffer (3x) in llama-context.cpp to handle the expanded computation graph size required by multiple adapters.
  • runner/llamarunner: Added GetLoraAdapters and SetLoraAdapters, mapped to llama_set_adapter_lora (see the sketch after this list).
  • Engine: Bypassed ollama-engine (which lacks complete LoRA support) to use the legacy runner when adapters are present.
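
For illustration, a minimal Go sketch of the hot-swap control flow described above. The adapter type and the setAdapterLora function are stand-ins assumed for this sketch; the real binding lives in llama/llama.go and wraps llama_set_adapter_lora via cgo.

package main

import "fmt"

// adapter is a stand-in for the real loaded-adapter handle.
type adapter struct {
    path  string
    scale float32
}

type server struct {
    adapters []adapter
}

// setAdapterLora stands in for the cgo wrapper around
// llama_set_adapter_lora; here it only records the new scale.
func setAdapterLora(a *adapter, scale float32) {
    a.scale = scale
    fmt.Printf("set %s -> %.2f\n", a.path, scale)
}

// SetLoraAdapters applies new scales by adapter id, skipping ids
// that are out of range.
func (s *server) SetLoraAdapters(scales map[int]float32) {
    for id, scale := range scales {
        if id >= 0 && id < len(s.adapters) {
            setAdapterLora(&s.adapters[id], scale)
        }
    }
}

func main() {
    s := &server{adapters: []adapter{{path: "math.gguf"}, {path: "science.gguf"}}}
    s.SetLoraAdapters(map[int]float32{0: 1.0, 1: 0.0})
}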

Draft Documentation

Modelfile Reference (addition)

ADAPTER

Multiple ADAPTER instructions can now be specified:

ADAPTER ./math.gguf

ADAPTER ./science.gguf

Adapters load with scale 0.0 (disabled) by default.

API Reference (new endpoints)

GET /api/lora-adapters
Returns loaded adapters for a model.

POST /api/lora-adapters
Sets adapter scales at runtime (hot-swap).


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-05-05 09:26:12 -05:00

Reference: github-starred/ollama#76771