[PR #7313] [CLOSED] Add support for RWKV #22921

Closed
opened 2026-04-19 16:39:15 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/7313
Author: @MollySophia
Created: 10/22/2024
Status: Closed

Base: main ← Head: fix-rwkv


📝 Commits (2)

  • 39bc199 Add support for RWKV
  • 0d47450 Update fix-rwkv-support patch

📊 Changes

4 files changed (+137 additions, -0 deletions)


➕ llm/patches/0009-fix-rwkv-support.patch (+118 -0)
📝 template/index.json (+4 -0)
➕ template/rwkv-world.gotmpl (+6 -0)
➕ template/rwkv-world.json (+9 -0)

📄 Description

Changes in this PR:

  • Added a patch on llama.cpp that backports the upstream llama.cpp commits 10433e8, 4ff7fe1, and 11d4705. These fix the problems that RWKV GGUF models cannot be loaded and that conversations cannot be stopped correctly. I guess this patch can be removed after ollama syncs the llama.cpp submodule next time.
  • Added a simple chat template for RWKV models (a rough sketch of such a template is shown after the description).

I'm not sure whether this is the correct way to fix these problems. Thanks in advance for any suggestions and reviews!

This closes #7223
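
For readers unfamiliar with ollama's template files, here is a minimal sketch of what a Go template along the lines of template/rwkv-world.gotmpl could look like. The actual file contents are not shown in this mirror; the "User:"/"Assistant:" turn markers and the use of the .System, .Prompt, and .Response variables are assumptions based on the RWKV World chat format and ollama's legacy template variables, not text copied from the PR.

```gotmpl
{{/* Hypothetical RWKV World chat template; a sketch, not the file from the PR. */}}
{{ if .System }}{{ .System }}

{{ end }}User: {{ .Prompt }}

Assistant: {{ .Response }}
```

A template like this is typically paired with stop sequences (for example "User:") so that generation ends before the model begins writing the next user turn, which lines up with the PR's note about conversations not stopping correctly.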


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-19 16:39:15 -05:00

Reference: github-starred/ollama#22921