[PR #3964] [MERGED] fix gemma, command-r layer weights #11341

Closed
opened 2026-04-12 23:28:18 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/3964
Author: @mxyng
Created: 4/26/2024
Status: Merged
Merged: 4/26/2024
Merged by: @mxyng

Base: main ← Head: mxyng/weights


📝 Commits (1)

  • f81f308 fix gemma, command-r layer weights

📊 Changes

1 file changed (+8 additions, -4 deletions)

View changed files

📝 llm/memory.go (+8 -4)

📄 Description

Some models (gemma, command-r) do not have an output tensor; instead, the token_embd tensor is offloaded in its place.


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-12 23:28:18 -05:00

Reference: github-starred/ollama#11341