[PR #871] [CLOSED] fix: Add support for legacy CPU (no AVX2/FMA) on Linux #36242

Closed
opened 2026-04-22 20:56:09 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/871
Author: @reynaldichernando
Created: 10/21/2023
Status: Closed

Base: main ← Head: fix-support-legacy-cpu-linux


📝 Commits (2)

  • d4e27d0 Init add support for legacy cpu on linux
  • 01d3461 fix indentation

📊 Changes

2 files changed (+27 additions, -0 deletions)

View changed files

📝 llm/llama.cpp/generate_linux.go (+17 -0)
📝 llm/llama.go (+10 -0)

📄 Description

Fixes the illegal instruction error when running on a CPU without AVX2 or FMA support, by building a second set of ollama runners with -DLLAMA_AVX2=off -DLLAMA_FMA=off.

By default, the cmake build for ggml/gguf sets these options to ON. Setting them to OFF allows older CPUs that lack these instructions to run llama.cpp.
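The two-build approach described above can be sketched roughly as follows. This is an illustrative configure script, not the PR's actual generate script; the build-directory names and variable names are made up for the example:

```shell
# Default runner: AVX2 and FMA left at cmake's default (ON).
CMAKE_DEFS_DEFAULT=""
# Legacy-CPU runner: disable AVX2 and FMA so older CPUs can run it.
CMAKE_DEFS_NOAVX2="-DLLAMA_AVX2=off -DLLAMA_FMA=off"

# Print the two configure invocations (echoed here instead of run).
echo "cmake -S llm/llama.cpp -B build/cpu ${CMAKE_DEFS_DEFAULT}"
echo "cmake -S llm/llama.cpp -B build/cpu_noavx2 ${CMAKE_DEFS_NOAVX2}"
```

Both runners are built ahead of time; at startup, the one matching the host CPU's capabilities is selected.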

fixes #644

Some sources for AVX2 and FMA compatibility:

  • CPUs_with_AVX2: https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2
  • CPUs_with_FMA3: https://en.wikipedia.org/wiki/FMA_instruction_set#CPUs_with_FMA3
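For the runner-selection side, the loader has to decide at startup whether the host CPU supports AVX2 and FMA. A minimal sketch of such a check in Go, assuming the flags line from /proc/cpuinfo is available as a string (the function name hasAVX2AndFMA is illustrative, not the PR's actual code in llm/llama.go):

```go
package main

import (
	"fmt"
	"strings"
)

// hasAVX2AndFMA reports whether a /proc/cpuinfo "flags" line lists
// both the avx2 and fma features. If either is missing, the runner
// built with -DLLAMA_AVX2=off -DLLAMA_FMA=off should be used.
func hasAVX2AndFMA(cpuinfoFlags string) bool {
	have := make(map[string]bool)
	for _, f := range strings.Fields(cpuinfoFlags) {
		have[f] = true
	}
	return have["avx2"] && have["fma"]
}

func main() {
	// Example flag lines: one modern CPU, one legacy CPU.
	fmt.Println(hasAVX2AndFMA("fpu sse sse2 avx avx2 fma"))
	fmt.Println(hasAVX2AndFMA("fpu sse sse2"))
}
```

In practice the flags line would be read from /proc/cpuinfo on Linux; parsing it into a set avoids false positives from substring matches (e.g. "avx" inside "avx2").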
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-22 20:56:09 -05:00

Reference: github-starred/ollama#36242