[GH-ISSUE #6826] Massive performance regression on 0.1.32 -> GGML_CUDA_FORCE_MMQ: (SET TO NO, after 0.1.31) #50825

Closed
opened 2026-04-28 17:12:58 -05:00 by GiteaMirror · 5 comments

Originally created by @jsa2 on GitHub (Sep 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6826

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

For reference:

https://github.com/ollama/ollama/issues/3938

The issue might actually be a result of disabling the following mode:

Older versions (0.1.31):
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: YES

Newer versions (after 0.1.31):
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no

I tried to force this via environment variables, but it did not help. Is there a way to configure this via Ollama?

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.31 -> 0.3.10

GiteaMirror added the performance, needs more info, bug, nvidia labels 2026-04-28 17:12:59 -05:00

@dhiltgen commented on GitHub (Sep 16, 2024):

@jsa2 I believe your setup is a mobile GeForce RTX 4070.

I don't have the 4070 Mobile, but I tried building with this flag set and ran a few different models on a few different CUDA GPUs, and the performance impact varies. For orca-mini, on an RTX 4060 it gets slower by ~9%, on a GeForce GT 1030 it gets slower by ~22%, and on a GeForce GTX 750 Ti it gets faster by 42%. For llama3.1, on the GPUs I tried, I didn't see any significant performance change.

We're working on build system improvements, so the configuration may change in the future, but you can set OLLAMA_CUSTOM_CUDA_DEFS when you run `go generate ./...` to pass custom flags, which would let you enable this for your setup.

See https://github.com/ollama/ollama/blob/main/llm/generate/gen_linux.sh#L188-L193
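A rough sketch of a from-source build with that flag enabled, based on the gen_linux.sh path linked above (the exact steps may differ between releases; this assumes Go, CMake, and the CUDA toolkit are installed):

```bash
# Build Ollama from source with MMQ forced on.
git clone https://github.com/ollama/ollama.git
cd ollama

# OLLAMA_CUSTOM_CUDA_DEFS is appended to the CMake defines in llm/generate/gen_linux.sh,
# so extra -D flags given here end up on the ggml CUDA build.
OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_FORCE_MMQ=on" go generate ./...
go build .

# Run the locally built binary; the startup log should now report GGML_CUDA_FORCE_MMQ as enabled.
./ollama serve
```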


@jsa2 commented on GitHub (Sep 17, 2024):

Thanks for the help. I tried today on an A6000 and the issue is the same for embedding, so I believe it is specific to the task but affects most NVIDIA architectures.

To clarify, would this work for passing the CUDA variables into the build? `OLLAMA_CUSTOM_CUDA_DEFS="-DGGML_CUDA_FORCE_MMQ=on" go generate ./...`

I will continue testing

For reference: the setup is still pretty much the same as in the original:

```python
import os

from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma  # unused in this script
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import OllamaEmbeddings
from langchain_text_splitters import CharacterTextSplitter
from langchain_community import embeddings  # unused in this script
import torch

# Report whether PyTorch can see a CUDA device (informational only; the Ollama
# server runs in its own process and manages the GPU itself).
cuda_available = torch.cuda.is_available()
print(f"CUDA Available: {cuda_available}")
if cuda_available:
    print(f"Using CUDA Device: {torch.cuda.get_device_name(torch.cuda.current_device())}")
    deviceToUse = torch.cuda.current_device()  # guarded so this doesn't fail on CPU-only hosts

llm = Ollama(model="llama3.1:latest")

from langchain.globals import set_debug
from langchain.globals import set_verbose

set_debug(True)
set_verbose(True)

repo_path = "docRoot"


# Function to load all documents in a directory and concatenate them into a single document
def load_and_merge_documents_from_directory(directory):
    merged_document = ""
    for root, _, files in os.walk(directory):
        for filename in files:
            print(filename)
            if filename.endswith((".md", ".json")):  # only process Markdown and JSON files
                file_path = os.path.join(root, filename)
                # Open the file with error handling to replace undecodable characters
                with open(file_path, "r", encoding="utf-8", errors="replace") as file:
                    merged_document += file.read() + "\n"  # Append document content with a newline
    return merged_document.strip()  # Remove trailing newline


# Load all documents from the directory and merge them into a single document
merged_document = load_and_merge_documents_from_directory(repo_path)

# Initialize CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=500)

# Split the merged document into chunks
documents = text_splitter.split_text(merged_document)

# Create documents from the splits
documents = text_splitter.create_documents(documents)

# 2. Convert documents to Embeddings and store them
# Load this without reindexing (essentially can this be done without the documents param)
vectorstore = FAISS.from_documents(
    documents=documents,
    embedding=OllamaEmbeddings(
        base_url='http://localhost:11434',
        model='nomic-embed-text:latest',
        show_progress=True,  # boolean, not the string "true"
        num_ctx=2048,        # integer context window
        num_thread=24,
        temperature=0.8,     # sampling options (not meaningful for embeddings)
        top_k=40,
        top_p=0.9
    ),
)

vectorstore.save_local("faiss_index")
```
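To separate the embedding path from the rest of the LangChain pipeline, a quick cross-check (a sketch; it assumes the nomic-embed-text:latest model used above is already pulled and the server is listening on the default port 11434) is to time the embeddings endpoint directly:

```bash
# Time a single embedding request against the running Ollama server.
# /api/embeddings takes a model name and a prompt and returns the embedding vector.
time curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text:latest", "prompt": "The quick brown fox jumps over the lazy dog."}' \
  > /dev/null
```

Running the same command against 0.1.31 and a newer release gives a LangChain-free comparison of embedding latency.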

@jsa2 commented on GitHub (Sep 17, 2024):

I was able to successfully build with the flags, but see only minimal performance improvement :)

Will need to look for any other differences in what changed after 0.1.31.


@dhiltgen commented on GitHub (Oct 23, 2024):

@jsa2 are you still seeing performance regressions? You might want to give our new 0.4.0 RC a try and see if that fares better on your setup. If you're still seeing poor performance on the newer versions and 0.1.31 performed much better, could you share a server log with OLLAMA_DEBUG=1 set on that old release and the newest release so maybe I can spot some difference that might explain it?
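One way to capture such a log on Linux (a sketch; the exact steps depend on whether Ollama runs as a systemd service or in the foreground):

```bash
# Option 1: Ollama installed as a systemd service.
# Add Environment="OLLAMA_DEBUG=1" under [Service] in an override, restart,
# then dump the service log to a file.
sudo systemctl edit ollama
sudo systemctl restart ollama
journalctl -u ollama --no-pager > ollama-debug.log

# Option 2: run the server in the foreground with debug logging enabled.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log
```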


@pdevine commented on GitHub (Oct 3, 2025):

Going to go ahead and close this as stale.

Reference: github-starred/ollama#50825