[GH-ISSUE #3938] Massive performance regression on 0.1.32 #64479

Closed
opened 2026-05-03 17:48:34 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @jsa2 on GitHub (Apr 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3938

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hi dear developers!

Spent some time trying to figure out if it was some other dependency, but it seems that I get more than 10x better performance on pre-0.1.32 versions of Ollama (e.g. 0.1.31) when creating Ollama embeddings and storing them in FAISS.

Setup

  • Nvidia GPU (4070/4080) on mobile
  • langchain
  • faiss-gpu package
  • CUDA 12.1 and 12.4 (if it matters; not sure if Ollama bundles CUDA)
import os

from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import OllamaEmbeddings
from langchain_text_splitters import CharacterTextSplitter
from langchain_community import embeddings
import torch
cuda_available = torch.cuda.is_available()
print(f"CUDA Available: {cuda_available}")
if cuda_available:
    print(f"Using CUDA Device: {torch.cuda.get_device_name(torch.cuda.current_device())}")
deviceToUse=torch.cuda.current_device()

print(torch.version.cuda)

llm = Ollama(model="llama3")

from langchain.globals import set_debug
from langchain.globals import set_verbose

set_debug(True)
set_verbose(True)

#repo_path = "docRoot"
repo_path = "docRoot/entra-docs"


# Function to load all documents in a directory and concatenate them into a single document
def load_and_merge_documents_from_directory(directory):
    merged_document = ""
    for root, _, files in os.walk(directory):
        for filename in files:
            print(filename)
            if filename.endswith(".md"):  # Assuming all files are Markdown files
                file_path = os.path.join(root, filename)
                # Open the file with error handling to replace undecodable characters
                with open(file_path, "r", encoding="utf-8", errors="replace") as file:
                    merged_document += file.read() + "\n"  # Append document content with a newline
    return merged_document.strip()  # Remove trailing newline


# Load all documents from the directory and merge them into a single document
merged_document = load_and_merge_documents_from_directory(repo_path)

# Initialize CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

# Split the merged document into chunks
documents = text_splitter.split_text(merged_document)

# Create documents from the splits
documents = text_splitter.create_documents(documents)

# 2. Convert documents to Embeddings and store them
# Load this without reindexing (essentially can this be done without the documents param)
vectorstore = FAISS.from_documents( 
    documents=documents,
    embedding=OllamaEmbeddings(
        base_url='http://localhost:11434',
        model='nomic-embed-text',
        show_progress="true",
        num_ctx="8192",
        num_gpu=20,
        num_thread=16,
        temperature=2,
        top_k=10,
        top_p=0.5
    ),
)


vectorstore.save_local("faiss_index")

retriever = vectorstore.as_retriever()


from langchain_community.llms import Ollama



from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(rag_chain.invoke("tell me about workload identities, and possible limitations when used with Conditional Access, please note CAE is not related to Workload Identities in Conditional Access"))
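
To rule out langchain/FAISS overhead, the embeddings endpoint can also be timed directly; a minimal sketch, assuming the default server address and the same nomic-embed-text model (the sample texts are placeholders):

```python
# Time Ollama embedding requests directly, bypassing langchain and FAISS,
# to see whether the slowdown is in the server itself.
import time

import requests

sample_chunks = ["workload identities overview"] * 20  # placeholder inputs

start = time.perf_counter()
for chunk in sample_chunks:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": chunk},
        timeout=120,
    )
    resp.raise_for_status()
elapsed = time.perf_counter() - start
print(f"{len(sample_chunks)} embeddings in {elapsed:.2f}s "
      f"({elapsed / len(sample_chunks):.3f}s per request)")
```

Running the same script against 0.1.31 and 0.1.32 should show whether the per-request latency difference reproduces outside the RAG pipeline.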





OS

WSL2

GPU

Nvidia

CPU

Intel, AMD

Ollama version

0.1.32

GiteaMirror added the bug, nvidia labels 2026-05-03 17:48:35 -05:00
Author
Owner

@shadowfaxproject commented on GitHub (Apr 27, 2024):

I am experiencing a similar issue after upgrading to 0.1.32. All previous versions up to 0.1.31 are working.
I did some benchmarking and found the following on my existing setup:

My config:
Chipset Model: Apple M3 Max. 128GB RAM.
Models tested: llama3:instruct 7b, gemma:instruct 7b
Number of prompts run: 50.

Ollama 0.1.24, 28, 30, 31:
Avg. time per prompt request: 2.2 seconds.
Peak GPU utilization: 90-95%
Peak CPU utilization: 10-15%

Ollama 0.1.32:
Avg. time per request: 6.2 seconds (up from 2.2 sec).
Peak GPU utilization: 5-10% (down from 90-95%)
Peak CPU utilization: 600-1100% (up from 10-15%)

Clearly something changed from .31 -> .32. Looking at the release notes:
"Ollama will now better utilize available VRAM, leading to less out-of-memory errors, as well as better GPU utilization. When running larger models that don't fit into VRAM on macOS, Ollama will now split the model between GPU and CPU to maximize performance."

Attached some screenshots for utilization comparison (0.1.31 vs. 0.1.32 running llama3).

For now sticking to 0.1.31 for better performance.

Author
Owner

@dhiltgen commented on GitHub (May 1, 2024):

Can you check the server logs and see if the number of layers being loaded is changing between the releases where you see performance changes?
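
For comparing the two releases, the relevant lines can be pulled out of a saved server log; a small sketch, assuming the log has been written to a file whose path is passed as the first argument:

```python
# Print the "offloaded X/Y layers" lines from an Ollama server log so the
# layer counts of two versions can be compared side by side.
import re
import sys

pattern = re.compile(r"offloaded (\d+)/(\d+) layers to GPU")

with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            print(f"offloaded {match.group(1)} of {match.group(2)} layers")
```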

Author
Owner

@shadowfaxproject commented on GitHub (May 2, 2024):

@dhiltgen this is what I found in server logs that may be relevant to this:
Version 0.1.31

time=2024-04-26T13:24:16.861-07:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33
llm_load_print_meta: n_layer          = 32
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.31 MiB
llm_load_tensors:      Metal buffer size =  3577.57 MiB

Version 0.1.32

time=2024-05-01T23:17:10.421-07:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=1 layers=29
llm_load_print_meta: n_layer          = 28
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/29 layers to GPU
llm_load_tensors:        CPU buffer size =  4773.90 MiB
llm_load_tensors:      Metal buffer size =   148.53 MiB

As you suspected, not all layers are loaded in 0.1.32 as opposed to 0.1.31. That explains the drop in performance. Also, the variable reallayers is set to 1 in 0.1.32 vs. 33 in 0.1.31.

Author
Owner

@dhiltgen commented on GitHub (May 2, 2024):

@shadowfaxproject can you try the latest RC of 0.1.33 and see if that improves the situation? If not, please try running the server with OLLAMA_DEBUG=1 and share the 0.1.31 vs. 0.1.33 comparison so we can get to the root cause of why it's only loading 1 layer when it apparently should be able to fit the whole model.

https://github.com/ollama/ollama/releases
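
One way to capture such a debug log, assuming the ollama binary is on PATH and no other instance is already listening on port 11434 (systemd-managed installs would set the variable in the service environment instead); a rough sketch:

```python
# Start a foreground Ollama server with debug logging and capture its output;
# runs until interrupted with Ctrl-C.
import os
import subprocess

env = {**os.environ, "OLLAMA_DEBUG": "1"}
with open("ollama-debug.log", "w") as log:
    subprocess.run(["ollama", "serve"], env=env, stdout=log, stderr=subprocess.STDOUT)
```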

Author
Owner

@shadowfaxproject commented on GitHub (May 3, 2024):

@dhiltgen I think I got to the bottom of this on my end. It turns out I am able to run .32 and .33 with full GPU offload as expected now.
I was using the Ollama API from langchain and noticed the num_gpu parameter: Ollama(model=model_name, num_ctx=8000, top_p=1, temperature=0.25, num_gpu=x)
That param was ignored when loading Ollama up to 0.1.31, which meant the value of num_gpu did not matter; it would always load as many layers as possible into the GPU.
From .32 I believe it uses the value of num_gpu from the call explicitly (which in my case was set to a default of 1 somewhere deep in the code base), hence the sudden drop in performance from .32 onward.
The issue seems to have resolved itself for me.
Thanks for your help with this.

Author
Owner

@dhiltgen commented on GitHub (May 3, 2024):

@jsa2 are you still seeing problems? Is the num_gpu=1 issue the same for you? If that's not it, but you still see perf problems, can you share your server log?

Author
Owner

@dhiltgen commented on GitHub (May 3, 2024):

@shadowfaxproject that sounds correct. num_gpu defaults to -1, which indicates "automatic" mode: we try to load as many layers as will fit. num_gpu=0 effectively turns off GPU processing, and any number greater than zero loads exactly that many layers of the model.

I didn't realize we had a bug in prior versions where "1" wasn't being respected properly to load only a single layer.

The name of this parameter was inherited from llama.cpp and is slightly confusing, as it's not picking the number of GPUs but the number of layers to load onto the GPU.
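
The same option can be exercised without langchain through the plain REST API; a minimal sketch of the semantics described above (model name and prompt are placeholders):

```python
# Request a completion with an explicit num_gpu option:
#   -1 = automatic (offload as many layers as fit), 0 = CPU only,
#   N > 0 = offload exactly N layers.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Say hello.",
        "stream": False,
        "options": {"num_gpu": -1},  # automatic layer offload
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```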

Author
Owner

@jsa2 commented on GitHub (May 3, 2024):

EDIT

It seems that the param is passed correctly, but I am not seeing any major difference in performance.


Thanks to both of you for the superb help! In my use case I use Ollama to create embeddings, but I did not see any difference when setting the num_gpu param there. Not sure if the num_gpu param is passed differently in this use case and thus does not get set correctly. But I suspect it is the same situation; I just need to figure out the correct way to pass the params for versions after 0.1.31.

May 03 21:32:12  ollama[1153]: llm_load_tensors: ggml ctx size =    0.11 MiB
May 03 21:32:12  ollama[1153]: llm_load_tensors: offloading 12 repeating layers to GPU
May 03 21:32:12  ollama[1153]: llm_load_tensors: offloading non-repeating layers to GPU
May 03 21:32:12  ollama[1153]: llm_load_tensors: offloaded 13/13 layers to GPU
May 03 21:32:12  ollama[1153]: llm_load_tensors:        CPU buffer size =    44.72 MiB
May 03 21:32:12  ollama[1153]: llm_load_tensors:      CUDA0 buffer size =   216.15 MiB
May 03 21:32:12  ollama[1153]: .......................................................
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: n_ctx      = 8192
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: n_batch    = 512
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: n_ubatch   = 512
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: freq_base  = 1000.0
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: freq_scale = 1
May 03 21:32:12  ollama[1153]: llama_kv_cache_init:      CUDA0 KV buffer size =   288.00 MiB
May 03 21:32:12  ollama[1153]: llama_new_context_with_model: KV self size  =  288.00 MiB, K (f16):  144.00 MiB, V (f16):  144.00 MiB
May 03 21:32:12  ollama[1153]: llama_new_context_with_model:        CPU  output buffer size =     0.00 MiB
May 03 21:32:12  ollama[1153]: llama_new_context_with_model:      CUDA0 compute buffer size =    23.00 MiB
May 03 21:32:12  ollama[1153]: llama_new_context_with_model:  CUDA_Host compute buffer size =     3.50 MiB


embedding=OllamaEmbeddings(
        base_url='http://localhost:11434',
        model='nomic-embed-text',
        show_progress="true",
        num_ctx="8192",
        num_thread=16,
        num_gpu=-1,
        temperature=0.8,
        top_k=10,
        top_p=0.5,
    )
Author
Owner

@jsa2 commented on GitHub (May 3, 2024):

Here is a screenshot from 0.1.31, where I get much better performance.

I will do some additional testing, but like I said, thanks to both of you for the superb help ❤️

Author
Owner

@dhiltgen commented on GitHub (May 4, 2024):

@jsa2 can you clarify? From the output above it looks like we are loading all 13 layers (in both cases?), yet you still see a performance regression in the newer release. If that's correct, then the layer count isn't the root cause and something else is causing the performance impact. In that case I'd suggest a back-to-back comparison between the two versions with OLLAMA_DEBUG=1 set, so we can compare the server logs and see whether it's a misconfiguration glitch or, if the config is the same, whether it points to a behavioral change or performance regression in llama.cpp.
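
A lightweight way to do that back-to-back comparison is to diff the two debug logs after stripping timestamps; a rough sketch (the log file names are placeholders):

```python
# Diff two Ollama server logs (one per version) to spot configuration
# differences; timestamps are stripped so they don't drown out real changes.
import difflib
import re

TS = re.compile(r"^[A-Z][a-z]{2} \d+ [\d:]+\s+\S+\[\d+\]: |^time=\S+ ")

def load(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        return [TS.sub("", line) for line in f]

for line in difflib.unified_diff(
    load("server-0.1.31.log"), load("server-0.1.33.log"),
    fromfile="0.1.31", tofile="0.1.33",
):
    print(line, end="")
```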

Author
Owner

@jsa2 commented on GitHub (Sep 16, 2024):

I noticed (as pointed out by another AI model) the following difference when comparing both runs:

Older versions:
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: YES

New versions:
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
