[GH-ISSUE #10777] Understand the technical differences between Ollama and HuggingFace implementation for better model performance #7078

Closed
opened 2026-04-12 19:00:28 -05:00 by GiteaMirror · 2 comments

Originally created by @jpacin on GitHub (May 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10777

Background

I've been comparing the performance of running models (phi4, gemma, etc.) directly with HuggingFace transformers versus running the same models through Ollama. I've consistently found that Ollama outperforms the direct HuggingFace implementation in multiple ways:

  1. Memory Efficiency: Ollama uses significantly less VRAM

    • With HF, I could only use a context window of 512 tokens on an RTX 4090 (24GB VRAM)
    • When trying to increase to 2048 tokens, PyTorch used all available VRAM and crashed
    • Ollama handles larger context windows more efficiently (see the KV-cache sketch after this list)
  2. Response Quality: Ollama generates more pertinent and detailed answers

  3. Memory Management: Ollama seems to have better strategies for managing model memory
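
For intuition on point 1: the KV cache grows linearly with context length, on top of the weights, so with unquantized weights already near the 24GB limit even a modest context increase tips it over. A back-of-the-envelope sketch (the layer/head counts below are illustrative, not phi4's exact config):

```python
# Back-of-the-envelope KV-cache size: 2 tensors (K and V) per layer, each
# holding n_kv_heads * head_dim values per token. Numbers are illustrative.
def kv_cache_bytes(n_layers=40, n_kv_heads=10, head_dim=128,
                   context_len=2048, bytes_per_elem=2, batch=1):
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem * batch

for ctx in (512, 2048, 8192):
    print(f"{ctx:5d} tokens -> {kv_cache_bytes(context_len=ctx) / 2**30:.2f} GiB")
```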

Investigation So Far

I've tried adjusting various parameters in my HuggingFace implementation:

  • Temperature
  • Top-k
  • Various other sampling parameters
  • Attempted to replicate metaprompting observed in Ollama

I've received some information that there might be differences in:

  • Quantization approaches (Ollama's GGUF quantization vs HF's bitsandbytes)
  • Sampling parameter defaults
  • Chat templates and prompting strategies

Technical Details

  • Hardware: RTX 4090 with 24GB VRAM
  • Models Tested: phi4, gemma, and other similar models
  • Implementation: Python code using HuggingFace transformers
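
For concreteness, the loading path is the standard `from_pretrained` one. A simplified sketch of the baseline (illustrative, not the exact code; the model id is an example):

```python
# Simplified sketch of the HF baseline being compared (illustrative, not
# the exact script): full-precision weights via the standard loading path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # example; gemma etc. were loaded the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~2 bytes/param: ~28 GB for a 14B model, over 24 GB VRAM
    device_map="auto",
)
```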

What I'm Looking For

I'm seeking to understand the technical implementation details that make Ollama more efficient and effective:

  1. Memory Management: How does Ollama manage memory more efficiently for these models?

    • Are there specific VRAM optimization techniques?
    • Is there specialized code for different GPU architectures?
  2. Model Loading and Configuration:

    • How does Ollama configure models differently from the default HF implementations?
    • Are there specific parameters or defaults that improve performance?
  3. Prompting and Templates:

    • How does Ollama handle prompt templates differently?
    • Is there any "metaprompting" or prompt engineering happening automatically?
  4. Specific Code References:

    • Where in the Ollama codebase can I find the implementation of these optimizations?
    • For example, with the Phi-4 model, what specific optimizations are applied?

Goal

My ultimate goal is to replicate Ollama's optimization techniques in my own code using HuggingFace transformers directly, to achieve similar memory efficiency and output quality without needing to use Ollama as an intermediary.

Relevant Findings from Ollama Repository

From my research, I found that Ollama implements several optimizations:

  1. The PARAMETER instructions in the Modelfile allow setting various parameters like:

    • num_ctx (context window size)
    • repeat_penalty
    • temperature
    • And others that affect model behavior
  2. Ollama has specific mechanisms for memory management:

    • OLLAMA_FLASH_ATTENTION environment variable for enabling Flash Attention
    • OLLAMA_NUM_PARALLEL for controlling parallel requests
    • OLLAMA_MAX_LOADED_MODELS for managing multiple models
  3. Ollama uses templates (via TEMPLATE instruction) to properly format prompts for specific model architectures

I would like to understand how these optimizations are implemented at the code level, particularly for the specific models I'm working with.

GiteaMirror added the question label 2026-04-12 19:00:28 -05:00

@rick-github commented on GitHub (May 20, 2025):

> Memory Management: How does Ollama manage memory more efficiently for these models?

You don't indicate how you are loading the HF models, but if you are using the standard `from_pretrained` method, then you are loading the unquantized safetensors model. The equivalent ollama model will be Q4 quantized (q4_k_m for recent models, q4_0 for older). See [here](https://huggingface.co/docs/optimum/en/concept_guides/quantization) for more info on quantization.
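
The closest HF analogue is 4-bit loading via bitsandbytes. A minimal sketch, assuming the `bitsandbytes` package is installed (NF4 is not the same format as GGUF q4_k_m, so outputs won't be identical, but the memory footprint is comparable):

```python
# Sketch: 4-bit loading with bitsandbytes, the rough HF analogue of
# Ollama's Q4 GGUF models (different quantization format, similar footprint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat, not GGUF q4_k_m
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",                      # example model id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
```

At roughly half a byte per weight plus overhead, a ~14B model drops from ~28 GB in fp16 to under 10 GB, which is where the headroom for larger context windows comes from.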

> Model Loading and Configuration:

Besides the quantization, ollama models can be configured via the [Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md). This usually includes setting the stop tokens and sometimes the temperature, but it's rare for other parameters to be set.

> Prompting and Templates:

Instruction (post-trained) models from HF will be accompanied by a JSON file (usually `tokenizer_config.json` or `chat_template.json`) that contains the Jinja template used to control inputs to the model. Ollama models have these translated into a Go template and made available via the `TEMPLATE` setting in the Modelfile. Because Jinja and Go have different semantics, some of the processing done by the Jinja template is not part of the translated Go template. In some cases, the Go template can be extended to offer extra functionality, e.g. [tool processing](https://ollama.com/PetrosStav/gemma3-tools:4b/blobs/60f374fb28f2) for gemma.
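
On the HF side, the bundled Jinja template is applied with `apply_chat_template`; printing the rendered string is the easiest way to compare it against ollama's Go-template output (a sketch; the model id is an example):

```python
# Sketch: rendering the model's bundled Jinja chat template in HF so it
# can be compared against what Ollama's Go template produces.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")  # example model id
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # overrides any default persona
    {"role": "user", "content": "Explain KV caching in one paragraph."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,             # return the rendered string for inspection
    add_generation_prompt=True, # append the assistant-turn header
)
print(prompt)
```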

Metaprompting usually just comes down to instructions in the template about identity and purpose ("Your name is xx and you are a helpful assistant"). These can usually be overridden by supplying a SYSTEM message in a request.

> Specific Code References:

Most of the model-specific stuff happens in the Modelfile.

> I would like to understand how these optimizations are implemented at the code level, particularly for the specific models I'm working with.

Most `PARAMETER` options (temperature, repeat_penalty, top_k, etc.) are passed to the runner with a request and are used to shape the probability distributions (logits) that determine how tokens are generated. This is done primarily by the llama.cpp kernels that implement the logic and matrix operations required to propagate tokens through the model layers.
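
In HF these map onto `generate()` arguments rather than anything you'd import from ollama. Continuing the sketches above (the values shown are ollama's documented defaults):

```python
# Sketch: HF generate() arguments corresponding to Modelfile PARAMETER
# options. Reuses `model`, `tokenizer`, and `messages` from the sketches above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,         # PARAMETER temperature
    top_k=40,                # PARAMETER top_k
    top_p=0.9,               # PARAMETER top_p
    repetition_penalty=1.1,  # roughly PARAMETER repeat_penalty (different formula)
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```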

Runner-level parameters like `num_ctx` and `OLLAMA_NUM_PARALLEL` just set the framework in which token generation happens. They determine the size of the context buffer used to hold input and output tokens, and options like `OLLAMA_FLASH_ATTENTION` control whether space optimizations are applied to that buffer.

Realistically, this is not code that you can import into a Python script. Rather, these are lower-level constructs that are likely already part of the framework (PyTorch, etc.) that you are using to run the model.
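
For example, the rough PyTorch/HF counterparts (a sketch; `flash_attention_2` requires the flash-attn package):

```python
# Sketch: rough HF/PyTorch counterparts of the runner-level settings.
# Reuses bnb_config from the quantization sketch above.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4",                        # example model id
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # ~ OLLAMA_FLASH_ATTENTION=1
    device_map="auto",
)
# There is no direct num_ctx equivalent: the ceiling is the model's
# max_position_embeddings, and VRAM grows with the KV cache, so in
# practice you "set" it by bounding the prompt length you pass in.
```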

What are you trying to achieve? Do you want ollama-level performance from your Python script? If so, why not just use ollama, with your Python script using the [ollama python library](https://github.com/ollama/ollama-python) to process queries?
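
That route is a few lines (sketch, assuming a local ollama server with phi4 pulled):

```python
# Sketch: driving a local Ollama server from Python with the official
# client, passing the same options a Modelfile PARAMETER would set.
import ollama

response = ollama.chat(
    model="phi4",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    options={"num_ctx": 4096, "temperature": 0.8, "repeat_penalty": 1.1},
)
print(response["message"]["content"])
```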


@jpacin commented on GitHub (May 22, 2025):

[Quotes @rick-github's reply above in full.]

Thanks for the reply.
Yes, the idea was to replicate Ollama's performance without depending on it, since we might need to implement various steps in between prompts, inputs, and outputs, and we don't want to rely on third-party software, even if it's OSS.
We ended up using a mix of quantized models to obtain the same results with faster response times, by the way.
