[GH-ISSUE #1716] is there a way to calculate token size? #971

Closed
opened 2026-04-12 10:39:58 -05:00 by GiteaMirror · 21 comments

Originally created by @ralyodio on GitHub (Dec 26, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1716

I don't know if this limitation exists with the API. I'm switching from the OpenAI API to the Ollama API, and with OpenAI I need to calculate the token size and subtract it from the total of 4096.

Do we need to do that for the Ollama API? If so, how do I calculate the token size of a prompt?


@technovangelist commented on GitHub (Dec 26, 2023):

Hi, thanks for submitting the issue. Ollama doesn't require you to provide a number representing the quantity of tokens to the API. That said, each model has a different context size, and once you go over that, answers can degrade. Some models have a context size of 4k, but 16k and 32k are showing up too. There are also some with 100k, but they will require a huge amount of RAM to run.

Does this answer your question?


@ralyodio commented on GitHub (Dec 26, 2023):

yes, thank you.


@mbrochh commented on GitHub (Jan 25, 2024):

If I may add to this question:

What is the correct way to count the number of tokens when I build my prompt?

When interfacing with OpenAI, I can use the tiktoken library, but I wonder whether that library is also relevant for all the other models that Ollama supports?
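
For the OpenAI side, the counting itself is a one-liner with tiktoken (a minimal sketch; the model name is just an example):

```python
import tiktoken

# Count tokens the way the OpenAI models would see them.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(len(enc.encode("How many tokens is this prompt?")))
```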


@pamelafox commented on GitHub (Feb 16, 2024):

I also have this question, as we have logic that tries to truncate conversation history to fit inside a context window, and it relies on the tiktoken encoding for the GPT models. I see some discussion about Hugging Face tokenizers but haven't seen how easy they are to use. Curious about any packages that help with this, so that we can swap in Ollama models easily.


@pamelafox commented on GitHub (Feb 16, 2024):

Update: I found an approach here:
https://github.com/simonw/ttok/issues/8
So I would need to map the model names here to the model names on Hugging Face in the Python code in order to download the appropriate tokenizer.json. I'll try it out if I get a chance!
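
Following that approach, a minimal sketch might look like the following. The model repo is just an example from later in this thread, and the Ollama-name-to-Hugging-Face mapping is a hypothetical one you would maintain yourself:

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Hypothetical mapping from an Ollama model name to a Hugging Face repo.
OLLAMA_TO_HF = {"openhermes": "teknium/OpenHermes-2.5-Mistral-7B"}

# Download the model's tokenizer.json and count tokens locally.
tokenizer_path = hf_hub_download(OLLAMA_TO_HF["openhermes"], "tokenizer.json")
tokenizer = Tokenizer.from_file(tokenizer_path)
print(len(tokenizer.encode("How long is this prompt?").ids))
```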


@gjke commented on GitHub (Mar 27, 2024):

> Update: I found an approach here: [simonw/ttok#8](https://github.com/simonw/ttok/issues/8) So I would need to map the model names here to the model names on Hugging Face in the Python code in order to download the appropriate tokenizer.json. I'll try it out if I get a chance!

This works if you know what your exact prompt is, which is the case in the `generate` scenario. However, I was struggling to understand how to calculate the number of tokens in a `chat` scenario, where I have a list of messages. It turns out one can do:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
messages = [{"role": "user", "content": "Hello!"}]  # example chat history
tokens = tokenizer.apply_chat_template(messages)
```

`len(tokens)` is what I was looking for. Maybe somebody will find this useful...


@mitar commented on GitHub (Apr 8, 2024):

Where does Ollama even tokenize? It does not use `transformers.AutoTokenizer`, as it is Go code. And models in the library also do not have a layer that says which tokenizer to use. So how does Ollama know which tokenizer to use for different models? Does tokenization happen at the Ollama level or somewhere else?

We would also like to know the number of input tokens; it would be great if Ollama showed that in stats/responses.


@sc-govsin commented on GitHub (Apr 24, 2024):

@gjke do you know what the difference is between the Ollama token count and the Hugging Face token count? I tested the following, and the token counts are never the same:

```python
from langchain_community.llms import Ollama
from transformers import AutoTokenizer

llms = Ollama(model=model)
tokenizer = AutoTokenizer.from_pretrained(model_id_)
token = tokenizer.encode(document)
if len(token) != llms.get_num_tokens(document):
    print("Token counts from Hugging Face and Ollama are not the same.")
```

@mitar commented on GitHub (Apr 24, 2024):

Where is `get_num_tokens` defined?


@sc-govsin commented on GitHub (Apr 24, 2024):

@mitar it's a predefined method provided by LangChain:
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html


@mitar commented on GitHub (Apr 24, 2024):

To me it looks like an issue with the LangChain implementation, where `get_num_tokens` is not really defined for their `Ollama` class and just calls into [some default](https://github.com/langchain-ai/langchain/blob/9111d3a6369da71eb4c78d69bb20d20d00475d9a/libs/core/langchain_core/language_models/base.py#L41-L51).


@ralyodio commented on GitHub (Apr 26, 2024):

Can we get a JavaScript function to calculate token size? I need it for inputs and outputs to do some cost analysis comparing the groq.com API to renting my own GPU on tensordock.com.
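
As a rough illustration of that kind of cost analysis (a sketch in Python rather than JavaScript; all prices and throughput figures below are hypothetical placeholders):

```python
# All numbers here are made-up placeholders, purely for illustration.
input_tokens, output_tokens = 1200, 350
api_price_per_1k_tokens = 0.0007             # hypothetical hosted-API price, USD
gpu_hourly, tokens_per_hour = 0.35, 250_000  # hypothetical rented-GPU figures

total = input_tokens + output_tokens
api_cost = total / 1000 * api_price_per_1k_tokens
gpu_cost = total / tokens_per_hour * gpu_hourly
print(f"API: ${api_cost:.6f}  GPU: ${gpu_cost:.6f}")
```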


@chigkim commented on GitHub (May 2, 2024):

@jmorganca could you expose these API endpoints in Ollama?
The llama.cpp server has `POST /tokenize` and `POST /detokenize`:
https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md
Then we could just count the number of tokens after tokenizing.
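
For reference, a minimal sketch against a llama.cpp server (assumed to be running locally on port 8080; Ollama itself does not expose this endpoint) might look like:

```python
import requests

# Ask a local llama.cpp server to tokenize a prompt and count the result.
resp = requests.post(
    "http://localhost:8080/tokenize",
    json={"content": "How many tokens is this prompt?"},
)
print(len(resp.json()["tokens"]))
```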


@davidearlyoung commented on GitHub (Jun 15, 2024):

I second the request for this feature/ability to be implemented in Ollama.

I was in a situation recently where I really needed to know whether a system prompt and prompt were getting close to an input context budget for a task. I was hoping there was something simple, accurate, and efficient already built into Ollama. My research yesterday led me to discover that this is not the case, so I was a bit frustrated that I have to wait or apply odd workarounds for now.
This concept is already built into, and is a useful feature of, the core system that Ollama is based on, llama.cpp. To be honest, I was surprised that it was not already built into Ollama.

At the moment, I don't have a lot to offer other than encouragement for those working on this. I'm glad that someone is looking into it.


@PyroGenesis commented on GitHub (Jul 22, 2024):

For reference, FastChat provides the `/token_check` endpoint:

### Example

#### Request body

```json
{
  "prompts": [
    {
      "model": "llama3_large_cl",
      "prompt": "hello",
      "max_tokens": 0
    }
  ]
}
```

#### Response body

```json
{
  "prompts": [
    {
      "fits": true,
      "tokenCount": 2,
      "contextLength": 262144
    }
  ]
}
```

The `tokenCount` is very useful, but I find the `contextLength` pretty useful too, because it allows me to dynamically shorten my prompt until it meets the limit (which may change when I switch models), rather than use a fixed limit.
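
A hedged sketch of that shortening loop against such an endpoint (the server URL, model name, and truncation step are all assumptions):

```python
import requests

def fits(prompt: str, model: str = "llama3_large_cl") -> bool:
    # Ask a FastChat-style /token_check endpoint whether the prompt fits.
    body = {"prompts": [{"model": model, "prompt": prompt, "max_tokens": 0}]}
    resp = requests.post("http://localhost:8000/token_check", json=body)
    return resp.json()["prompts"][0]["fits"]

prompt = "some very long prompt " * 2_000
while not fits(prompt):
    prompt = prompt[: len(prompt) // 2]  # coarse truncation, purely illustrative
```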


@InAnYan commented on GitHub (Aug 1, 2024):

Want to add my 2 cents here.

Knowing the token count is very important in the context of writing correct and general algorithms that split text and work with LLMs.

IMHO tokenization is really part of the domain of LLMs, and they shouldn't be separated. I really like your approach of adding new API endpoints for this. It would also be easier to write algorithms if the OpenAI API provided a token estimation endpoint.

On the other hand, it would be too expensive for big companies like OpenAI or Mistral to provide an API for tokenization, because it can be done locally and it would only eat up those companies' network resources.

So, if you develop an app that uses LLMs and you want your app to support all kinds of LLM providers (or local LLMs), then you have to:

- For OpenAI or Mistral (or other big techs) - have a dedicated library for tokenization. The set of OpenAI models is more or less stable; changes are introduced slowly.
- For local models using Ollama - ask Ollama for the token count, because a user may use dozens of different LLMs, and they all have their own tokenizers.

But if both OpenAI and other providers had a token estimation API endpoint, imagine how much easier it would be to write LLM apps: you would just need a class like `OpenAiApiCompatibleChatLanguageModel` and that's it.


@CharbelAD commented on GitHub (Aug 27, 2024):

Any update on this issue? It would be extremely useful to have this feature implemented. Currently I am using a Hugging Face tokenizer and Ollama for inference.


@sc-govsin commented on GitHub (Aug 28, 2024):

@CharbelAD there is a way to pass a Hugging Face tokenizer as a parameter when creating the LLM object. I'll have to look into the code base again to share the code.


@CharbelAD commented on GitHub (Aug 28, 2024):

> @CharbelAD there is a way to pass a Hugging Face tokenizer as a parameter when creating the LLM object. I'll have to look into the code base again to share the code.

Thanks! No need to trouble yourself though, my current setup is just fine for the time being, but it would be more convenient to have everything in Ollama directly.


@jmorganca commented on GitHub (Sep 4, 2024):

Thanks for the issue! Merging with https://github.com/ollama/ollama/issues/3582


@EthanRyne commented on GitHub (Jun 17, 2025):

> @gjke do you know what the difference is between the Ollama token count and the Hugging Face token count? I tested the following, and the token counts are never the same:
>
> ```python
> from langchain_community.llms import Ollama
> from transformers import AutoTokenizer
>
> llms = Ollama(model=model)
> tokenizer = AutoTokenizer.from_pretrained(model_id_)
> token = tokenizer.encode(document)
> if len(token) != llms.get_num_tokens(document):
>     print("Token counts from Hugging Face and Ollama are not the same.")
> ```

That's because you need to use `tokenizer.apply_chat_template()` instead of encoding directly. The chat template adds all the relevant special tokens at every user and assistant turn before converting the entire conversation into a single string, whereas plain `encode` only adds special tokens at the beginning, or maybe not even that. In general, the special-token insertion in the final chat-template tokens that are passed to the model during inference is different.
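
A small illustration of that difference (the model repo is the example used earlier in the thread; exact counts depend on its chat template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
text = "Hello, how are you?"
messages = [{"role": "user", "content": text}]

plain = tokenizer.encode(text)                        # raw text tokens only
templated = tokenizer.apply_chat_template(messages)   # adds role/special tokens
print(len(plain), len(templated))  # the templated count is typically larger
```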
