[GH-ISSUE #3747] Support XLMRobertaModel architecture #28070

Open
opened 2026-04-22 05:50:58 -05:00 by GiteaMirror · 20 comments
Owner

Originally created by @wouterverduin on GitHub (Apr 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3747

Hi all from Ollama!

First off: Great work with Ollama, keep up the good work!

What I am missing, though, are models in different languages (Dutch, for me personally). Is it possible to add multilingual embeddings like "intfloat/multilingual-e5-large-instruct"?

If there is a way to do this myself, I would love directions!

Thanks in advance!

GiteaMirror added the model, feature request labels 2026-04-22 05:50:58 -05:00

@thinkverse commented on GitHub (Apr 19, 2024):

XLMRobertaModel is not yet a supported model architecture. AFAIK, Ollama currently only supports the bert and nomic-bert embedding architectures. According to the Embedding models blog post (https://ollama.com/blog/embedding-models), more should be available later.

https://github.com/ollama/ollama/blob/8d1995c625e7f2ed2ff98eb099e1bd8d7e6e133e/server/images.go#L59-L61


@FidelCastillo commented on GitHub (Apr 20, 2024):

Is it possible to use this BERT model in Ollama?

jinaai/jina-embeddings-v2-base-es


@thinkverse commented on GitHub (Apr 20, 2024):

Is it possible to use this BERT model in Ollama?

jinaai/jina-embeddings-v2-base-es

No, because JinaAI uses a modified version of BERT they created, called JinaBert, and that architecture currently isn't supported.

You can see the architecture in the config.json file: https://huggingface.co/jinaai/jina-embeddings-v2-base-es/blob/main/config.json#L4
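A quick way to verify this before attempting an import is to read the `architectures` field from a downloaded model's `config.json`. A minimal sketch (the `model_architectures` helper name is mine, not part of any library):

```python
import json

def model_architectures(config_path):
    """Return the 'architectures' list declared in a Hugging Face config.json."""
    with open(config_path) as f:
        return json.load(f).get("architectures", [])
```

For jinaai/jina-embeddings-v2-base-es this reports a JinaBert class rather than plain `BertModel`, which is why the import fails.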


@dcasota commented on GitHub (Apr 22, 2024):

@thinkverse Actually, there is not much choice. E.g. mxbai-embed-large is listed; however, in examples/langchain-python-rag-privategpt/ingest.py and privateGPT.py it cannot be used, because the API path isn't in /sentence-transformers.
On the other hand, paraphrase-multilingual-MiniLM-L12-v2 would be very nice as an embeddings model, as it supports 50 languages, but the model is not listed in https://ollama.com/blog/embedding-models . Its architecture is declared as BertModel, and the API path in /sentence-transformers would fit, too.

edited: multilingual embedding is a developer field. Applying https://github.com/ollama/ollama/issues/2572 + https://github.com/ollama/ollama/issues/2965 seems to work to make use of paraphrase-multilingual-MiniLM-L12-v2.


@YuryAlsheuski commented on GitHub (May 10, 2024):

Hi @dcasota, @thinkverse! I tried to convert paraphrase-multilingual-MiniLM-L12-v2 to GGUF with:
python /root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py ./paraphrase-multilingual-MiniLM-L12-v2 --outtype f16 --outfile paraphrase-multilingual-MiniLM-L12-v2.gguf
but got the following trace:

Loading model: paraphrase-multilingual-MiniLM-L12-v2
gguf: This GGUF file is for Little Endian only
Set model parameters
gguf: context length = 512
gguf: embedding length = 384
gguf: feed forward length = 1536
gguf: head count = 12
gguf: layer norm epsilon = 1e-12
gguf: file type = 1
Set model tokenizer
Traceback (most recent call last):
File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 3001, in <module>
main()
File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 2988, in main
model_instance.set_vocab()
File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 2530, in set_vocab
tokens, toktypes, tokpre = self.get_vocab_base()
File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 238, in get_vocab_base
tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2089, in from_pretrained
return cls._from_pretrained(
File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2292, in _from_pretrained
tokenizer_file_handle = json.load(tokenizer_file_handle)
File "/usr/lib/python3.10/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

So, is there a way to convert and use it manually? Thanks!
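A `JSONDecodeError` at line 1, column 1 while reading `tokenizer.json` usually means the file on disk isn't JSON at all. One possible cause (an assumption here, not confirmed in this thread) is that the repository was cloned without Git LFS, leaving a small pointer stub instead of the real tokenizer file; note that a later workaround in this thread does start with `git lfs install`. A minimal check (the `is_lfs_pointer` helper is hypothetical, but the `version https://git-lfs...` header is the standard LFS pointer format):

```python
def is_lfs_pointer(path):
    """Heuristic: Git LFS pointer files start with a 'version https://git-lfs' line."""
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs")
```

If this returns True for `tokenizer.json`, re-fetch the model with `git lfs install` followed by `git lfs pull`.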


@kilmarnock commented on GitHub (May 16, 2024):

The model jina/jina-embeddings-v2-base-de is now downloadable from the Ollama model list (https://ollama.com/jina/jina-embeddings-v2-base-de). It uses the jina-bert-v2 architecture. Unfortunately, this architecture is not supported in Ollama v0.1.38:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'jina-bert-v2'

However it claims

Ollama Usage
This model is an embedding model, meaning it can only be used to generate embeddings.

You can get it by doing ollama pull jina/jina-embeddings-v2-base-de


@thinkverse commented on GitHub (May 16, 2024):

The model jina/jina-embeddings-v2-base-de is now downloadable from the Ollama model list. It uses the jina-bert-v2 architecture. Unfortunately, this architecture is not supported in Ollama v0.1.38:

The PR to update the llama.cpp backend hasn't been merged yet (https://github.com/ollama/ollama/pull/4414); hopefully it lands in v0.1.39.


@JoanFM commented on GitHub (May 16, 2024):

the model jina/jina-embeddings-v2-base-de is now downloadable from the ollama model list. It uses a jina-bert-v2 architecture. Unfortunately, this architecture is not supported in ollama v.0.1.38:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'jina-bert-v2'

However it claims

Ollama Usage
This model is an embedding model, meaning it can only be used to generate embeddings.

You can get it by doing ollama pull jina/jina-embeddings-v2-base-de

Hey @kilmarnock,

This is my fault; I pushed the model with the documentation too early.

We are waiting for the mentioned PR to be merged and released.

Once it is done I will update the docs with the minimum version needed.

@FidelCastillo this also applies for the Spanish model!


@dcasota commented on GitHub (May 16, 2024):

Hi @dcasota, @thinkverse! I tried to convert paraphrase-multilingual-MiniLM-L12-v2 to gguf like: python /root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py ./paraphrase-multilingual-MiniLM-L12-v2 --outtype f16 --outfile paraphrase-multilingual-MiniLM-L12-v2.gguf but have the next trace:

Loading model: paraphrase-multilingual-MiniLM-L12-v2 gguf: This GGUF file is for Little Endian only Set model parameters gguf: context length = 512 gguf: embedding length = 384 gguf: feed forward length = 1536 gguf: head count = 12 gguf: layer norm epsilon = 1e-12 gguf: file type = 1 Set model tokenizer Traceback (most recent call last): File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 3001, in main() File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 2988, in main model_instance.set_vocab() File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 2530, in set_vocab tokens, toktypes, tokpre = self.get_vocab_base() File "/root/.ollama/ollama/llm/llama.cpp/convert-hf-to-gguf.py", line 238, in get_vocab_base tokenizer = AutoTokenizer.from_pretrained(self.dir_model) File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2089, in from_pretrained return cls._from_pretrained( File "/root/.ollama/ollama/llm/llama.cpp/.venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2292, in _from_pretrained tokenizer_file_handle = json.load(tokenizer_file_handle) File "/usr/lib/python3.10/json/init.py", line 293, in load return loads(fp.read(), File "/usr/lib/python3.10/json/init.py", line 346, in loads return _default_decoder.decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

So, do we have opportunities to convert and use it manually? Thanks!

Unfortunately, I didn't check the llama.cpp version at that time. With the latest Ollama bits, python3.11 convert-hf-to-gguf.py --outtype f32 ./paraphrase-multilingual-MiniLM-L12-v2/ --outfile ./models/paraphrase-multilingual-MiniLM-L12-v2.gguf fails with

(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt/llama.cpp ]$ python3.11 convert-hf-to-gguf.py --outtype f32 ./paraphrase-multilingual-MiniLM-L12-v2/ --outfile ./models/paraphrase-multilingual-MiniLM-L12-v2.gguf
INFO:hf-to-gguf:Loading model: paraphrase-multilingual-MiniLM-L12-v2
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 512
INFO:hf-to-gguf:gguf: embedding length = 384
INFO:hf-to-gguf:gguf: feed forward length = 1536
INFO:hf-to-gguf:gguf: head count = 12
INFO:hf-to-gguf:gguf: layer norm epsilon = 1e-12
INFO:hf-to-gguf:gguf: file type = 0
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/llama.cpp/convert-hf-to-gguf.py", line 2546, in <module>
    main()
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/llama.cpp/convert-hf-to-gguf.py", line 2531, in main
    model_instance.set_vocab()
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/llama.cpp/convert-hf-to-gguf.py", line 2072, in set_vocab
    tokens, toktypes, tokpre = self.get_vocab_base()
                               ^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/llama.cpp/convert-hf-to-gguf.py", line 377, in get_vocab_base
    tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2089, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2292, in _from_pretrained
    tokenizer_file_handle = json.load(tokenizer_file_handle)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/__init__.py", line 293, in load
    return loads(fp.read(),
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt/llama.cpp ]$

@stringang commented on GitHub (May 27, 2024):

After a long time of struggling, I found that it is not supported.


@YuryAlsheuski commented on GitHub (May 27, 2024):

In any case, it would be great if paraphrase-multilingual-MiniLM-L12-v2 were supported, at least via manual import. It is pretty popular at the moment. Thanks!


@dcasota commented on GitHub (May 27, 2024):

@YuryAlsheuski With an older branch of llama.cpp, the manual import seems to work

sudo tdnf install -y git-lfs git
git lfs install

git clone -b b2536 https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip3 install -r requirements.txt

# paraphrase-multilingual-MiniLM-L12-v2
export Model=paraphrase-multilingual-MiniLM-L12-v2
export HuggingFacePath=https://huggingface.co/sentence-transformers
git clone $HuggingFacePath/$Model
python3 convert-hf-to-gguf.py ./$Model --outfile ./models/$Model.gguf --outtype f32

cd models
sudo cat <<EOF | sudo tee ./Modelfile
FROM ./paraphrase-multilingual-MiniLM-L12-v2.gguf
EOF
ollama create paraphrase-multilingual-MiniLM-L12-v2 -f ./Modelfile
cd ..
cd ..
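Once `ollama create` succeeds, one way to sanity-check a multilingual embedding model is to embed a sentence and its translation and compare cosine similarity. The vectors below are plain lists; in practice they would come from Ollama's embeddings endpoint. The `cosine_similarity` helper is my own sketch, not part of Ollama:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

For a well-behaved multilingual model, parallel sentences (e.g. Dutch and English versions of the same text) should score close to 1.0, and unrelated sentences noticeably lower.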

@thinkverse commented on GitHub (May 27, 2024):

it would be great if paraphrase-multilingual-MiniLM-L12-v2 will be supported

One user has uploaded it to the registry already: https://ollama.com/nextfire/paraphrase-multilingual-minilm, haven't tested it so I cannot speak on its performance.


@thinkverse commented on GitHub (May 27, 2024):

Is it possible to use this BERT model in Ollama?

jinaai/jina-embeddings-v2-base-es

The Jina team has added their models to the registry, you can find them on their user profile: https://ollama.com/jina. 👍


@JoanFM commented on GitHub (May 28, 2024):

it would be great if paraphrase-multilingual-MiniLM-L12-v2 will be supported

One user has uploaded it to the registry already: https://ollama.com/nextfire/paraphrase-multilingual-minilm, haven't tested it so I cannot speak on its performance.

Hey @thinkverse,

We still need a new release for these to work. Do you know when the next release will be?


@gevzak commented on GitHub (Jun 5, 2024):

It seems like jinaai/jina-embeddings-v2-base-de works since the latest Ollama version, but intfloat/multilingual-e5-large-instruct support is still missing in llama.cpp.


@dcasota commented on GitHub (Jun 5, 2024):

@wouterverduin for Dutch language support in an Ollama installation, you might use sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2.

Here is a list of the embedding models mentioned in this thread.

| Embeddings LLM | In Ollama hub? | Architecture | Manual import (via GGUF)? | Dutch language? |
|---|---|---|---|---|
| intfloat/multilingual-e5-large-instruct | no | xlm-roberta | ? | yes |
| jinaai/jina-embeddings-v2-base-de | yes | bert | (not necessary anymore) | no |
| sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | yes (nextfire/paraphrase-multilingual-minilm) | bert | (yes, but not necessary anymore) | yes |
| sentence-transformers/all-MiniLM-L6-v2 | yes (chroma/.. and tazarov/..) | bert | (yes, but not necessary) | no |
| mixedbread-ai/mxbai-embed-large-v1 | no | bert | ? | no |

Some information:

  • The easiest path in Ollama is a model that already supports both the preferred language and the task goal.
    If this is not the case, you can try to extend Ollama, but there are limitations. The main limitation is the architecture: Ollama supports 'bert' as a Natural Language Processing model architecture. That said, how do you find a model with this architecture?

  • The HuggingFace hub is the biggest model hub, with over 700K models. Its GitHub organization, https://github.com/huggingface, has grown dramatically over the last months into an enriched portfolio of features from scientific work.

  • First, according to https://huggingface.co/models?language=nl&sort=downloads, to find an easily interoperable LLM for Ollama, start in 'Filter by name' with a known LLM (sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2), open it and note its architecture, in this case 'bert'. With this architecture filter, 47,571 models are counted.
    This workaround seems necessary because a direct 'bert' criterion in the 'Other' filter section doesn't work.

  • Then, go to the 'Languages' filter and select 'Dutch'. With that additional filter, 86 models are counted.
    Here is the extract, sorted by 'Trending':

    ![HuggingFace models filtered by 'bert' architecture and Dutch language, sorted by 'Trending'](https://github.com/ollama/ollama/assets/14890243/7b346944-a840-4343-8d34-b5474f456a9f)
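The two filter steps above (architecture first, then language) amount to an intersection over model metadata. A minimal pure-Python sketch of that logic, using made-up metadata records (the field names and entries here are hypothetical illustrations, not the actual HuggingFace API schema):

```python
# Hypothetical model metadata records, for illustration only.
models = [
    {"id": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
     "architecture": "bert", "languages": ["nl", "de", "en"]},
    {"id": "intfloat/multilingual-e5-large-instruct",
     "architecture": "xlm-roberta", "languages": ["nl", "en"]},
    {"id": "sentence-transformers/all-MiniLM-L6-v2",
     "architecture": "bert", "languages": ["en"]},
]

def filter_models(records, architecture, language):
    """Keep records that match the architecture and support the language."""
    return [r["id"] for r in records
            if r["architecture"] == architecture and language in r["languages"]]

# Architecture filter plus language filter, as described in the steps above.
print(filter_models(models, "bert", "nl"))
```

On the real portal, the same intersection happens server-side when both filters are selected.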

  • The HuggingFace portal supports task goal filters under 'Natural Language Processing'. There are several choices, e.g. 'Sentence Similarity', 'Text Generation', 'Feature Extraction', etc.
    For example, sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 is in the category 'Sentence Similarity',
    google-bert/bert-base-multilingual-uncased is in 'Fill-Mask', and nlptown/bert-base-multilingual-uncased-sentiment is in 'Text Classification'.
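To illustrate what a 'Sentence Similarity' model is actually used for: the embedding vectors it produces are typically compared with cosine similarity. A minimal sketch with toy vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
v_query = [0.2, 0.7, 0.1]
v_doc = [0.25, 0.6, 0.2]
print(cosine_similarity(v_query, v_doc))
```

This is the scoring step a 'Sentence Similarity' model feeds; 'Fill-Mask' and 'Text Classification' models have different heads and are not directly usable this way.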

  • There is a chance that these filtered bert models can be used in Ollama by manual import, see https://github.com/ollama/ollama/issues/3747#issuecomment-2134070860.

  • Edited June 6th: If the target language is German or English, integrating jinaai/jina-embeddings-v2-base-de in ingest.py worked in my lab; however, Ollama throws an error message saying the model has to be trained first. This behavior differs from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2.

    Hope this helps.

<!-- gh-comment-id:2150775092 --> @dcasota commented on GitHub (Jun 5, 2024), introducing the table and notes above: "@wouterverduin for Dutch language support in an Ollama installation, you might use `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`. Here a list with the LLMs mentioned in this thread."

@Kraego commented on GitHub (Jun 12, 2024):

> Seems like jinaai/jina-embeddings-v2-base-de works since the latest Ollama version, but intfloat/multilingual-e5-large-instruct support is still missing in llama.cpp

Can confirm this, working with: ollama:0.1.43
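Once such a model runs in Ollama, it can be queried over Ollama's REST API (`POST /api/embeddings` with a `model` and a `prompt` field). A hedged Python sketch: the model name below is only an example taken from the table earlier in this thread, and the network call assumes a locally running Ollama server on the default port.

```python
from typing import Any

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local endpoint

def build_embedding_request(model: str, text: str) -> "dict[str, Any]":
    """Payload for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

if __name__ == "__main__":
    import requests  # third-party; only needed for the actual HTTP call
    payload = build_embedding_request("nextfire/paraphrase-multilingual-minilm",
                                      "Hallo, wereld!")
    resp = requests.post(OLLAMA_URL, json=payload, timeout=60)
    resp.raise_for_status()
    embedding = resp.json()["embedding"]
    print(len(embedding))  # dimensionality of the returned vector
```

The response is a JSON object whose `embedding` field holds the vector; two such vectors can then be compared with cosine similarity.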

<!-- gh-comment-id:2162898386 -->

@hasansalimkanmaz commented on GitHub (Oct 17, 2024):

@wouterverduin You can run a bunch of multilingual models in GGUF format directly with Ollama thanks to [this new integration](https://huggingface.co/docs/hub/en/ollama) between Ollama and HF.
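With that integration, GGUF repositories on the HuggingFace hub are addressed as `hf.co/{username}/{repository}`, optionally with a quantization tag appended after a colon. A small sketch that builds such a reference and hands it to the `ollama` CLI; the repository name is a placeholder, and the subprocess call assumes the `ollama` binary is installed:

```python
import subprocess
from typing import Optional

def hf_gguf_ref(repo_id: str, quant: Optional[str] = None) -> str:
    """Build the model reference Ollama accepts for a GGUF repo hosted on HF."""
    ref = f"hf.co/{repo_id}"
    return f"{ref}:{quant}" if quant else ref

if __name__ == "__main__":
    # Placeholder repo name; any HF repo that contains GGUF files should work.
    subprocess.run(["ollama", "pull", hf_gguf_ref("username/repository-GGUF")],
                   check=True)
```

The same reference string works with `ollama run` for interactive use.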

<!-- gh-comment-id:2418979548 -->

@dcasota commented on GitHub (Oct 17, 2024):

@hasansalimkanmaz this is great news. It is an open-source game changer that brings together LLM scientists and business adopters who introduce, monitor, and evaluate their custom solutions. Hopefully we will see a bunch of success stories.

<!-- gh-comment-id:2419011507 -->

Reference: github-starred/ollama#28070