[GH-ISSUE #10824] Embedding model support on Ollama Engine #7106

Open
opened 2026-04-12 19:05:18 -05:00 by GiteaMirror · 2 comments

Originally created by @rjmalagon on GitHub (May 23, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10824

### What is the issue?

The new Ollama engine can't use Qwen2-based embedding models that the llama engine can.

An example is the GTE-Qwen2 embedding models
(this can be tested with rjmalagon/gte-qwen2-1.5b-instruct-embed-f16:latest, converted from https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct).
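As a minimal reproduction, assuming a local instance on the default port, the model above can be pulled and queried through the standard /api/embed endpoint:

```shell
# Pull the converted GTE-Qwen2 embedding model referenced above
ollama pull rjmalagon/gte-qwen2-1.5b-instruct-embed-f16:latest

# Request an embedding. On the new Ollama engine this fails with HTTP 500
# ("this model does not support embeddings"); on the llama engine it
# returns a 1536-dimensional vector (qwen2.embedding_length = 1536).
curl http://localhost:11434/api/embed \
  -d '{
        "model": "rjmalagon/gte-qwen2-1.5b-instruct-embed-f16:latest",
        "input": "What is the capital of France?"
      }'
```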

### Relevant log output

Ollama engine output when this model is loaded:

```shell
time=2025-05-23T04:37:31.411Z level=INFO source=server.go:631 msg="llama runner started in 4.52 seconds"
time=2025-05-23T04:37:31.431Z level=INFO source=server.go:939 msg="llm embedding error: this model does not support embeddings"
[GIN] 2025/05/23 - 04:37:31 | 500 |  4.626549125s |       127.0.0.1 | POST     "/api/embed"
```

And the llama engine output when the same model is loaded:

```shell
llama_model_loader: loaded meta data with 22 key-value pairs and 339 tensors from /root/.ollama/models/blobs/sha256-224550f5a5221748d8d1af5aa5a1aad3510d26f20beb8dec918dab67510c075e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = gte-Qwen2-1.5B
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 28
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 131072
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 1536
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 8960
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 12
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 1
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,151646]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,151646]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  141 tensors
llama_model_loader: - type  f16:  198 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 3.31 GiB (16.00 BPW) 
load: special tokens cache size = 3
load: token to piece cache size = 0.9308 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 1536
print_info: n_layer          = 28
print_info: n_head           = 12
print_info: n_head_kv        = 2
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 6
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 8960
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = -1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 1.5B
print_info: model params     = 1.78 B
print_info: general.name     = gte-Qwen2-1.5B
print_info: vocab type       = BPE
print_info: n_vocab          = 151646
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151643 '<|endoftext|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors:        ROCm0 model buffer size =  2943.83 MiB
load_tensors:   CPU_Mapped model buffer size =   444.28 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 32768
llama_context: n_ctx_per_seq = 32768
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (32768) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:  ROCm_Host  output buffer size =     0.58 MiB
llama_kv_cache_unified: kv_size = 32768, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1, padding = 256
llama_kv_cache_unified:      ROCm0 KV buffer size =   896.00 MiB
llama_kv_cache_unified: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_context:      ROCm0 compute buffer size =   299.18 MiB
llama_context:  ROCm_Host compute buffer size =    67.01 MiB
llama_context: graph nodes  = 931
llama_context: graph splits = 2
time=2025-05-23T04:39:40.779Z level=INFO source=server.go:631 msg="llama runner started in 2.26 seconds"
[GIN] 2025/05/23 - 04:39:41 | 200 |   2.92365617s |       127.0.0.1 | POST     "/api/embed"
```

### OS

Docker

### GPU

AMD

### CPU

AMD

### Ollama version

0.7.1

GiteaMirror added the feature request label 2026-04-12 19:05:18 -05:00

@rick-github commented on GitHub (May 23, 2025):

The new engine currently doesn't support embeddings at all; clients have to use the old engine. It's a WIP.
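A sketch of that override, assuming OLLAMA_NEW_ENGINE is the environment variable in question (its name and exact semantics may vary by version):

```shell
# Force the old (llama.cpp) runner for all models served by this instance.
# OLLAMA_NEW_ENGINE is an assumption here, not confirmed by the thread;
# check the environment-variable docs for your Ollama version.
OLLAMA_NEW_ENGINE=false ollama serve
```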


@rjmalagon commented on GitHub (May 23, 2025):

Thanks! Fortunately, my embedding server runs a dedicated instance for this purpose, and it is easy to override it to use the llama engine via the environment variable.

Great work on the new Ollama engine. I will wait for embedding support (and reranker support, and multimodal embeddings!).

> The new engine currently doesn't support embeddings at all; clients have to use the old engine. It's a WIP.

I will ask to have this relabeled as a feature request.
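Since this report is running under Docker, the dedicated-instance workaround described above might look like the following (container name, host port, and volume are illustrative; OLLAMA_NEW_ENGINE as the engine toggle is an assumption, as above):

```shell
# A second Ollama container dedicated to embeddings, pinned to the llama
# engine and exposed on its own host port.
docker run -d --name ollama-embed \
  -e OLLAMA_NEW_ENGINE=false \
  -v ollama-embed:/root/.ollama \
  -p 11435:11434 \
  ollama/ollama
```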

Reference: github-starred/ollama#7106