[GH-ISSUE #7181] The output dimension of the embedding model in Ollama is incorrect. #4561

Open
opened 2026-04-12 15:29:48 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @chelseaztq on GitHub (Oct 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7181

What is the issue?

I have a question. I noticed that for certain embedding models, such as Conan-embedding-v1, the expected inference pipeline is: token embedding first, then pooling, and finally a dense layer that projects the 1024-dimensional pooled vector up to 1792 dimensions. However, when running Conan-embedding-v1 in Ollama, the output is only 1024-dimensional, so the dense-layer step appears to be skipped. How can I solve this?
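For reference, here is a minimal sketch of the pipeline described above (this is not Ollama's code, and toy dimensions are used in place of the real 1024 → 1792): mean pooling over per-token vectors, followed by the final dense projection, which is the step reported missing from Ollama's output.

```python
# Toy illustration of the embedding pipeline: pool token vectors,
# then apply the final dense (linear) projection. The real model
# pools to 1024 dims and projects to 1792; tiny sizes used here.

def mean_pool(token_embeddings):
    """Average the per-token vectors into one sentence vector."""
    n = len(token_embeddings)
    dim = len(token_embeddings[0])
    return [sum(tok[i] for tok in token_embeddings) / n for i in range(dim)]

def dense(vector, weight, bias):
    """Apply the final linear layer: out = W @ v + b."""
    return [sum(w * x for w, x in zip(row, vector)) + b
            for row, b in zip(weight, bias)]

# Toy sizes: pooled dim 4, output dim 6 (real model: 1024 -> 1792).
tokens = [[1.0, 2.0, 3.0, 4.0],
          [3.0, 2.0, 1.0, 0.0]]
pooled = mean_pool(tokens)            # 4-dim, analogous to the 1024-dim output
W = [[0.1] * 4 for _ in range(6)]     # 6 x 4 projection weights (placeholder values)
b = [0.0] * 6
out = dense(pooled, W, b)             # 6-dim -- the projection step in question
print(len(pooled), len(out))
```

The symptom described in the issue corresponds to receiving `pooled` (1024 dims) instead of `out` (1792 dims) from the API.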

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.12

GiteaMirror added the bug label 2026-04-12 15:29:48 -05:00

Reference: github-starred/ollama#4561