[GH-ISSUE #11897] Cogito v2 #33659

Closed
opened 2026-04-22 16:33:40 -05:00 by GiteaMirror · 19 comments
Owner

Originally created by @machiav3lli on GitHub (Aug 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11897

Cogito announced their new v2: https://www.deepcogito.com/research/cogito-v2-preview

Cogito v1 fared pretty well, providing one of the best, if not the best, post-trained variants of the Llama 3 family. The numbers seem to paint a similar picture for v2.

GiteaMirror added the model label 2026-04-22 16:33:40 -05:00

@rick-github commented on GitHub (Aug 14, 2025):

These are fine-tuned versions of models supported by ollama, so can be [imported](https://github.com/ollama/ollama/blob/main/docs/import.md).


@machiav3lli commented on GitHub (Aug 15, 2025):

> These are fine-tuned versions of models supported by ollama, so can be [imported](https://github.com/ollama/ollama/blob/main/docs/import.md).

It's one thing for them to be importable, and another for them to be added to the catalog at https://ollama.com/search. Considering that [v1 was highlighted there](https://ollama.com/library/cogito), I'd wish the same for v2.


@fighter3005 commented on GitHub (Aug 29, 2025):

It would be nice to have it in the catalog, since it is probably the most capable vision model we can get without an entirely new model architecture, or am I mistaken? GLM-4.5V would be great, but since it is a new architecture, this Llama 4 Scout-based version would be much easier to get running.


@fighter3005 commented on GitHub (Aug 30, 2025):

Maybe somebody can help me here. I am relatively inexperienced with ollama, but I have [this](https://colab.research.google.com/drive/1Tsq1Lhwq3Orcc04yW0iigLfs7FiZUplj?usp=sharing) Notebook where I try to get Cogito v2 109B working. However, I always get

```
gathering model components
Error: no Modelfile or safetensors files found
```

even though the model is downloaded and the files are where they should be (at least I think...). The only thing I can think of is that `model.safetensors.index.json` is somehow not what ollama expects. ~~Sadly I have no access to the original Meta model, so I can't check how that one looks yet.~~

Maybe someone can give it a look and we can get this working.

Note: Importing from HF also does not work because of multiple files... I tried the unsloth gguf q3_x_s, but vision is not working at all.

[This Notebook worked for medgemma.](https://colab.research.google.com/drive/16syIl-dg9V3XXVa6THriDTmqAkKn68m_?usp=sharing)


Update: I am now trying to first convert the model to GGUF f16 via llama.cpp, then create a quantized model with ollama. However, I am running almost out of storage (roughly 600 GB needed, maybe more) and also out of RAM for the quantization; I only have 128 GB. If someone has a machine that is capable enough, [here is the notebook.](https://colab.research.google.com/drive/1pMiIiXLSJ45--v-KtZSNVZ0c-1-lE3J0?usp=sharing)


@fighter3005 commented on GitHub (Sep 1, 2025):

@amsaravi you helped with MedGemma. Do you have any ideas?


@fighter3005 commented on GitHub (Sep 3, 2025):

I got a first result, with vision broken. I will try to fix it, but I have to rely on already-quantized versions from others, since I cannot do a clean convert with llama.cpp and quantize with ollama for lack of the needed hardware.
`oscar_while/cogito-v2-preview-llama-109B-MoE_version1:Q4_K_M`


@rick-github commented on GitHub (Sep 3, 2025):

cogito v2 is a fine-tune of llama4, so requires the new ollama engine to run. The new engine uses a fused GGUF file, ie text weights and vision weights in the same file - it doesn't support a separate projector. Converting from safetensors with ollama instead of llama.cpp should create the correct form of GGUF file.


@fighter3005 commented on GitHub (Sep 3, 2025):

> cogito v2 is a fine-tune of llama4, so requires the new ollama engine to run. The new engine uses a fused GGUF file, ie text weights and vision weights in the same file - it doesn't support a separate projector. Converting from safetensors with ollama instead of llama.cpp should create the correct form of GGUF file.

But Ollama does not support multiple safetensors files, right? Doesn't llama.cpp also create a fused file? 🤔 I think I got one GGUF back when converting the safetensors from deepcogito. Sadly I can't quantize with ollama with my 128 GB of RAM, so I can't test.


@rick-github commented on GitHub (Sep 3, 2025):

> But Ollama does not support multiple safetensor files, right?

No, ollama supports multiple safetensors files.

> Doesn't llama.cpp also create a fused file?

No, llama.cpp does not create fused GGUFs.


@fighter3005 commented on GitHub (Sep 3, 2025):

@rick-github Sorry to bother you again, but if I do it like [this](https://colab.research.google.com/drive/1QM8RK-ruen1aG_qrbkm6fINW_-sF2TZT?usp=sharing), which is how it is detailed in the docs (I believe), I just get "Error: no Modelfile or safetensors files found".


@rick-github commented on GitHub (Sep 3, 2025):

The problem is here: https://github.com/ollama/ollama/blob/fb92b61754ed1ec1d9678564d18c202b32980589/parser/parser.go#L258

The server tries to verify that the safetensor files are `application/octet-stream` by using `http.DetectContentType()` to determine the type of the contents of the file. The safetensor [format](https://huggingface.co/docs/safetensors/en/index#format) is pretty simple, with the first 8 bytes being the length of the following header. Unfortunately the length of the header in `model-00001-of-00050.safetensors` is 256 bytes, or `00 01 00 00 00 00 00 00`, and this [matches](https://cs.opensource.google/go/go/+/refs/tags/go1.25.1:src/net/http/sniff.go;l=174) the [pattern](https://mimesniff.spec.whatwg.org/#matching-a-font-type-pattern:~:text=Embedded%20OpenType%20signature.-,00%2001%2000%2000,-FF%20FF%20FF) for `font/ttf`. So the server is skipping the safetensor files because at least one of them looks like a font file.

I had a poke through the go source for `DetectContentType()` in the hope of finding an override (go does this a lot) but found nothing - this will need a change to the ollama code.


@machiav3lli commented on GitHub (Sep 5, 2025):

I wouldn't say the issue is solved in that sense, as Cogito v2 wasn't added to the catalog like v1 was. Or am I missing something?


@rick-github commented on GitHub (Sep 8, 2025):

It was auto-closed when the PR fixing the import issue was merged.


@rick-github commented on GitHub (Sep 13, 2025):

ollama 0.11.11 will correctly import the safetensors for cogito-v2. However, a template will need to be provided. A simple workaround is to use the one from [llama4](https://ollama.com/library/llama4:latest/blobs/161e5d878840), but this doesn't allow control of thinking, and tool calls are not handled.

```console
$ ollama run cogito-v2:109b-preview-q4_K_M
>>> hello
Hi there! How are you doing today? I'm here to chat and help out however I can.

>>> how many cars does a tyre have?
Haha, I see what you did there! That's a clever play on words. A tire (or tyre, depending on which side of the
Atlantic you're on) doesn't have any cars - it's actually a part that goes on a car. One tire goes on one wheel,
and most cars have 4 wheels, so typically a car has 4 tires. But I love how you phrased the question - it's a
fun little riddle! 😄

>>> /bye
```

@rick-github commented on GitHub (Sep 13, 2025):

Vision works too.

```console
$ ollama run cogito-v2:109b-preview-q4_K_M describe this image: ./picture.png
Added image './picture.png'
This image shows an adorable white puppy sitting on what appears to be a marble or stone step. The puppy is
small, fluffy, and looks like it could be a Samoyed or similar breed. It's wearing a red collar with a small
bell attached. The puppy's head is turned slightly to the side, giving it a curious expression. The background
is out of focus, but it appears to be an indoor setting with a dark wall or area behind the puppy. The overall
composition highlights the puppy's cuteness and innocence, making it a heartwarming image.
```

@rick-github commented on GitHub (Sep 13, 2025):

The template at https://hf.co/v2/unsloth/cogito-v2-preview-llama-109B-MoE-GGUF/blobs/sha256:783adfd1d2535a48733a6538e08891314a04e9815b1a3f5f7e58195da6532108 handles tools. Just need to tweak for thought control.


@fighter3005 commented on GitHub (Sep 16, 2025):

Nice work. I guess it should work now.

So for the template, basically start with:

```
TEMPLATE """{{- if or .System .Tools }}<|header_start|>system<|header_end|>
{{- if and (.System) (not (.Tools)) }}
{{- if and .IsThinkSet .Think }}
Enable deep thinking subroutine.
{{- end }}

{{ .System }}{{- end }}
{{- if .Tools }}
{{- if and $.IsThinkSet $.Think }}
Enable deep thinking subroutine.
{{- end }}
```

and then in the context section prefill with the thinking token:

```
{{- if and $.IsThinkSet (eq $i $lastUserIdx) }}
  {{- if $.Think }}
<think>\n
  {{- end }}
{{- end }}
```

like with qwen, or, probably more suitable in your example:

```
{"content": "{{ .Content }}"}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{- if and $.IsThinkSet $.Think }}
<think>\n
{{- end }}
```

...
Let's see if somebody tries this.

PS: I have no idea how the templating works in ollama, so this is just a wild guess at how it could be.


@rick-github commented on GitHub (Sep 16, 2025):

> So for the template basically start with:

Neither of the linked templates start with this.


@rick-github commented on GitHub (Nov 20, 2025):

https://ollama.com/library/cogito-2.1

Reference: github-starred/ollama#33659