[GH-ISSUE #3184] Add Video-LLaVA #27721

Open
opened 2026-04-22 05:15:35 -05:00 by GiteaMirror · 27 comments

Originally created by @Anas20001 on GitHub (Mar 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3184

### What model would you like?

Add Video-LLaVA so it can then be used easily:
https://github.com/PKU-YuanGroup/Video-LLaVA/tree/main

GiteaMirror added the feature request label 2026-04-22 05:15:35 -05:00

@RaulKite commented on GitHub (Mar 19, 2024):

Is it written somewhere that there won't be a way to run this kind of multimodal model in ollama?


@Anas20001 commented on GitHub (Mar 20, 2024):

> Is it written somewhere that there won't be a way to run this kind of multimodal model in ollama?

I think it's doable. I will try to add the model to ollama in the next few days; it's my first contribution to ollama, and I hope to make it part of my GSoC project.


@urinieto commented on GitHub (Apr 19, 2024):

Are there any updates on this?

Also, does `ollama` currently support video conditioning in any of its models? Thanks!


@Anas20001 commented on GitHub (Apr 20, 2024):

Hi @urinieto

I have worked on this issue over the past few days; here are the steps I took.

I followed the steps in [llama.cpp/examples/llava](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md) after getting the model repo:


```bash
[ec2-user@ip-172-31-87-215 ~]$ ls downloads/LanguageBind_Video-LLaVA-7B/
README.md               model-00001-of-00002.safetensors  pytorch_model-00002-of-00002.bin
config.json             model-00002-of-00002.safetensors  pytorch_model.bin.index.json
generation_config.json  model.safetensors.index.json      special_tokens_map.json
llava.projector         pytorch_model-00001-of-00002.bin  tokenizer.model
tokenizer_config.json
```

Running the following llava-surgery command:


```bash
[Anas llama.cpp]$ python3 ./examples/llava/llava-surgery-v2.py -m ../downloads/LanguageBind_Video-LLaVA-7B/
```

It ran successfully. It should create both `llava.projector` and `llava.clip`, but it only creates `llava.projector` (see the workaround sketch after the trimmed output below).


```bash
model.video_tower.video_tower.encoder.layers.9.temporal_attn.q_proj.bias : torch.Size([1024])
model.video_tower.video_tower.encoder.layers.9.temporal_attn.q_proj.weight : torch.Size([1024, 1024])
model.video_tower.video_tower.encoder.layers.9.temporal_attn.v_proj.bias : torch.Size([1024])
model.video_tower.video_tower.encoder.layers.9.temporal_attn.v_proj.weight : torch.Size([1024, 1024])
model.video_tower.video_tower.encoder.layers.9.temporal_embedding : torch.Size([1, 8, 1024])
model.video_tower.video_tower.encoder.layers.9.temporal_layer_norm1.bias : torch.Size([1024])
model.video_tower.video_tower.encoder.layers.9.temporal_layer_norm1.weight : torch.Size([1024])
model.video_tower.video_tower.post_layernorm.bias : torch.Size([1024])
model.video_tower.video_tower.post_layernorm.weight : torch.Size([1024])
model.video_tower.video_tower.pre_layrnorm.bias : torch.Size([1024])
model.video_tower.video_tower.pre_layrnorm.weight : torch.Size([1024])
Found 4 tensors to extract.
Found additional 0 tensors to extract.
Done!
Now you can convert ../downloads/LanguageBind_Video-LLaVA-7B/ to a a regular LLaMA GGUF file.
Also, use ../downloads/LanguageBind_Video-LLaVA-7B//llava.projector to prepare a llava-encoder.gguf file.
```

I have trimmed the output from running the command.
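Since only `llava.projector` was produced, one plausible cause is that the surgery script keys on a `vision_tower` tensor prefix, while Video-LLaVA names its encoder `video_tower` (visible in the tensor names above). A minimal, unverified sketch of extracting those tensors by hand, assuming the downstream CLIP converter accepts the renamed keys:

```python
import glob

import torch
from safetensors.torch import load_file

# Hypothetical workaround: collect the encoder tensors that
# llava-surgery-v2.py skipped, renaming the Video-LLaVA prefix
# "model.video_tower.video_tower." to the usual "vision_model." one.
model_dir = "../downloads/LanguageBind_Video-LLaVA-7B"
clip_tensors = {}
for shard in sorted(glob.glob(f"{model_dir}/model-*.safetensors")):
    for name, tensor in load_file(shard).items():
        if "video_tower" in name:
            new_name = name.replace("model.video_tower.video_tower.", "vision_model.")
            clip_tensors[new_name] = tensor

# Save in the same torch format the surgery script uses for llava.projector.
torch.save(clip_tensors, f"{model_dir}/llava.clip")
print(f"Saved {len(clip_tensors)} tensors to llava.clip")
```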

When running the following command:

```bash
sudo python3 ./examples/llava/convert-image-encoder-to-gguf.py -m ../downloads/LanguageBind_Video-LLaVA-7B/openai --llava-projector ../downloads/LanguageBind_Video-LLaVA-7B/openai/llava.projector --output-dir ../downloads/LanguageBind_Video-LLaVA-7B/openai/
```

it ran successfully and created the `mmproj-model-f16.gguf` file:

```bash
v.blk.22.ffn_up.weight - f16 - shape = (1024, 4096)
  Converting to float32
v.blk.22.ffn_up.bias - f32 - shape = (1024,)
  Converting to float32
v.blk.22.ln2.weight - f32 - shape = (1024,)
  Converting to float32
v.blk.22.ln2.bias - f32 - shape = (1024,)
skipping parameter: vision_model.post_layernorm.weight
skipping parameter: vision_model.post_layernorm.bias
skipping parameter: visual_projection.weight
skipping parameter: text_projection.weight
Done. Output file: ../downloads/LanguageBind_Video-LLaVA-7B/openai/mmproj-model-f16.gguf
```

Then I converted the regular safetensors, skipping unknown tensors:

```bash
sudo python3 ./convert.py ../downloads/LanguageBind_Video-LLaVA-7B --skip-unknown
.
.
.
[286/291] Writing tensor blk.31.ffn_norm.weight                 | size   4096           | type F32  | T+ 183
[287/291] Writing tensor blk.31.attn_k.weight                   | size   4096 x   4096  | type F32  | T+ 183
[288/291] Writing tensor blk.31.attn_output.weight              | size   4096 x   4096  | type F32  | T+ 183
[289/291] Writing tensor blk.31.attn_q.weight                   | size   4096 x   4096  | type F32  | T+ 184
[290/291] Writing tensor blk.31.attn_v.weight                   | size   4096 x   4096  | type F32  | T+ 184
[291/291] Writing tensor output_norm.weight                     | size   4096           | type F32  | T+ 185
Wrote ../downloads/LanguageBind_Video-LLaVA-7B/ggml-model-f32.gguf
```

I am now looking into how to build the Modelfile based on the resulting asset files.
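For reference, the `ggml-model-f32-quantized.gguf` used in a later comment would come from llama.cpp's quantization step; a hedged sketch (the binary is named `quantize` in older llama.cpp builds and `llama-quantize` in newer ones, and `q4_0` is an arbitrary target):

```bash
# Quantize the f32 GGUF produced by convert.py; target type chosen arbitrarily.
./quantize ../downloads/LanguageBind_Video-LLaVA-7B/ggml-model-f32.gguf \
           ../downloads/LanguageBind_Video-LLaVA-7B/ggml-model-f32-quantized.gguf q4_0
```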


@RaulKite commented on GitHub (Apr 24, 2024):

Any progress on this, or any way to lend a hand? I'm really interested in an easy way of running Video-LLaVA with ollama.

Thanks


@Anas20001 commented on GitHub (Apr 26, 2024):

Hi @RaulKite

I have experimented with different variants of the Modelfile; below is the latest one:


```Modelfile
FROM ggml-model-f32-quantized.gguf

TEMPLATE """{{ .System }} USER: {{ .Prompt }} ASSSISTANT:"""

PARAMETER stop "</s>"
PARAMETER stop "USER:"
PARAMETER num_ctx 4096
```



I have tried to fix the typo in "ASSSISTANT" and to add the projector as `ADAPTER llava.projector`, but when I re-create the model using `ollama create anas/video-llava:test -f Modelfile`, it returns:

```bash
transferring model data
creating model layer
creating template layer
creating adapter layer
Error: invalid file magic
```

Here is the currently pushed model version: https://ollama.com/anas/video-llava
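A hedged guess at a next variant: `llava.projector` is a raw torch file rather than a GGUF, which would explain `ADAPTER` rejecting it with `invalid file magic`; pointing a second `FROM` at the converted `mmproj-model-f16.gguf` instead is one thing to try (unverified sketch, file names from the earlier conversion steps):

```Modelfile
# Unverified sketch: reference the GGUF projector produced by
# convert-image-encoder-to-gguf.py rather than the raw llava.projector
# torch file, which is not GGUF and trips the file-magic check.
FROM ggml-model-f32-quantized.gguf
FROM mmproj-model-f16.gguf

TEMPLATE """{{ .System }} USER: {{ .Prompt }} ASSISTANT:"""

PARAMETER stop "</s>"
PARAMETER stop "USER:"
PARAMETER num_ctx 4096
```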


@motin commented on GitHub (Apr 26, 2024):

@Anas20001 I am probably doing something silly (I am new to ollama), but here is some output from ollama using your published model version:

```
$ ollama run anas/video-llava:test
pulling manifest
pulling 5008bad29639... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.1 GB
pulling 3139aa20c675... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   45 B
pulling 17b7e63fbe77... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   51 B
pulling 6b53bb12c892... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  411 B
verifying sha256 digest
writing manifest
removing any unused layers
success

>>> What is in this image? /Users/motin/model-test-files/pexels-pixabay-104827.jpg
 The image features a large white bird flying over an ocean, with the sun shining on its wings. It appears to be soaring gracefully above the water as it enjoys its flight
through the sky.

>>> What is in this video? /Users/motin/model-test-files/855282-hd_1280_720_25fps.mp4
 The video shows a woman holding her dog while sitting on a beach, and they both seem to be enjoying their time together in the sand.
```

Using these test files:

![pexels-pixabay-104827](https://github.com/ollama/ollama/assets/793037/12444c1d-e979-42af-950d-cc14340a5272)

https://github.com/ollama/ollama/assets/793037/e79384d6-4ee5-4261-9143-831bc8644374

Compare this with the output from llava:

```
$ ollama run llava
pulling manifest
pulling 170370233dd5... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.1 GB
pulling 72d6f08a42f6... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 624 MB
pulling 43070e2d4e53... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB
pulling c43332387573... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   67 B
pulling ed11eda7790d... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   30 B
pulling 7c658f9561e5... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  564 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> What is in this image? /Users/motin/model-test-files/pexels-pixabay-104827.jpg
Added image '/Users/motin/model-test-files/pexels-pixabay-104827.jpg'
 The image shows a gray and white cat with yellow eyes. The cat is looking directly at the camera, giving a gentle gaze. The background is plain and light-colored, which puts
the focus on the cat. There's also a watermark or overlay that appears to be a digital glitch, creating an error message graphic across the image.

>>> What is in this video? /Users/motin/model-test-files/855282-hd_1280_720_25fps.mp4
 The image you've provided appears to be a photograph, not a video file. It shows a cat with a soft expression and is focused on the cat itself, with no indication of motion
or playback controls that are typically found in video files.
```

@Anas20001 commented on GitHub (Apr 26, 2024):

Hey @motin,

you are right, the model is not working as expected for now. Some changes need to be made in the model's Modelfile, but I faced an issue while trying to update it, as mentioned in my previous message. I will look into it today.


@swaynos commented on GitHub (Apr 28, 2024):

I haven't been able to figure out how to load the image encoder (mmproj) and the model (GGUF) from the same Modelfile. In my experimentation, even when specifying both using two `FROM` statements in the Modelfile, it won't generate output as expected, either failing to recognize the uploaded image or failing to produce a response at all.

It's supposed to work according to this PR:
https://github.com/ollama/ollama/pull/1308


@Anas20001 commented on GitHub (Apr 30, 2024):

I have fixed the `Error: invalid file magic`, and I have tried using a double `FROM` as @swaynos described. It succeeds in recognizing images but fails to produce the right response. I am experimenting with the Modelfile, but any suggestions would be appreciated.

here is the model that recognizes the image: https://ollama.com/anas/video-llava:test-v2


@giannisanni commented on GitHub (May 15, 2024):

Hi, is there any update on this?


@RaulKite commented on GitHub (May 16, 2024):

Today, Video-LLaVA was included in Hugging Face transformers:

https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf
https://huggingface.co/docs/transformers/main/en/model_doc/video_llava
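For a quick baseline outside ollama, a minimal sketch of the transformers route (the frame-sampling helper and the local `sample.mp4` path are illustrative assumptions; the linked model doc has the canonical example):

```python
import av
import numpy as np
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor

def sample_frames(path: str, num_frames: int = 8) -> np.ndarray:
    """Uniformly sample num_frames RGB frames from a video file."""
    container = av.open(path)
    total = container.streams.video[0].frames
    keep = set(np.linspace(0, total - 1, num_frames).astype(int).tolist())
    frames = [frame.to_ndarray(format="rgb24")
              for i, frame in enumerate(container.decode(video=0)) if i in keep]
    return np.stack(frames)  # (num_frames, height, width, 3)

model_id = "LanguageBind/Video-LLaVA-7B-hf"
processor = VideoLlavaProcessor.from_pretrained(model_id)
model = VideoLlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

# Prompt format per the model card: a <video> placeholder plus USER/ASSISTANT turns.
prompt = "USER: <video>\nWhat is happening in this video? ASSISTANT:"
clip = sample_frames("sample.mp4")  # assumed local test file

inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=80)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```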


@DuckyBlender commented on GitHub (May 29, 2024):

Any update?


@Anas20001 commented on GitHub (Jun 10, 2024):

I am having a hard time getting the model to recognize videos correctly; any suggestions would be appreciated.


@zhanwenchen commented on GitHub (Jun 13, 2024):

@Anas20001 Can you please push your current progress into a dev branch on your fork?


@Anas20001 commented on GitHub (Jun 17, 2024):

Hey @zhanwenchen, I have uploaded the current progress files to the following Hugging Face model card: [AnasMohamed/video-llava](https://huggingface.co/AnasMohamed/video-llava/tree/main)


@silasalves commented on GitHub (Jul 26, 2024):

I see someone else ([ManishThota](https://ollama.com/ManishThota)) is also trying to get LLaVA video working, and they are also having trouble:

> Readme
>
> Not working as expected!!
>
> https://ollama.com/ManishThota/llava_next_video


@manishkumart commented on GitHub (Jul 30, 2024):

The issue arises because Ollama can only process safetensors for the specific architectures listed below. However, I have managed to bypass this limitation, as we are working with the LLaVA-MistralForCausalLM and LLaVA-LlamaForCausalLM architectures:

- LlamaForCausalLM
- MistralForCausalLM
- GemmaForCausalLM

Sometimes it processes the video; in other cases, it does not.

The issue still persists; I need to explore more after the update that integrated the video language model into HF transformers.

I have a detailed Medium post explaining how to convert a quantized (GGUF) model and integrate it into Ollama with custom Modelfiles:

https://medium.com/@manish.thota1999/an-experiment-to-unlock-ollamas-potential-video-question-answering-e2b4d1bfb5ba


@silasalves commented on GitHub (Sep 26, 2024):

@manishkumart @Anas20001 Any luck with this issue?


@ixn3rd3mxn commented on GitHub (Dec 24, 2024):

@Anas20001 Is it available to use?

![{19027F74-0BC9-4805-A2E9-8E3473720F47}](https://github.com/user-attachments/assets/5710caff-38ae-49fe-8c39-91cc1f5c7921)


@Anas20001 commented on GitHub (Dec 26, 2024):

Hi @ixn3rd3mxn, unfortunately the model sometimes processes the video and other times it does not, for now.


@ixn3rd3mxn commented on GitHub (Dec 27, 2024):

Hello again @Anas20001, I have another question. I saw your work in [AnasMohamed/video-llava](https://huggingface.co/AnasMohamed/video-llava); can you share the basic usage?


@Anas20001 commented on GitHub (Dec 27, 2024):

Sure @ixn3rd3mxn. This HF repo has all the files from the original model card, plus GGUF files in two precisions (f16 and f32) and a Modelfile to experiment with different Modelfile structures.


@ixn3rd3mxn commented on GitHub (Jan 17, 2025):

> Sure @ixn3rd3mxn. This HF repo has all the files from the original model card, plus GGUF files in two precisions (f16 and f32) and a Modelfile to experiment with different Modelfile structures.

@Anas20001
Can I take your Video-LLaVA GGUF model from https://huggingface.co/AnasMohamed/video-llava and use it in Ollama? If it's possible, is it difficult? Can you teach me how to do it?


@Anas20001 commented on GitHub (Jan 24, 2025):

Sure @ixn3rd3mxn, you can get it onto your machine, start experimenting with it using different Modelfile configurations, and post it to the ollama model hub if you find it useful. Here are a few resources that could assist you.

Here is a guide from HF on how to run a GGUF model directly:
https://huggingface.co/docs/hub/ollama

If you want to customize your Modelfile, look over the Ollama Modelfile doc here:
https://github.com/ollama/ollama/blob/main/docs/modelfile.md

And if you want to start from the beginning and quantize the model yourself, you can refer to the Ollama doc here:
https://ollama.readthedocs.io/en/import/#importing-a-gguf-based-model-or-adapter
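A hedged sketch of both routes (the `hf.co/...` form follows the HF doc above and assumes the repo's GGUF is picked up automatically; quant tags and file names may need adjusting):

```bash
# Run a GGUF straight from Hugging Face, per the HF/Ollama integration doc.
ollama run hf.co/AnasMohamed/video-llava

# Or import locally with a minimal Modelfile (file name from the repo is assumed).
echo 'FROM ./ggml-model-f16.gguf' > Modelfile
ollama create my-video-llava -f Modelfile
ollama run my-video-llava
```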


@kaue commented on GitHub (Jan 29, 2025):

Is https://github.com/DAMO-NLP-SG/VideoLLaMA2 supported in ollama?


@ixn3rd3mxn commented on GitHub (Jan 30, 2025):

> Is https://github.com/DAMO-NLP-SG/VideoLLaMA2 supported in ollama?

@kaue Which is better, VideoLLaVA or VideoLLaMA2? And can VideoLLaMA2 detect audio in video?
