[GH-ISSUE #6958] Molmo by Allen AI support #4405

Open
opened 2026-04-12 15:20:36 -05:00 by GiteaMirror · 32 comments
Owner

Originally created by @olumolu on GitHub (Sep 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6958

![GYVKif8XoAAFLAw](https://github.com/user-attachments/assets/74dead10-7370-4360-a326-41e40446f5b0)
https://huggingface.co/allenai/Molmo-7B-D-0924
https://huggingface.co/allenai/Molmo-72B-0924
These models are really good, have potential, and are fully open-source. Please add support for them.

thanks.

GiteaMirror added the model label 2026-04-12 15:20:36 -05:00

@AlgorithmicKing737 commented on GitHub (Sep 26, 2024):

WE REALLY REALLY NEED THAT MODEL!!!!!!


@olumolu commented on GitHub (Sep 26, 2024):

https://github.com/ggerganov/llama.cpp/issues/9645


@3unnycheung commented on GitHub (Sep 26, 2024):

yes


@Streamweaver commented on GitHub (Sep 26, 2024):

Fully agree.


@someshfengde commented on GitHub (Sep 27, 2024):

yess !!


@Owen718 commented on GitHub (Sep 27, 2024):

Fully agree +1. REALLY NEED.


@ParthaPRay commented on GitHub (Sep 27, 2024):

Please add support for the 1B model for edge deployment.


@someshfengde commented on GitHub (Sep 27, 2024):

Hi @ParthaPRay, I don't think we will be able to run that model on the edge. I tried running it on an L4 GPU with no luck, though after quantization I think I might be able to run it on an L4.

From the documentation:

> MolmoE-1B is a multimodal Mixture-of-Experts LLM with 1.5B active and 7.2B total parameters

https://huggingface.co/allenai/MolmoE-1B-0924
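A back-of-the-envelope sketch of why quantization changes the picture for the 7.2B-total-parameter MolmoE (the bits-per-weight figures are approximations, not benchmarks; Q4_0 works out to roughly 4.5 bits per weight once per-block scales are counted, and runtime overhead such as the KV cache and vision tower is ignored):

```python
# Rough VRAM estimate for MolmoE-1B's weights (7.2B total params, per
# the model card).  Bits-per-weight: FP16 = 16; Q4_0 ~ 4.5 (blocks of
# 32 four-bit weights plus one 16-bit scale each).
def weight_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in GiB."""
    return params * bits_per_weight / 8 / 1024**3

total_params = 7.2e9
print(f"FP16 : {weight_gib(total_params, 16):.1f} GiB")   # -> FP16 : 13.4 GiB
print(f"Q4_0 : {weight_gib(total_params, 4.5):.1f} GiB")  # -> Q4_0 : 3.8 GiB
```

At ~4 GiB of weights, a Q4 build would plausibly fit on a 24 GB L4, which matches the intuition in the comment above.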


@t3chn0m4g3 commented on GitHub (Sep 27, 2024):

100% +1


@ParthaPRay commented on GitHub (Sep 27, 2024):

Dear @Streamweaver, I think it is doable. I am currently trying quantized builds of various small LLMs on edge devices and have found success. If you can provide support for the 1B MolmoE with Q4_M or Q4_0, that would suffice.


@ashisdeveloper commented on GitHub (Sep 27, 2024):

10000% yes


@arthurwolf commented on GitHub (Sep 30, 2024):

Please.


@hudijiang commented on GitHub (Oct 1, 2024):

yes


@Amazon90 commented on GitHub (Oct 1, 2024):

PLEASE PLEASE PLEASE


@olumolu commented on GitHub (Oct 2, 2024):

What is the status?


@maxruby commented on GitHub (Oct 2, 2024):

Thank you ollama Team for your great work :)

Is this task dependent on resolution of https://github.com/ggerganov/llama.cpp/issues/9645?
Any estimate on how long it might take to support Molmo-72B-0924 via ollama?


@ParthaPRay commented on GitHub (Oct 2, 2024):

Help on molmo 1B


@ParthaPRay commented on GitHub (Oct 2, 2024):

Include Molmo 1B via ollama. It helps for edge deployment.


@sfdkiaei commented on GitHub (Oct 8, 2024):

and https://huggingface.co/allenai/Molmo-7B-D-0924 please.


@I-I-IT commented on GitHub (Oct 20, 2024):

Since Allen AI has not made GGUF weights available for Molmo, it is difficult to include it in ollama, or any other AI client for that matter.


@arthurwolf commented on GitHub (Oct 20, 2024):

> Why ollama does not support proper open-source models instead of supporting so called opensource llama meta or Microsoft models.

You're talking to volunteers spending their free time doing something nice for you, for free, instead of spending time with their friends, or family, or making money, or doing something fun.

Maybe calm down with the attitude? How about you do it?


@t3chn0m4g3 commented on GitHub (Oct 28, 2024):

As @I-I-IT stated, Molmo does not come with GGUF weights, and in the meantime ollama received the awesome update of loading compatible models directly from HF.

At this point, approaching Allen AI directly about shipping GGUF weights with Molmo is probably the best way to move forward and close this issue.
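For reference, the direct-load path mentioned above pulls GGUF models straight from Hugging Face by repository name. A sketch of what this would look like once Molmo GGUF weights exist (the repository name below is a placeholder for a hypothetical future upload, not a real repo):

```shell
# Pull and run a GGUF model directly from Hugging Face.
# hf.co/<user>/<repo> must contain GGUF files; this repo name is a
# placeholder for a future Molmo GGUF upload.
ollama run hf.co/allenai/Molmo-7B-D-0924-GGUF

# A specific quantization can be selected with a tag:
ollama run hf.co/allenai/Molmo-7B-D-0924-GGUF:Q4_K_M
```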


@olumolu commented on GitHub (Oct 28, 2024):

> As @I-I-IT stated, Molmo does not come with gguf weights and in the meantime ollama received the awesome update of directly loading compatible models from HF.
>
> At this point approaching Allen AI directly to include gguf with Molmo is probably the best way to go forward and close this issue.

Media inquiries: press@allenai.org

Non-media inquiries: info@allenai.org

Semantic Scholar support: feedback@semanticscholar.org

https://github.com/allenai


@I-I-IT commented on GitHub (Oct 29, 2024):

They specifically say in the [press release](https://www.businesswire.com/news/home/20240925326133/en/Introducing-Molmo-A-Family-of-State-of-the-Art-Open-Multimodal-Models) that weights will be released later.

"Molmo was designed and built in the open and Ai2 will be releasing all model weights, captioning and fine-tuning data, and source code in the near future. Select model weights, inference code, and demo are available starting today. "


@I-I-IT commented on GitHub (Oct 30, 2024):

> https://huggingface.co/allenai/Molmo-7B-D-0924

From your link

"This checkpoint is a preview of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility."


@PadmajaVaishnavi commented on GitHub (Nov 12, 2024):

Hi, I am trying to run the "allenai/Molmo-7B-D-0924" VLM, but I am getting an error while running ICL tasks.

error:
'MolmoProcessor' object is not callable. How can I rectify this error?
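For anyone hitting this outside ollama: the Molmo model card loads both the processor and the model with `trust_remote_code=True`, so that the repository's custom preprocessing code (`preprocessing_molmo.py`) is actually used; loading without it, or with an incompatible transformers version, is one plausible cause of a processor failing like this. A minimal sketch following the model card's pattern (not a guaranteed fix):

```python
# Sketch following the allenai/Molmo-7B-D-0924 model card.
# trust_remote_code=True makes transformers load the repo's custom
# processor/model classes instead of built-in ones.
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "allenai/Molmo-7B-D-0924"

processor = AutoProcessor.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
```

If the error persists, pinning transformers to the version the model card was tested against is worth trying, since the custom processor code may lag behind library API changes.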


@I-I-IT commented on GitHub (Nov 12, 2024):

> Hi, I am trying to run the "allenai/Molmo-7B-D-0924" VLM, but I am getting an error while running ICL tasks.
>
> error: 'MolmoProcessor' object is not callable. How can I rectify this error?

I also had errors when trying to run it with Oobabooga.


@napa3um commented on GitHub (Nov 18, 2024):

Give it urgently! :)


@I-I-IT commented on GitHub (Nov 22, 2024):

Another model by Allen AI is now supported. https://ollama.com/library/tulu3


@Amazon90 commented on GitHub (Dec 21, 2024):

Compared to Molmo and JoyCaption2, Florence2 Flux Large achieves similar results with lower VRAM consumption.


@olumolu commented on GitHub (Jan 15, 2025):

Olmo2 is now supported:
https://ollama.com/library/olmo2


@PadmajaVaishnavi commented on GitHub (Jan 17, 2025):

What transformers version is required for the model "allenai/Molmo-7B-D-0924"?
Has the model been tested on newer versions of Hugging Face libraries?
Is there an updated version of preprocessing_molmo.py?

Reference: github-starred/ollama#4405