[GH-ISSUE #243] Falcon models #62141

Closed
opened 2026-05-03 07:39:24 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @jkleckner on GitHub (Jul 31, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/243

I'm hearing lots of great things about the Falcon models. They are Apache 2.0 licensed [1], so they should be amenable to publishing in your repo. The 40b model would probably not fit into 64GB of memory, but some people are beginning to get larger-memory Macs, and Macs use a unified memory model.

[1] https://huggingface.co/tiiuae/falcon-40b#why-use-falcon-40b
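As a rough sanity check on the 64GB claim, a weights-only estimate (ignoring KV cache and runtime overhead; the byte-per-parameter figures are common precision assumptions, not measurements of any specific Falcon build) looks like this:

```python
def est_gb(params_billion, bytes_per_param):
    """Rough weights-only memory estimate in GB (ignores KV cache and overhead)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# falcon-40b at common precisions
fp16 = est_gb(40, 2)    # 16-bit floats: ~80 GB, clearly over 64GB
q4 = est_gb(40, 0.5)    # 4-bit quantization: ~20 GB, fits comfortably
print(fp16, q4)
```

So the full-precision 40b weights alone exceed 64GB of unified memory, while a 4-bit quantized build would fit with room to spare.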

GiteaMirror added the model, feature request labels 2026-05-03 07:39:25 -05:00
Author
Owner

@mchiang0610 commented on GitHub (Oct 1, 2023):

Sorry for not getting back to this sooner, but we do have Falcon support:

https://ollama.ai/library/falcon


Reference: github-starred/ollama#62141