[GH-ISSUE #7017] amd-llama-135M #50959

Open
opened 2026-04-28 17:41:46 -05:00 by GiteaMirror · 1 comment

Originally created by @olumolu on GitHub (Sep 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7017

https://huggingface.co/amd/AMD-Llama-135m
Fully open source, with an open-source license and an open-source dataset.
GiteaMirror added the model label 2026-04-28 17:41:46 -05:00

@unseenmars commented on GitHub (Sep 28, 2024):

I am one of the contributors to [Nexa SDK](https://github.com/NexaAI/nexa-sdk), and it supports [amd-llama-135M](https://nexaai.com/AMD/AMD-Llama-135m/gguf-fp16/readme).

If you want to try the model quickly, install the SDK from [here](https://nexaai.com/download-sdk), then simply run `nexa run AMD-Llama-135m:fp16`.

You can also pull and run the GGUF version of this model directly from Hugging Face with `nexa run -hf QuantFactory/AMD-Llama-135m-GGUF`.

<!-- gh-comment-id:2381005606 -->

Reference: github-starred/ollama#50959