[GH-ISSUE #1145] Food for thought use cases: Github Actions :octocat: #26339

Closed
opened 2026-04-22 02:33:40 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @marcellodesales on GitHub (Nov 15, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1145

I've been working on implementing DevSecOps platforms, and I think I came up with a GitHub Action that can execute the models... Obviously:

  • You must have GitHub Actions runners powered by GPUs
  • You can implement pretty much anything with the model, given that you have the file system and the containers to implement the business logic
    • I have embedded the server and the model pulls in caches to try to speed up the process
  • The intent is to use any model for source code, software engineering, cloud, etc.

Just food for thought...
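To make the idea above concrete, here is a rough sketch of what such a workflow could look like. This is not the author's published action; the runner labels, model name, cache key, and paths are all illustrative assumptions, and it presumes `ollama` is already installed on the runner:

```yaml
# Hypothetical workflow sketch: run an Ollama model on a self-hosted GPU runner.
# All names (runner labels, model, cache key) are illustrative assumptions.
name: run-model
on: workflow_dispatch

jobs:
  prompt:
    runs-on: [self-hosted, gpu]        # requires a GPU-backed runner
    steps:
      - name: Cache pulled models
        uses: actions/cache@v4
        with:
          path: ~/.ollama/models       # default location of pulled model blobs
          key: ollama-models-${{ runner.os }}-codellama

      - name: Start server and pull model
        run: |
          ollama serve &               # assumes ollama is installed on the runner
          sleep 3                      # crude wait for the server to come up
          ollama pull codellama        # no-op if restored from cache

      - name: Run prompt
        run: ollama run codellama "Summarize this repository's Dockerfile"
```

With `actions/cache` restoring `~/.ollama/models`, subsequent runs skip the expensive pull, which matches the caching approach described in the bullet list.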

NOTES: I still have the problems described in #676 and #1072. For this reason, I build a data container with the models (Docker image digests) and push it to a Docker registry, so that I can bypass the 403 with a cached version of the models...
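The data-container workaround in the note could be sketched as the Dockerfile below. This is an assumption about the approach, not the author's actual setup; the base image, paths, and registry name are hypothetical, and `~/.ollama/models` is where `ollama` stores pulled blobs by default:

```dockerfile
# Hypothetical data-container: bake pre-pulled model blobs into an image
# so CI can restore them from a private registry instead of hitting the
# upstream 403 (#676 / #1072). Build context must contain a ./models dir
# copied from ~/.ollama/models on a machine where the pull succeeded.
FROM busybox
COPY models/ /models/
```

Usage would then be something like `docker build -t registry.example.com/ollama-models:codellama .` followed by `docker push`, and in CI, copying `/models` out of the image into `~/.ollama/models` before starting the server.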

🧠 Select a Model

![Screenshot 2023-11-15 at 3 31 15 PM](https://github.com/jmorganca/ollama/assets/131457/3194554d-4ea6-48eb-a678-6a76665bcab3)

![Screenshot 2023-11-15 at 1 10 43 PM](https://github.com/jmorganca/ollama/assets/131457/7e0d28e5-9eb5-42bc-89c8-7220d9fbe944)

🏃‍♂️ Running

![Screenshot 2023-11-15 at 3 30 47 PM](https://github.com/jmorganca/ollama/assets/131457/1f0b6dc8-251c-4628-af98-1eb9fd57cd6d)

🔢 Results

![Screenshot 2023-11-15 at 3 31 31 PM](https://github.com/jmorganca/ollama/assets/131457/c48a0711-0d5b-4e5a-9916-535628741a0b)

Author
Owner

@jmorganca commented on GitHub (Feb 20, 2024):

This is super cool. I'm excited to try it with the new GitHub Actions GPU runners when they become available. I'll close this if that's okay, since I don't know whether there are any action items to address.


Reference: github-starred/ollama#26339