[GH-ISSUE #11771] Ollama Turbo architecture is dumb #69859

Closed
opened 2026-05-04 19:35:49 -05:00 by GiteaMirror · 1 comment

Originally created by @LivioGama on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11771

That's actually an epic fail of an implementation.

So basically, you pay $20 and can use Turbo in the new chat UI, in Open WebUI, and in the terminal. The three most useless places in the world. The remote models are not exposed locally on the computer. It's fine if the intent is for people to build new custom apps, but that deserved a better warning, given the hundreds of apps, including IDE assistants, that support Ollama. I worked on https://github.com/LivioGama/gpt-oss-120b-MAX

I was just disappointed, but it will improve with time.


@jmorganca commented on GitHub (Aug 7, 2025):

@LivioGama thanks for the feedback and sorry Turbo didn't live up to your expectations. I shot you an email 😊

<!-- gh-comment-id:3162711964 -->
Reference: github-starred/ollama#69859