v0.12.8
Released 2025-10-30

What's Changed
- `qwen3-vl` performance improvements, including flash attention support by default
- `qwen3-vl` will now output less leading whitespace in the response when thinking
- Fixed issue where `deepseek-v3.1` thinking could not be disabled in Ollama's new app (see the API sketch after this list)
- Fixed issue where `qwen3-vl` would fail to interpret images with transparent backgrounds (see the image-input sketch after this list)
- Ollama will now stop running a model before removing it via `ollama rm`
- Fixed issue where prompt processing would be slower on Ollama's engine
- Ignore unsupported iGPUs when doing device discovery on Windows
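
For the `deepseek-v3.1` item above, here is a minimal sketch of turning thinking off through the `think` field of the chat API. It is an illustration, not part of the release itself, and assumes a local Ollama server on the default port 11434 with the `deepseek-v3.1` model already pulled.

```python
# Sketch: request a deepseek-v3.1 reply with thinking disabled via /api/chat.
# Assumes a local Ollama server at the default address; prompt text is a placeholder.
import json
import urllib.request

payload = {
    "model": "deepseek-v3.1",
    "messages": [{"role": "user", "content": "Summarize this release in one sentence."}],
    "think": False,   # ask for a reply without a thinking trace
    "stream": False,  # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["message"]["content"])
```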
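
For the `qwen3-vl` image fix, a similar sketch sends a PNG (for example, one with a transparent background) as a base64-encoded entry in the message's `images` field. The file name and prompt are placeholders.

```python
# Sketch: send an image to qwen3-vl via /api/chat.
# The API expects images as base64-encoded strings in the message's "images" list.
import base64
import json
import urllib.request

with open("logo_transparent.png", "rb") as f:  # hypothetical local file
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "qwen3-vl",
    "messages": [
        {
            "role": "user",
            "content": "Describe this image.",
            "images": [image_b64],
        }
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```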
New Contributors
- @athshh made their first contribution in https://github.com/ollama/ollama/pull/12822
Full Changelog: https://github.com/ollama/ollama/compare/v0.12.7...v0.12.8