Released 2025-10-09

What's Changed
- Thinking models now support structured outputs when using the `/api/chat` API (see the sketch after this list)
- Ollama's app now waits until Ollama is running before allowing a conversation to be started
- Fixed issue where `"think": false` would show an error instead of being silently ignored
- Fixed `deepseek-r1` output issues
- macOS 12 Monterey and macOS 13 Ventura are no longer supported
- AMD gfx900 and gfx906 (MI50, MI60, etc.) GPUs are no longer supported via ROCm. We're working to support these GPUs via Vulkan in a future release.
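A minimal sketch of the structured-outputs change above, assuming a local Ollama server on its default port (http://localhost:11434) and a pulled thinking-capable model such as `deepseek-r1`; the schema, prompt, and field names shown here are illustrative, not part of this release:

```python
# Sketch: request a structured (JSON-schema constrained) answer from a thinking
# model via the /api/chat endpoint. Assumes the "requests" package is installed
# and that "deepseek-r1" (or another thinking model) has been pulled locally.
import json
import requests

# Illustrative schema for the final answer; any JSON schema can be supplied.
schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer", "confidence"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1",  # assumed thinking-capable model
        "messages": [
            {"role": "user", "content": "Is the sky blue? Include a confidence score."}
        ],
        "think": True,      # let the model reason before answering
        "format": schema,   # constrain the final answer to the schema above
        "stream": False,
    },
)
resp.raise_for_status()
message = resp.json()["message"]

print(message.get("thinking", ""))      # reasoning text, if the model returned any
print(json.loads(message["content"]))   # structured output matching the schema
```

In this sketch the reasoning text is returned separately in `message.thinking`, while `message.content` is constrained to the supplied schema; adjust the model name and schema to your own setup.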
New Contributors
- @shengxinjing made their first contribution in https://github.com/ollama/ollama/pull/12415
Full Changelog: https://github.com/ollama/ollama/compare/v0.12.4...v0.12.5-rc0