mirror of https://github.com/ollama/ollama.git synced 2025-12-05 18:46:22 -06:00

100 Releases 358 Tags

  • v0.12.10 80d34260ea

    Stable

    GiteaMirror released this 2025-11-05 14:33:01 -06:00 | 110 commits to main since this release

    📅 Originally published on GitHub: Wed, 05 Nov 2025 21:41:21 GMT
    🏷️ Git tag created: Wed, 05 Nov 2025 20:33:01 GMT

    ollama run now works with embedding models

    ollama run can now run embedding models to generate vector embeddings from text:

    ollama run embeddinggemma "Hello world"
    

    Content can also be provided to ollama run via standard input:

    echo "Hello world" | ollama run embeddinggemma
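
    The same vectors are also available over HTTP via Ollama's /api/embed endpoint. A minimal sketch of the request body, assuming a local server on the default port 11434 and that embeddinggemma has already been pulled:

```python
import json

# Request body for POST http://localhost:11434/api/embed
# (sending it requires a running Ollama instance; the response
# carries an "embeddings" array of float vectors)
payload = {
    "model": "embeddinggemma",
    "input": "Hello world",  # a single string or a list of strings
}

body = json.dumps(payload)
print(body)
```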
    

    What's Changed

    • Fixed errors when running qwen3-vl:235b and qwen3-vl:235b-instruct
    • Enabled flash attention for Vulkan (currently requires building from source)
    • Added Vulkan memory detection for Intel GPUs using DXGI+PDH
    • Ollama will now return tool call IDs from the /api/chat API
    • Fixed hanging during CPU discovery
    • Ollama will now show login instructions when switching to a cloud model in interactive mode
    • Fixed reading of stale VRAM data
    • ollama run now works with embedding models
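
    The new tool call IDs let a client match each tool result back to the call that requested it. A sketch of extracting the IDs from a /api/chat response; the exact field layout shown here is an assumption for illustration, not the authoritative schema:

```python
import json

# Hypothetical /api/chat response fragment; field names beyond
# "tool_calls" and "id" are illustrative.
sample = json.loads("""
{
  "message": {
    "role": "assistant",
    "tool_calls": [
      {"id": "call_0", "function": {"name": "get_weather",
                                    "arguments": {"city": "Paris"}}}
    ]
  }
}
""")

# Echo each ID back with the matching tool result message
ids = [call["id"] for call in sample["message"]["tool_calls"]]
print(ids)
```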

    New Contributors

    • @ryanycoleman made their first contribution in https://github.com/ollama/ollama/pull/11740
    • @Rajathbail made their first contribution in https://github.com/ollama/ollama/pull/12929
    • @virajwad made their first contribution in https://github.com/ollama/ollama/pull/12664
    • @AXYZdong made their first contribution in https://github.com/ollama/ollama/pull/8601

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.9...v0.12.10

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      25 MiB
      2025-11-12 08:50:43 -06:00
    • Ollama-darwin.zip
      47 MiB
      2025-11-12 08:50:48 -06:00
    • Ollama.dmg
      47 MiB
      2025-11-12 09:03:29 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 08:52:50 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 08:55:22 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      440 MiB
      2025-11-12 08:56:02 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      348 MiB
      2025-11-12 08:56:33 -06:00
    • ollama-linux-arm64.tgz
      1.9 GiB
      2025-11-12 09:00:05 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 09:05:06 -06:00
    • ollama-windows-amd64-rocm.zip
      354 MiB
      2025-11-12 09:00:44 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 09:03:22 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 09:03:25 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 09:05:07 -06:00
  • v0.12.9 392a270261

    Stable

    GiteaMirror released this 2025-10-31 17:23:28 -05:00 | 128 commits to main since this release

    📅 Originally published on GitHub: Fri, 31 Oct 2025 23:33:13 GMT
    🏷️ Git tag created: Fri, 31 Oct 2025 22:23:28 GMT

    What's Changed

    • Fix performance regression on CPU-only systems

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.8...v0.12.9

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 08:35:12 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 08:35:16 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 08:48:17 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 08:37:04 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 08:40:42 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      440 MiB
      2025-11-12 08:41:23 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      348 MiB
      2025-11-12 08:41:54 -06:00
    • ollama-linux-arm64.tgz
      1.9 GiB
      2025-11-12 08:44:35 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 08:50:39 -06:00
    • ollama-windows-amd64-rocm.zip
      340 MiB
      2025-11-12 08:45:07 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 08:48:06 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 08:48:10 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 08:50:39 -06:00
  • v0.12.8 db973c8fc2

    Stable

    GiteaMirror released this 2025-10-30 17:12:14 -05:00 | 132 commits to main since this release

    📅 Originally published on GitHub: Thu, 30 Oct 2025 23:22:27 GMT
    🏷️ Git tag created: Thu, 30 Oct 2025 22:12:14 GMT

    [Image: Ollama Halloween background]

    What's Changed

    • qwen3-vl performance improvements, including flash attention support by default
    • qwen3-vl will now output less leading whitespace in the response when thinking
    • Fixed issue where deepseek-v3.1 thinking could not be disabled in Ollama's new app
    • Fixed issue where qwen3-vl would fail to interpret images with transparent backgrounds
    • Ollama will now stop running a model before removing it via ollama rm
    • Fixed issue where prompt processing would be slower on Ollama's engine
    • Ignore unsupported iGPUs when doing device discovery on Windows

    New Contributors

    • @athshh made their first contribution in https://github.com/ollama/ollama/pull/12822

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.7...v0.12.8

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 08:20:35 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 08:20:40 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 08:33:21 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 08:22:30 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 08:25:09 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      440 MiB
      2025-11-12 08:25:49 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      348 MiB
      2025-11-12 08:26:24 -06:00
    • ollama-linux-arm64.tgz
      1.9 GiB
      2025-11-12 08:30:01 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 08:35:07 -06:00
    • ollama-windows-amd64-rocm.zip
      340 MiB
      2025-11-12 08:30:36 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 08:33:13 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 08:33:16 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 08:35:08 -06:00
  • v0.12.7 c88647104d

    Stable

    GiteaMirror released this 2025-10-29 13:50:56 -05:00 | 144 commits to main since this release

    📅 Originally published on GitHub: Wed, 29 Oct 2025 02:07:54 GMT
    🏷️ Git tag created: Wed, 29 Oct 2025 18:50:56 GMT


    New models

    • Qwen3-VL: Qwen3-VL is now available in all parameter sizes ranging from 2B to 235B
    • MiniMax-M2: a 230-billion-parameter model built for coding and agentic workflows, available on Ollama's cloud

    Add files and adjust thinking levels in Ollama's new app

    Ollama's new app now includes a way to add one or many files when prompting the model:

    [Screenshot: adding files when prompting a model]

    For better responses, thinking levels can now be adjusted for the gpt-oss models:

    [Screenshot: adjusting thinking levels for gpt-oss models]

    New API documentation

    New API documentation is available for Ollama's API: https://docs.ollama.com/api

    [Screenshot: Ollama API documentation]

    What's Changed

    • Model load failures now include more information on Windows
    • Fixed embedding results being incorrect when running embeddinggemma
    • Fixed gemma3n on Vulkan backend
    • Increased time allocated for ROCm to discover devices
    • Fixed truncation error when generating embeddings
    • Fixed request status code when running cloud models
    • The OpenAI-compatible /v1/embeddings endpoint now supports encoding_format parameter
    • Ollama will now parse tool calls that don't conform to {"name": name, "arguments": args} (thanks @rick-github!)
    • Fixed prompt processing reporting in the llama runner
    • Increased model scheduling speed
    • Fixed issue where FROM <model> would not inherit RENDERER or PARSER commands
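
    The encoding_format parameter follows the OpenAI convention: "float" returns plain float arrays and "base64" returns base64-encoded buffers. A sketch of an OpenAI-compatible request body for /v1/embeddings (the model name is illustrative):

```python
import json

# Request body for POST http://localhost:11434/v1/embeddings
# (requires a running Ollama server; model name is illustrative)
payload = {
    "model": "embeddinggemma",
    "input": "Hello world",
    "encoding_format": "float",  # or "base64"
}
print(json.dumps(payload))
```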

    New Contributors

    • @npardal made their first contribution in https://github.com/ollama/ollama/pull/12715

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.6...v0.12.7

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 08:04:35 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 08:04:39 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 08:18:11 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 08:06:28 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 08:09:57 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      440 MiB
      2025-11-12 08:11:10 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      348 MiB
      2025-11-12 08:11:42 -06:00
    • ollama-linux-arm64.tgz
      1.9 GiB
      2025-11-12 08:14:33 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 08:20:29 -06:00
    • ollama-windows-amd64-rocm.zip
      340 MiB
      2025-11-12 08:15:06 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 08:18:02 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 08:18:05 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 08:20:30 -06:00
  • v0.12.6 1813ff85a0

    Stable

    GiteaMirror released this 2025-10-16 15:07:41 -05:00 | 185 commits to main since this release

    📅 Originally published on GitHub: Wed, 15 Oct 2025 23:02:31 GMT
    🏷️ Git tag created: Thu, 16 Oct 2025 20:07:41 GMT

    What's Changed

    • Ollama's app now supports searching when running DeepSeek-V3.1, Qwen3 and other models that support tool calling.
    • Flash attention is now enabled by default for Gemma 3, improving performance and memory utilization
    • Fixed issue where Ollama would hang while generating responses
    • Fixed issue where qwen3-coder would act in raw mode when using /api/generate or ollama run qwen3-coder <prompt>
    • Fixed qwen3-embedding providing invalid results
    • Ollama will now evict models correctly when num_gpu is set
    • Fixed issue where tool_index with a value of 0 would not be sent to the model

    Experimental Vulkan Support

    Experimental support for Vulkan is now available when building locally from source. This enables additional AMD and Intel GPUs that are not currently supported by Ollama. To build locally, install the Vulkan SDK and set VULKAN_SDK in your environment, then follow the developer instructions. In a future release, Vulkan support will be included in the binary release as well. Please file issues if you run into any problems.
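
    A sketch of the build steps on Linux, assuming the Vulkan SDK is installed under $HOME; exact commands may differ by platform, so treat the repository's developer instructions as authoritative:

```shell
# Point the build at the Vulkan SDK (path is illustrative)
export VULKAN_SDK=$HOME/VulkanSDK/x86_64

# Build the native backends, then the ollama binary
git clone https://github.com/ollama/ollama.git
cd ollama
cmake -B build
cmake --build build
go build .
```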

    New Contributors

    • @yajianggroup made their first contribution in https://github.com/ollama/ollama/pull/12377
    • @inforithmics made their first contribution in https://github.com/ollama/ollama/pull/11835
    • @sbhavani made their first contribution in https://github.com/ollama/ollama/pull/12619

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.5...v0.12.6

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 07:46:58 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 07:47:04 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 08:02:47 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 07:50:02 -06:00
    • ollama-linux-amd64.tgz
      1.8 GiB
      2025-11-12 07:53:48 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      440 MiB
      2025-11-12 07:54:30 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      348 MiB
      2025-11-12 07:55:15 -06:00
    • ollama-linux-arm64.tgz
      1.9 GiB
      2025-11-12 07:58:39 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 08:04:29 -06:00
    • ollama-windows-amd64-rocm.zip
      340 MiB
      2025-11-12 07:59:27 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 08:02:37 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 08:02:42 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 08:04:30 -06:00
  • v0.12.5 3d32249c74

    Stable

    GiteaMirror released this 2025-10-09 21:08:21 -05:00 | 224 commits to main since this release

    📅 Originally published on GitHub: Fri, 10 Oct 2025 16:30:53 GMT
    🏷️ Git tag created: Fri, 10 Oct 2025 02:08:21 GMT

    What's Changed

    • Thinking models now support structured outputs when using the /api/chat API
    • Ollama's app will now wait until Ollama is running before allowing a conversation to start
    • Fixed issue where "think": false would show an error instead of being silently ignored
    • Fixed deepseek-r1 output issues
    • macOS 12 Monterey and macOS 13 Ventura are no longer supported
    • AMD gfx900 and gfx906 (MI50, MI60, etc) GPUs are no longer supported via ROCm. We're working to support these GPUs via Vulkan in a future release.
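
    Structured outputs pair with thinking through the existing format field, which accepts a JSON schema. A sketch of a /api/chat request body (the model name and schema are illustrative):

```python
import json

# /api/chat request combining thinking with a JSON-schema-constrained reply
# (model name and schema are illustrative; requires a running server)
payload = {
    "model": "deepseek-r1",
    "messages": [{"role": "user", "content": "Name a primary color."}],
    "think": True,
    "format": {
        "type": "object",
        "properties": {"color": {"type": "string"}},
        "required": ["color"],
    },
    "stream": False,
}
print(json.dumps(payload))
```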

    New Contributors

    • @shengxinjing made their first contribution in https://github.com/ollama/ollama/pull/12415

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.4...v0.12.5-rc0

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 07:31:15 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 07:31:19 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 07:45:10 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 07:33:12 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 07:36:02 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      434 MiB
      2025-11-12 07:36:51 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      346 MiB
      2025-11-12 07:37:26 -06:00
    • ollama-linux-arm64.tgz
      1.8 GiB
      2025-11-12 07:41:05 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 07:46:53 -06:00
    • ollama-windows-amd64-rocm.zip
      323 MiB
      2025-11-12 07:41:42 -06:00
    • ollama-windows-amd64.zip
      1.7 GiB
      2025-11-12 07:45:03 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 07:45:05 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 07:46:54 -06:00
  • v0.12.4 15e3611d3d

    Stable

    GiteaMirror released this 2025-10-09 12:37:47 -05:00 | 230 commits to main since this release

    📅 Originally published on GitHub: Fri, 03 Oct 2025 16:38:12 GMT
    🏷️ Git tag created: Thu, 09 Oct 2025 17:37:47 GMT

    What's Changed

    • Flash attention is now enabled by default for Qwen 3 and Qwen 3 Coder
    • Fixed minor memory estimation issues when scheduling models on NVIDIA GPUs
    • Fixed an issue where keep_alive in the API would accept different values for the /api/chat and /api/generate endpoints
    • Fixed tool calling rendering with qwen3-coder
    • More reliable and accurate VRAM detection
    • OLLAMA_FLASH_ATTENTION can now be overridden to 0 for models that have flash attention enabled by default
    • macOS 12 Monterey and macOS 13 Ventura are no longer supported
    • Fixed crash where templates were not correctly defined
    • Fixed memory calculations on NVIDIA iGPUs
    • AMD gfx900 and gfx906 (MI50, MI60, etc) GPUs are no longer supported via ROCm. We're working to support these GPUs via Vulkan in a future release.
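
    With the new override, flash attention can be forced off even for models that default to it; the variable is read by the server process:

```shell
# Disable flash attention globally, including for models
# that enable it by default
OLLAMA_FLASH_ATTENTION=0 ollama serve
```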

    New Contributors

    • @Fachep made their first contribution in https://github.com/ollama/ollama/pull/12412

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.3...v0.12.4-rc3

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 07:15:23 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 07:15:28 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 07:29:10 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.2 GiB
      2025-11-12 07:17:46 -06:00
    • ollama-linux-amd64.tgz
      1.7 GiB
      2025-11-12 07:21:10 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      434 MiB
      2025-11-12 07:21:51 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      346 MiB
      2025-11-12 07:22:22 -06:00
    • ollama-linux-arm64.tgz
      1.8 GiB
      2025-11-12 07:25:07 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 07:31:10 -06:00
    • ollama-windows-amd64-rocm.zip
      323 MiB
      2025-11-12 07:25:35 -06:00
    • ollama-windows-amd64.zip
      1.7 GiB
      2025-11-12 07:29:00 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 07:29:03 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 07:31:11 -06:00
  • v0.12.3 b04e46da3e

    Stable

    GiteaMirror released this 2025-09-25 20:30:45 -05:00 | 270 commits to main since this release

    📅 Originally published on GitHub: Fri, 26 Sep 2025 05:08:26 GMT
    🏷️ Git tag created: Fri, 26 Sep 2025 01:30:45 GMT

    New models

    • DeepSeek-V3.1-Terminus: DeepSeek-V3.1-Terminus is a hybrid model that supports both thinking mode and non-thinking mode. It delivers more stable and reliable outputs across benchmarks compared to the previous version:

      Run on Ollama's cloud:

      ollama run deepseek-v3.1:671b-cloud
      

      Run locally (requires 500GB+ of VRAM)

      ollama run deepseek-v3.1
      
    • Kimi-K2-Instruct-0905: Kimi K2-Instruct-0905 is the latest, most capable version of Kimi K2. It is a state-of-the-art mixture-of-experts (MoE) language model, featuring 32 billion activated parameters and a total of 1 trillion parameters.

      ollama run kimi-k2:1t-cloud
      

    What's Changed

    • Fixed issue where tool calls provided as stringified JSON would not be parsed correctly
    • ollama push will now provide a URL to follow to sign in
    • Fixed issues where qwen3-coder would output Unicode characters incorrectly
    • Fixed issue where loading a model with /load would crash

    New Contributors

    • @gr4ceG made their first contribution in https://github.com/ollama/ollama/pull/12385

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.2...v0.12.3

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 07:00:14 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 07:00:25 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 07:13:39 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.1 GiB
      2025-11-12 07:03:04 -06:00
    • ollama-linux-amd64.tgz
      1.8 GiB
      2025-11-12 07:06:09 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      429 MiB
      2025-11-12 07:06:48 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      343 MiB
      2025-11-12 07:07:28 -06:00
    • ollama-linux-arm64.tgz
      1.7 GiB
      2025-11-12 07:10:30 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 07:15:18 -06:00
    • ollama-windows-amd64-rocm.zip
      246 MiB
      2025-11-12 07:10:53 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 07:13:32 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 07:13:34 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 07:15:19 -06:00
  • v0.12.2 2e742544bf

    Stable

    GiteaMirror released this 2025-09-24 13:21:32 -05:00 | 276 commits to main since this release

    📅 Originally published on GitHub: Wed, 24 Sep 2025 21:19:20 GMT
    🏷️ Git tag created: Wed, 24 Sep 2025 18:21:32 GMT

    Web search

    [Image: Ollama web search]

    A new web search API is now available in Ollama. Ollama provides a generous free tier of web searches for individuals to use, and higher rate limits are available via Ollama’s cloud. This web search capability can augment models with the latest information from the web to reduce hallucinations and improve accuracy.
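
    A sketch of a search request body; the endpoint URL, header, and field name shown here are assumptions, so check docs.ollama.com for the authoritative shape. An Ollama API key is required.

```python
import json

# Hypothetical request to Ollama's web search API
# (the endpoint https://ollama.com/api/web_search, bearer-token auth,
# and the "query" field are assumptions, not confirmed by this page)
payload = {"query": "latest Ollama release"}
headers = {"Authorization": "Bearer <OLLAMA_API_KEY>"}  # placeholder token
print(json.dumps(payload))
```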

    What's Changed

    • Models based on Qwen3's architecture, including MoE variants, now run in Ollama's new engine
    • Fixed issue where built-in tools for gpt-oss were not being rendered correctly
    • Support multi-regex pretokenizers in Ollama's new engine
    • Ollama's new engine can now load tensors by matching a prefix or suffix

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.1...v0.12.2

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 06:43:41 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 06:43:46 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.1 GiB
      2025-11-12 06:45:25 -06:00
    • ollama-linux-amd64.tgz
      1.8 GiB
      2025-11-12 06:48:45 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      429 MiB
      2025-11-12 06:49:35 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      343 MiB
      2025-11-12 06:50:10 -06:00
    • ollama-linux-arm64.tgz
      1.7 GiB
      2025-11-12 06:53:00 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 07:00:07 -06:00
    • ollama-windows-amd64-rocm.zip
      246 MiB
      2025-11-12 06:53:24 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 06:57:01 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 07:00:08 -06:00
  • v0.12.1 64883e3c4c

    Stable

    GiteaMirror released this 2025-09-23 01:20:20 -05:00 | 282 commits to main since this release

    📅 Originally published on GitHub: Sun, 21 Sep 2025 23:19:05 GMT
    🏷️ Git tag created: Tue, 23 Sep 2025 06:20:20 GMT

    New models

    • Qwen3 Embedding: state of the art open embedding model by the Qwen team

    What's Changed

    • Qwen3-Coder now supports tool calling
    • Ollama's app will no longer show a "connection lost" error when connecting to cloud models
    • Fixed issue where Gemma3 QAT models would not output correct tokens
    • Fix issue where & characters in Qwen3-Coder would not be parsed correctly when function calling
    • Fixed issues where ollama signin would not work properly on Linux

    Full Changelog: https://github.com/ollama/ollama/compare/v0.12.0...v0.12.1

    Downloads
    • Source Code (ZIP)
    • Source Code (TAR.GZ)
    • ollama-darwin.tgz
      24 MiB
      2025-11-12 06:29:31 -06:00
    • Ollama-darwin.zip
      46 MiB
      2025-11-12 06:29:35 -06:00
    • Ollama.dmg
      46 MiB
      2025-11-12 06:41:55 -06:00
    • ollama-linux-amd64-rocm.tgz
      1.1 GiB
      2025-11-12 06:31:18 -06:00
    • ollama-linux-amd64.tgz
      1.8 GiB
      2025-11-12 06:33:55 -06:00
    • ollama-linux-arm64-jetpack5.tgz
      429 MiB
      2025-11-12 06:34:37 -06:00
    • ollama-linux-arm64-jetpack6.tgz
      343 MiB
      2025-11-12 06:35:10 -06:00
    • ollama-linux-arm64.tgz
      1.7 GiB
      2025-11-12 06:38:21 -06:00
    • OllamaSetup.exe
      1.1 GiB
      2025-11-12 06:43:37 -06:00
    • ollama-windows-amd64-rocm.zip
      246 MiB
      2025-11-12 06:38:48 -06:00
    • ollama-windows-amd64.zip
      1.8 GiB
      2025-11-12 06:41:49 -06:00
    • ollama-windows-arm64.zip
      21 MiB
      2025-11-12 06:41:51 -06:00
    • sha256sum.txt
      1.1 KiB
      2025-11-12 06:43:37 -06:00
Powered by Gitea Version: 1.24.6