[PR #12868] Ollama Docker: Compile for Compute Capability 10.0 #13986

Open
opened 2026-04-13 00:41:54 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/12868
Author: @liopeer
Created: 10/30/2025
Status: 🔄 Open

Base: main ← Head: lionel-add-cc100-cuda12-docker


📝 Commits (4)

  • d16a2da add compute capability 10.0
  • 5dbb81b Merge branch 'main' into lionel-add-cc100-cuda12-docker
  • ec4bc7e fix comma
  • c6fc3f7 Merge remote-tracking branch 'upstream/main' into lionel-add-cc100-cuda12-docker

📊 Changes

1 file changed (+1 additions, -1 deletions)


📝 CMakePresets.json (+1 -1)

📄 Description

My Blackwell B200 did not work with the default Ollama Docker image; see #12860. This PR adds compute capability 10.0 to the Docker image build process (for CUDA 12), which resolved the issue for me.
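For context, the change is a one-line edit to the CUDA preset in `CMakePresets.json`, appending the Blackwell architecture to `CMAKE_CUDA_ARCHITECTURES`. A hedged sketch of what the edited preset plausibly looks like — the preset name and the exact architecture list are illustrative and may differ from the actual repository:

```json
{
  "name": "CUDA 12",
  "inherits": ["CUDA"],
  "cacheVariables": {
    "CMAKE_CUDA_ARCHITECTURES": "50;60;61;70;75;80;86;87;89;90;100;120"
  }
}
```

You can check which compute capability your GPU reports with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`; per this PR, the Blackwell B200 reports 10.0, which is why `100` must appear in the architecture list for the prebuilt kernels to run on it.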


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-13 00:41:54 -05:00

Reference: github-starred/ollama#13986