[PR #14696] docs: Add GTX 960M to supported Nvidia GPU table #61489

Open
opened 2026-04-29 16:35:11 -05:00 by GiteaMirror · 0 comments
Owner

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/14696
Author: @S0AndS0
Created: 3/7/2026
Status: 🔄 Open

Base: main ← Head: fix-docs-gpu-nvidia-gtx-960m


📝 Commits (1)

  • e89f111 docs: Add GTX 960M to supported Nvidia GPU table

📊 Changes

1 file changed (+4 additions, -1 deletions)

View changed files

📝 docs/gpu.mdx (+4 -1)

📄 Description

This also adds a note on how to obtain the compute capability locally via the CLI, because the table saying the 980 should use 5.2 while the CLI asserted 5.0 cost me a few hours of chasing my own tail x-}
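The local CLI check referred to above can be done with `nvidia-smi` (a sketch, assuming a driver recent enough to support the `compute_cap` query field):

```shell
# Query the CUDA compute capability of the installed GPU(s).
# A GTX 960M (Maxwell) is expected to report 5.0.
nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader
```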

This has been tested (partially successfully, as some models did not play nice) on NixOS with the following configuration snippets:

  • /etc/nixos/services/ollama.nix
    { pkgs, ... }:
    
    {
      services.ollama = {
        enable = true;
    
        package = pkgs.ollama-cuda.override {
          cudaArches = [ "50" ];
        };
    
        loadModels = [
          "gemma3n:e4b"
        ];
      };
    }
    
  • /etc/nixos/flake/my-device.nix
    {
      config,
      lib,
      modulesPath,
      ...
    }:
    
    {
      hardware.enableRedistributableFirmware = true;
      hardware.graphics.enable = true;
    
      nix.settings = {
        substituters = [
          "https://cache.nixos-cuda.org"
        ];
        trusted-public-keys = [
          "cache.nixos-cuda.org:74DUi4Ye579gUqzH4ziL9IyiJBlDpMRn9MBN8oNan9M="
        ];
      };
    }
    

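For anyone reproducing this, a rough verification sequence after switching to the configuration above might look like the following (a sketch, not part of the PR; commands assumed available on the host):

```shell
# Apply the configuration, then confirm the service is up
# and the preloaded model actually runs on the GPU.
sudo nixos-rebuild switch
systemctl status ollama
ollama run gemma3n:e4b "Hello"   # model from loadModels above
nvidia-smi                        # ollama should appear in the GPU process list
```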
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-29 16:35:11 -05:00

Reference: github-starred/ollama#61489