[PR #1098] [MERGED] Created tutorial for running Ollama on NVIDIA Jetson devices #41713

Closed
opened 2026-04-24 21:33:26 -05:00 by GiteaMirror · 0 comments

📋 Pull Request Information

Original PR: https://github.com/ollama/ollama/pull/1098
Author: @bnodnarb
Created: 11/12/2023
Status: Merged
Merged: 11/15/2023
Merged by: @mchiang0610

Base: main ← Head: main


📝 Commits (2)

  • bd277c6 Created tutorial for running Ollama on NVIDIA Jetson devices
  • 90b6517 Merge branch 'jmorganca:main' into main

📊 Changes

2 files changed (+40 additions, -1 deletion)

View changed files

📝 docs/tutorials.md (+2 -1)
docs/tutorials/nvidia-jetson.md (+38 -0)

📄 Description

This pull request provides guidance for people interested in enabling NVIDIA's AI edge computing devices to run Ollama at full power (i.e. on the integrated GPU). Several people (myself included) have expressed interest in this capability (please see issue #1071).

@BruceMacD mentioned via Discord that the CLI will soon support passing num_gpu as a parameter when running ollama serve. I will update the tutorial when that becomes available.
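Until that CLI support lands, the usual workaround is to set `num_gpu` in a Modelfile. A minimal sketch, assuming a Jetson with a working CUDA build of Ollama (the model name and file name below are illustrative, not from this PR):

```shell
# ModelfileJetson (illustrative file name) — force layers onto the integrated GPU
# by setting num_gpu high; Ollama caps it at the model's actual layer count.
#
#   FROM mistral
#   PARAMETER num_gpu 999

ollama create mistral-jetson -f ./ModelfileJetson
ollama run mistral-jetson
```

On a Jetson you can confirm the integrated GPU is actually being used by watching utilization with `tegrastats` while a prompt is running.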

Thanks!


🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.

GiteaMirror added the pull-request label 2026-04-24 21:33:26 -05:00

Reference: github-starred/ollama#41713