[GH-ISSUE #15575] Offline Ubuntu Server #72003

Closed
opened 2026-05-05 03:17:09 -05:00 by GiteaMirror · 3 comments

Originally created by @cosmiccat9784 on GitHub (Apr 14, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15575

Hello, I want to install ollama on an offline Ubuntu Server machine. Can someone tell me how? I have a Windows machine and a Cinnamon Mint VM, both connected to the internet.

I want to be able to run ollama. How do I install it, and which files do I use these days? I looked through similar issues, but they were all outdated and referenced files on the ollama GitHub page that no longer exist.

I have a USB flash drive, by the way.


@andrenaP commented on GitHub (Apr 14, 2026):

If your server doesn't have an NVIDIA GPU, the easiest way is to install ollama in a Docker container. Then you can do something like `docker save ollama > ollama.tar`, put the tarball on the USB flash drive, and load the container on your server. As for the files, as far as I know all models are usually saved in `/var/lib/ollama`; just save them inside the container or mount the directory as a folder.
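For reference, a minimal sketch of the save/load round trip, assuming Docker is installed on both machines and that the USB drive mounts at `/media/usb` (an example path). Note that the official `ollama/ollama` image keeps its models under `/root/.ollama`, which the volume mount below persists:

```
# On the internet-connected machine (e.g. the Mint VM):
docker pull ollama/ollama                  # fetch the official image
docker save ollama/ollama > ollama.tar     # export the image to a tarball
cp ollama.tar /media/usb/                  # copy it onto the USB drive

# On the offline Ubuntu server:
docker load < /media/usb/ollama.tar        # import the image
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```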

If you want to use NVIDIA, you can probably download and save an ollama .deb file to the USB flash drive and copy the `/var/lib/ollama` folder to the server the same way. Or you can use Docker containers that support NVIDIA, but they take about 4 GB more than the normal ones.
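If the server does have an NVIDIA GPU and the Docker route is used anyway, the GPU variant only changes the run command; it assumes the NVIDIA driver and the NVIDIA Container Toolkit are already installed on the server (their packages would also have to be carried over by USB):

```
# Same image as before, loaded from the USB tarball;
# --gpus=all exposes the NVIDIA GPU(s) to the container.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```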


@rick-github commented on GitHub (Apr 14, 2026):

The ollama server has two components: the program and the models.

To install the program, download the appropriate version from the [latest release](https://github.com/ollama/ollama/releases/tag/v0.20.7), copy it to the offline server, and follow [these instructions](https://github.com/ollama/ollama/blob/main/docs/linux.mdx#manual-install).
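For concreteness, a sketch of the transfer, assuming the amd64 tarball from the release page and an example USB mount point of `/media/usb` (adjust the filename to whatever asset the release actually ships):

```
# On the internet-connected machine:
curl -L https://ollama.com/download/ollama-linux-amd64.tgz \
  -o /media/usb/ollama-linux-amd64.tgz

# On the offline Ubuntu server:
sudo tar -C /usr -xzf /media/usb/ollama-linux-amd64.tgz
ollama serve &     # start the server
ollama -v          # verify the binary runs
```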

To get models, [install ollama](https://github.com/ollama/ollama?tab=readme-ov-file#linux) in the Cinnamon Mint VM, browse the [model library](https://ollama.com/library), and download models with `ollama pull <model-name>`. When you have downloaded the models, store them in an archive:

```
cd /usr/share/ollama/.ollama && zip -r /tmp/models.zip models
```

Copy models.zip to the offline server and unzip the files into the model directory.
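On the server side, a sketch assuming the default model directory of a standard Linux install (`/usr/share/ollama/.ollama`) and the example `/media/usb` mount point; the `chown` assumes the service runs as the `ollama` user created by the install instructions:

```
# On the offline Ubuntu server:
sudo unzip /media/usb/models.zip -d /usr/share/ollama/.ollama/
sudo chown -R ollama:ollama /usr/share/ollama/.ollama/models
ollama list        # the transferred models should now appear
```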


@PureBlissAK commented on GitHub (Apr 18, 2026):

🤖 Automated Triage & Analysis Report

Issue: #15575
Analyzed: 2026-04-18T18:19:34.488299

Analysis

  • Type: unknown
  • Severity: medium
  • Components: unknown

Implementation Plan

  • Effort: medium
  • Steps:

This issue has been triaged and marked for implementation.


Reference: github-starred/ollama#72003