[GH-ISSUE #14760] Feature request: add export command and pull -r flag for environment migration #35301

Open
opened 2026-04-22 19:42:37 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @kenelite on GitHub (Mar 10, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/14760

Feature request: add export command and pull -r flag for environment migration

Summary

I would like to propose a small CLI feature to make it easier to migrate an Ollama environment between machines.

The idea is to support a workflow similar to pip freeze / pip install -r:

  • ollama export: exports the list of installed models to stdout or a file
  • ollama pull -r <file>: batch pulls models from an “export” file

This would allow users to easily back up and restore their model setup.


Problem

Today, when moving from one machine to another (e.g., laptop → server, old server → new server), users need to:

  • Manually remember which models they had installed
  • Manually re-run multiple ollama pull commands
  • Potentially forget some models or make mistakes

There is no built-in, reproducible way to “snapshot” the current model list and restore it elsewhere.


Proposed solution

Add two pieces of functionality to the ollama CLI:

1. ollama export

  • Lists installed (local) models using the existing API
  • Outputs a text format that is easy to read and parse
  • Supports writing to stdout or to a file
  • Excludes remote/cloud-only models (which cannot be pulled directly) by writing them as comments

Example output:

# Ollama models export
# Generated at: 2026-01-21T12:34:56Z
# Use 'ollama pull -r <file>' to restore these models

llama3.1:8b
qwen2.5:7b
# (remote) some-remote-only-model

Example usage:

ollama export
ollama export -o models.txt

2. ollama pull -r <file>

  • Reads a “requirements-like” file line by line
  • Ignores empty lines and lines starting with #
  • For each remaining line, calls the existing pull logic
  • Prints a summary of any failures at the end

Example usage:

# On the source machine
ollama export -o models.txt

# Copy file to target
scp models.txt target:~/

# On the target machine
ollama pull -r models.txt

Benefits

  • Simple environment migration: easy way to mirror an Ollama setup across machines.
  • Backups: users can keep a text file snapshot of their current models.
  • Consistency: follows a very familiar pattern from other ecosystems (e.g. pip, conda).

Implementation notes

I have opened a PR with a concrete implementation and tests:

  • PR: https://github.com/ollama/ollama/pull/13816

High-level details:

  • Adds an export subcommand in cmd/cmd.go with a handler that:
    • Uses the existing API client to list models
    • Writes a header and model names to stdout or an output file
    • Comments out remote-only models (can’t be pulled directly)
  • Extends ollama pull with a -r/--requirements flag:
    • When provided, it reads models from a file and pulls them in batch
    • Reuses the existing pull logic to avoid code duplication
  • Includes unit tests for both export and pull -r behaviors.

I’m happy to adjust naming, flags, or behavior to better fit the maintainers’ vision.


Questions for maintainers

  • Is this kind of environment-migration workflow something you’d be open to supporting in core?
  • Are export and pull -r acceptable names/UX, or would you prefer a different interface?
  • Are there any constraints or edge cases (e.g. cloud-only models, future registry changes) that you’d want the design to account for?
GiteaMirror added the feature request label 2026-04-22 19:42:37 -05:00

Reference: github-starred/ollama#35301