[GH-ISSUE #335] Model import/export #25908

Closed
opened 2026-04-22 01:45:30 -05:00 by GiteaMirror · 34 comments
Owner

Originally created by @mikeroySoft on GitHub (Aug 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/335

When using large models like Llama2:70b, the download files are quite big.
As a user with multiple local systems, having to ollama pull on every device means that much more bandwidth and time spent.
It would be great if we could download the model once and then export/import it to other ollama clients in the office without pulling it from the internet.

Example:
On the first device, we would do:

`ollama pull llama2:70b`

`ollama export llama2:70b /Volumes/MyUSB/llama2_70b-local.ollama_model`

Then we would take MyUSB over to another device and do:

`ollama import /Volumes/MyUSB/llama2_70b-local.ollama_model`

`ollama run llama2:local-70b` or `ollama run llama2-local:70b` or even just `ollama run llama2_70b-local`

I'm obviously not sure about the naming structure here, but I hope I've conveyed the problem and thought process.

Thanks for the fantastic project!

GiteaMirror added the feature request label 2026-04-22 01:45:30 -05:00

@mikeroySoft commented on GitHub (Aug 11, 2023):

Just to note, the reasoning I have with naming the import something other than 'llama2:70b' when you do run is that I didn't want to conflict with the main one available from the web.


@jmorganca commented on GitHub (Aug 23, 2023):

@mikeroySoft very cool idea. Do you have thoughts on how the format should work? Ideally it should contain both the manifest and the blobs, be a single file, and be easy to understand.


@mikeroySoft commented on GitHub (Aug 24, 2023):

So I was working with this a bit last night, and I managed to get `ollama export` and `ollama import` doing something useful, but I'm not sure if my logic is sound. (I haven't grokked the entire codebase yet to know what existing code I should be reusing.)

My thought was just to gather the model and manifests using server.ParseModelPath and GetManifestPath, tar them up, add a `.ollamabundle` extension to the output, and save it on the filesystem.

`ollama export llama2:70b /Volumes/MyUSB/myLlama.ollamabundle`

For my import POC we run `ollama import /Volumes/MyUSB/myLlama.ollamabundle`, and it drops the sha256:<foo> blob(s) into `~/.ollama/models/blobs` and saves the respective manifest.json to, for example, `7b` within the `~/.ollama/models` path. (Well, actually it currently just saves the manifest.json into `~/.ollama/models`; I manually `mv`'d it to `~/.ollama/models/manifests/registry.ollama.ai/library/llama2/myLlama`, but I'll update the logic there if the strategy is sound.)

So our list then shows up as `llama2:myLlama`, and we'd run `ollama run llama2:myLlama`, but I'm not sure if that's more appropriate than `ollama run myLlama`.

Like, is having the distinction in the model tag better than in the name?

In any case, is that a sane approach? Am I missing something glaring?
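The tar-up approach described above can be sketched end to end in plain shell. This is a toy illustration, not ollama's actual code: the `manifests/registry.ollama.ai/library/<name>/<tag>` and `blobs/` layout follows what's described in this thread, but the store is faked in a temp dir, and the `sha256-` blob filename is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of the export/import idea above using a plain tarball.
set -euo pipefail

work=$(mktemp -d)
src="$work/src/models"    # stands in for ~/.ollama/models on machine A
dst="$work/dst/models"    # stands in for ~/.ollama/models on machine B

# Fake model store: one manifest plus the blob it references.
mkdir -p "$src/manifests/registry.ollama.ai/library/llama2" "$src/blobs"
echo '{"layers":[{"digest":"sha256:abc123"}]}' \
  > "$src/manifests/registry.ollama.ai/library/llama2/7b"
echo 'weights' > "$src/blobs/sha256-abc123"

# "Export": archive the manifest and its blobs relative to the models dir,
# so the bundle can be unpacked into any other models dir.
tar -C "$src" -czf "$work/llama2-7b.ollamabundle" \
  manifests/registry.ollama.ai/library/llama2/7b \
  blobs/sha256-abc123

# "Import" on the second machine: just unpack into its models dir.
mkdir -p "$dst"
tar -C "$dst" -xzf "$work/llama2-7b.ollamabundle"

ls "$dst/blobs"   # -> sha256-abc123
```

Archiving relative to the models directory is what makes the bundle portable: the destination store's absolute path never appears inside the tarball.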


@mikeroySoft commented on GitHub (Aug 25, 2023):

I was working on an improved approach last night where we define the name and tag of the model as an input during import:
`ollama import <name>:<tag> </path/to/exported.ollamabundle>`

So:
`ollama import myLlama:7b /path/to/myLlama.ollamabundle` is what I'm looking at currently.

Working out some kinks in the implementation, but I think this feels like a good approach?
In this way we put the model blob and manifest in the right places, ollama list shows the model name and tag, and we can then run it.


@mikeroySoft commented on GitHub (Aug 26, 2023):

I have a working example now, seems to do the thing.

~~https://github.com/mikeroySoft/ollama/tree/ollamabundle-1~~
https://github.com/mikeroySoft/ollama/tree/import-export-v1

Wasn't sure if it was ready for a PR yet, but I only made changes to cmd.go, so it's fairly simple to see what I did.

In action:

```
❯ ./ollama list
NAME            SIZE    MODIFIED       
llama2:7b       3.8 GB  10 seconds ago
❯ ./ollama export --help
Use this to transfer a model between Ollama installations. 
The export bundle destination can be any valid file path, but must end with .ollamabundle.

  Example: ollama export llama2:7b /path/to/myExportedLlama-7b.ollamabundle

Usage:
  ollama export MODEL:TAG FILEPATH [flags]

Flags:
  -h, --help   help for export
❯ ./ollama export llama2:7b ../../testing/myLlama-7b.ollamabundle
❯ ./ollama import myGreatLlama2:7b ../../testing/myLlama-7b.ollamabundle
❯ ./ollama list
NAME                    SIZE    MODIFIED           
llama2:7b               3.8 GB  About a minute ago
myGreatLlama2:7b        3.8 GB  7 seconds ago
❯ ./ollama run myGreatLlama2:7b
>>>
```

@marco-trovato commented on GitHub (Dec 7, 2023):

I would like to confirm that everyone using other programs is forced to adapt to this weird filename format in creative ways, e.g.:
https://github.com/LostRuins/koboldcpp/issues/390#issuecomment-1843623049


@mikeroySoft commented on GitHub (Dec 8, 2023):

I assume you're referring to `.ollamabundle`.
I'm open to changing that; I just needed something to export to. It's just a tarball, really.


@marco-trovato commented on GitHub (Dec 8, 2023):

> I assume you're referring to .ollamabundle I'm open to changing that

The workaround was all about the problem of the weird model filenames ("sha:xxxxx"), which are very difficult to use with any other software.
(Once you have 10 or more of them, you won't be able to tell one from the other anymore.)

A readable name like "vicuna-33b-q4_K_M" would be easier to use and maintain.
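For tools that only see the sha256 filenames, the manifest is the map back to readable names. Below is a minimal resolution sketch over a fabricated manifest shaped like the ones described in this thread; the `sha256:<hex>` digest to `sha256-<hex>` filename mapping is an assumption and may differ between ollama versions.

```shell
#!/usr/bin/env bash
set -euo pipefail
work=$(mktemp -d)

# Sample manifest shaped like ollama's (fabricated for illustration).
cat > "$work/latest" <<'EOF'
{"config":{"digest":"sha256:cfg111"},"layers":[{"digest":"sha256:aaa222"},{"digest":"sha256:bbb333"}]}
EOF

# List the blob paths referenced by a manifest, mapping each
# sha256:<hex> digest to the sha256-<hex> filename under blobs/.
blobs_of() {
  grep -o '"digest":"[^"]*"' "$1" \
    | sed -e 's/.*"digest":"//' -e 's/"$//' -e 's/:/-/' \
    | sed 's|^|blobs/|'
}

blobs_of "$work/latest"
# prints:
# blobs/sha256-cfg111
# blobs/sha256-aaa222
# blobs/sha256-bbb333
```

With something like this, an external tool could take "vicuna:latest", read the matching manifest file, and get back the actual blob paths instead of guessing from hashes.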


@Potentialis commented on GitHub (Dec 12, 2023):

> The workaround was all about the problem of the weird models filenames ("sha:xxxxx") which are very diffucult to use with any other software. (Once you have 10 or more of them, you won't be able to tell one from the other anymore)

I don't know why a hash is used for the model filename (but I assume there is a reason).
The solution I'm currently using is the following: I create the Modelfile and use the `create` command to import the model, then I replace the file created by ollama with a symlink to the corresponding gguf file in my folder. The problem is that I end up copying the file and then deleting it, which doesn't make sense (plus it doesn't help the lifetime of my SSD).

A possible solution would be a flag for the `create` command so that, instead of copying, it would create a symlink.
Example:
`ollama create ModelX --file Modelfile --link`

I understand that the default behavior should be to copy, because a user may create a model and then delete the original file, which would obviously cause problems and a bad experience with ollama; but a user who passes the `--link` flag is aware that ollama will depend on that file in that specific place.


@jingyibo123 commented on GitHub (Dec 26, 2023):

> Usage:
>   ollama export MODEL:TAG FILEPATH [flags]
>
> Flags:
>   -h, --help   help for export
>
> ❯ ./ollama export llama2:7b ../../testing/myLlama-7b.ollamabundle

@mikeroySoft I wonder if it's theoretically possible to convert the `.ollamabundle` file back to the original `GGUF` or `GGML` format? I'm new to ollama, but I'd be happy to work on a further PR.


@liar666 commented on GitHub (Feb 28, 2024):

Hello all,

Thanks for the great tool! It works like a charm in Emacs with ellama.

I'm using ollama version 0.1.27 (the latest I got using the installation command from the website), but I don't see any `import`/`export` command. Has it been merged? Will it be soon?

In the meantime, I'm using this script to export (still lacks automatic detection/management of OS):

```
#!/usr/bin/env bash

# From: https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored
# - macOS: ~/.ollama/models
# - Linux: /usr/share/ollama/.ollama/models
# - Windows: C:\Users\<username>\.ollama\models

# TODO automatically detect OS
INSTALL_DIR="/usr/share/ollama/.ollama/models" # change according to your OS
MODEL="zephyr:latest"                          # default model name

if [[ $# -eq 1 ]] ; then
    model=$1
    echo "Exporting $model"

    model_file=$(echo "${INSTALL_DIR}/manifests/registry.ollama.ai/library/$model" | tr ':' '/')
    echo -e ">> Model metadata file:\n${model_file}"

    layers=$(jq --monochrome-output '.layers[].digest' "${model_file}" | tr -d '"')
    # Prefix each digest line before collapsing to a space-separated list
    layers_files=$(echo "${layers}" | sed "s|^|${INSTALL_DIR}/blobs/|" | tr '\n' ' ')
    echo -e ">> Model layers files:\n${layers_files}"

    echo ">> Compressing everything"
    # tar cvzf "$(echo "$model" | tr ':' '_')".tgz $model_file $layers_files
    7z a -spf "$(echo "$model" | tr ':' '_')".7z "${model_file}" ${layers_files}
else
    echo "Usage: $0 <modelname:version>"
    echo "Use 'ollama list' to see installed models & versions"
fi

# NOTE: to import, simply uncompress the 7z/tgz file in / (if using same target/destination OSes)
# TODO write an import script (or add an option to this one) that replaces the directories according to destination OS
```
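The import TODO at the end of the script could be handled by stripping the source OS's path prefix while unpacking, so the files land under whatever models directory the destination uses. Here is a self-contained sketch with `tar --strip-components`; the bundle is fabricated in a temp dir for illustration (a real one would come from an export like the script above), and the Linux/macOS default paths are the documented ones.

```shell
#!/usr/bin/env bash
set -euo pipefail
work=$(mktemp -d)

# Stand-in for a bundle exported on Linux from absolute paths
# (tar strips the leading "/" when archiving).
mkdir -p "$work/usr/share/ollama/.ollama/models/blobs"
echo 'weights' > "$work/usr/share/ollama/.ollama/models/blobs/sha256-abc"
tar -C "$work" -czf "$work/zephyr_latest.tgz" \
  usr/share/ollama/.ollama/models/blobs/sha256-abc

# Import: drop the 5 leading components (usr/share/ollama/.ollama/models)
# so only manifests/ and blobs/ land under the destination models dir,
# e.g. ~/.ollama/models on macOS. Simulated here with a temp dir.
dest="$work/local/.ollama/models"
mkdir -p "$dest"
tar -xzf "$work/zephyr_latest.tgz" -C "$dest" --strip-components=5

ls "$dest/blobs"   # -> sha256-abc
```

`--strip-components` is supported by both GNU tar and bsdtar, so the same import line should work on Linux and macOS.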

@trymeouteh commented on GitHub (Mar 1, 2024):

Would like to see this as well, just like how you can import/export Docker images.

I am aware that you can import GGUF files by making a Modelfile, and this is a great feature, just like a Dockerfile. An `ollama import` command to import GGUF files without having to create a Modelfile would be handy, and an export command to export models into GGUF files would be very useful.


@hkjang commented on GitHub (Mar 11, 2024):

I am also looking forward to this feature. Thank you for your efforts in developing local LLM tooling.

```bash
ollama save -o starcoder2.tar starcoder2:latest
ollama load -i starcoder2.tar
```

```bash
docker save -o starcoder2.tar starcoder2:latest
docker load -i starcoder2.tar
```

@supersonictw commented on GitHub (Apr 25, 2024):

I wrote a bash script for Linux/macOS to export the model to a folder:
https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553


@supersonictw commented on GitHub (Jun 12, 2024):

> I wrote a bash script for Linux/macOS to export the model to a folder. https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553

I might rewrite the script in Go for ollama, as an `ollama extract $MODEL_NAME $TARGET_PATH` feature, and open a PR for it.

With this feature, models could be restored with `ollama create $MODEL_NAME` after extracting.


@JerrettDavis commented on GitHub (Jun 14, 2024):

> > I write a bash script for Linux/macOS to export the model to a folder. https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553
>
> I might rewrite the script in golang, for ollama, to be the feature `ollama extract $MODEL_NAME $TARGET_PATH`, and pr this.
>
> The feature makes models can be restored by `ollama create $MODEL_NAME` after extracting.

Done and done!

https://gist.github.com/JerrettDavis/7bc86098e705e3a7b4efcd60a2b413d7


@JerrettDavis commented on GitHub (Jun 15, 2024):

> > Done and done!
> > https://gist.github.com/JerrettDavis/7bc86098e705e3a7b4efcd60a2b413d7
>
> Good but it seems just converting the script into golang, not following the coding style of ollama? (idk, confusing) I'm trying to rewrite the code for compatibility between every platform. 💻

You're right! I'm primarily a .NET developer, just making a bit of a step over into Go while working on some related projects. I'm doing my best to pick up the Go paradigms and get things up to snuff. I did a more or less 1-for-1 rewrite, but I'm trying to go back and get it closer to the project style, hopefully with some tests as well!

Unfortunately, I encountered some Windows build errors related to the existing build scripts and got sidetracked getting that all working. I'll hopefully take another look at this tomorrow to make it fit the Go style a bit better.


@JerrettDavis commented on GitHub (Jun 15, 2024):

I thought about that myself, but I wasn't sure if the export was something suited for the actual API, or if it was wanted. I had considered adding an `/export` endpoint that returned the same content tarred, or similar.


@t18n commented on GitHub (Jun 17, 2024):

Slightly related to this issue. My need is not to have the entire model backed up; rather, I would like to be able to "sync" it to my dotfiles across all computers. So, I wrote a script to do 2 things:

- Specify the list of models I would like to install
- Remove the models that don't belong to the synced list

You can check it out here:
https://gist.github.com/t18n/96a8c345ee72d76ec5873de5ee58394f

Note that I don't use any custom model, and I don't mind leaving a terminal pane open for 15 minutes while the script is pulling the models :)
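The sync described above boils down to a set difference between the desired list and the installed list. Here is a sketch of that diff with `comm`, using inlined lists instead of live `ollama list` output so the logic stands on its own:

```shell
#!/usr/bin/env bash
set -euo pipefail
work=$(mktemp -d)

# Desired models (would live in the dotfiles) and installed models
# (would come from `ollama list`); comm requires sorted input.
printf '%s\n' llama2:7b mistral:latest zephyr:latest | sort > "$work/want"
printf '%s\n' llama2:7b vicuna:latest | sort > "$work/have"

# comm -23: lines only in "want" -> models to pull
# comm -13: lines only in "have" -> models to remove
to_pull=$(comm -23 "$work/want" "$work/have")
to_remove=$(comm -13 "$work/want" "$work/have")

echo "pull:"
echo "$to_pull"
echo "remove:"
echo "$to_remove"

# With a live daemon these would feed the real commands:
#   for m in $to_pull;   do ollama pull "$m"; done
#   for m in $to_remove; do ollama rm   "$m"; done
```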


@yangboz commented on GitHub (Aug 15, 2024):

> When using large models like Llama2:70b, the download files are quite big. As a user with multiple local systems, having to `ollama pull` on every device means that much more bandwidth and time spent. It would be great if we could download the model once and then export/import it to other ollama clients in the office without pulling it from the internet.
>
> Example: On the first device, we would do:
>
> `ollama pull llama2:70b`
>
> `ollama export llama2:70b /Volumes/MyUSB/llama2_70b-local.ollama_model`
>
> Then we would take MyUSB over to another device and do:
>
> `ollama import /Volumes/MyUSB/llama2_70b-local.ollama_model`
>
> `ollama run llama2:local-70b` or `ollama run llama2-local:70b` or even just `ollama run llama2_70b-local`
>
> I'm obviously not sure about the naming structure here, but I hope I've conveyed the problem and thought process.
>
> Thanks for the fantastic project!

Just tried it, but got `Error: unknown command "export" for "ollama"`. Any idea? Thanks.


@supersonictw commented on GitHub (Aug 15, 2024):

> just tried it , but got Error: unknown command "export" for "ollama" , any idea ? thanks.

The `ollama export` feature is not finished yet.

Try the script:

https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553


@yangboz commented on GitHub (Aug 15, 2024):

> > just tried it , but got Error: unknown command "export" for "ollama" , any idea ? thanks.
>
> The feature `ollama export` is unfinished yet.
>
> Try the script:
>
> https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553

Verified; it outputs the following:

```
Modelfile	model.bin	source.txt
(py39) yangboz@m1 dangkang % cat vicuna-latest/source.txt 
registry.ollama.ai/library/vicuna:latest% 
```

@yangboz commented on GitHub (Aug 15, 2024):

And how about importing the exported model file with a similar `ollama import` script? If it could be provided here, it would be greatly appreciated.


@supersonictw commented on GitHub (Aug 15, 2024):

> and how about import the exported model file by similiary ollama import sh ? if could provided here, we will be greatly appreciated.

https://gist.github.com/supersonictw/f6cf5e599377132fe5e180b3d495c553?permalink_comment_id=5078039#gistcomment-5078039


@aliok commented on GitHub (Oct 24, 2024):

I've tried something similar today.

A CLI that can work with the Ollama models directory, without the daemon running.

I used the Ollama server packages as a library in my code and it *almost* worked. There are a few private fields, which constrains its use as a library.

For example, I realized these fields are not public and can't be accessed when the Ollama code is used as a library:

https://github.com/ollama/ollama/blob/3a75e74e34c976d596437c8aa14587ada562301e/server/manifest.go#L23-L25

What do you think about making these fields public?


@lytalk commented on GitHub (Mar 19, 2025):

I want to know why there is still no PR for import/export. Is there any concern?


@Vasdranna commented on GitHub (May 29, 2025):

Any news regarding this functionality? I wish we could have `import` and `export` commands.


@trymeouteh commented on GitHub (Jun 6, 2025):

Would like to be able to export a model into a single file and import that model file back into ollama. This would allow users to back up LLMs and transfer them to another machine without having to download gigabytes or terabytes again.


@PasaOpasen commented on GitHub (Jun 19, 2025):

+1


@hailiang-wang commented on GitHub (Jun 21, 2025):

+1


@mikeroySoft commented on GitHub (Jun 22, 2025):

I managed to get this [implemented](https://github.com/ollama/ollama/pull/11161) a bit more gracefully than my first approach. Thanks Claude for the help lol!
Seems to be working out in my testing, would love some feedback.


@jcheek commented on GitHub (Jun 22, 2025):

While I have no desire to delay the merging of the PR above, I have a request: could the README be updated with usage steps/examples for import and export? @mikeroySoft (Just so we don't have to go to the PR to read how to do it.)


@Mazyod commented on GitHub (Jun 25, 2025):

This Go script has served us well over the past few months, in case someone needs a makeshift solution before this feature lands.

https://gist.github.com/Mazyod/3bfcb4ec1aaa9b61a877d8ba1a308624


@kalle07 commented on GitHub (Apr 16, 2026):

https://github.com/mattjamo/OllamaToGGUF/blob/main/OllamaToGGUF.py

Reference: github-starred/ollama#25908