[GH-ISSUE #9638] How to Download Only the Modelfile for Self-Imported Models #6289

Closed
opened 2026-04-12 17:43:02 -05:00 by GiteaMirror · 8 comments

Originally created by @ZimaBlueee on GitHub (Mar 11, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9638

Hello,
First, thank you for releasing such an excellent project!

My server is on an internal network, so I can only manually import model files downloaded from other sources. This poses a problem: pulling a model directly from Ollama downloads its modelfile along with it, but since I imported the models myself, I have no way to obtain the modelfile from Ollama.
Could you please advise on how to obtain just the modelfile?
I understand that the modelfile differs for each large model; is there any method or path to download a modelfile independently?

Thank you in advance for your help!

GiteaMirror added the question label 2026-04-12 17:43:02 -05:00

@rick-github commented on GitHub (Mar 11, 2025):

There's no mechanism for pulling a complete modelfile from the ollama library, but since you only need the template and parameters, you can go to the model in the library and click on the "template" and "params" items. For example, llama3.2 [template](https://ollama.com/library/llama3.2/blobs/966de95ca8a6) and [params](https://ollama.com/library/llama3.2/blobs/56bb8bd477a5). You will need to manually convert the params from the JSON struct in the library to the `PARAMETER param value` format for the modelfile.

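For example, a minimal sketch of that conversion in Python (the params dict below is illustrative, shaped like the library's JSON struct, not necessarily the actual llama3.2 values):

```python
# A minimal sketch: expand a params JSON struct from the library page into
# Modelfile PARAMETER lines. The dict below is an illustrative example.
params = {
    "stop": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
    "temperature": 0.7,
}
for name, value in params.items():
    # list-valued params such as "stop" become one PARAMETER line per value
    for v in value if isinstance(value, list) else [value]:
        print(f"PARAMETER {name} {v}")
```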

@ZimaBlueee commented on GitHub (Mar 11, 2025):

@rick-github I understand what you mean. Could you please consider providing a basic modelfile for each model? As a novice, I am really worried about making mistakes or missing something.


@niaojiao commented on GitHub (Mar 11, 2025):

I use a server that can connect to the network, and download the model there:

`ollama run deepseek-r1:32b`

When the model has downloaded, dump its modelfile and copy out the weights blob:

`ollama show --modelfile deepseek-r1:32b`
`cp /usr/share/ollama/.ollama/models/blobs/sha256-6150cb382311b69f09cc0f9a1b69fc029cbd742b66bb8ec531aa5ecf5c613e93 deepseek-r1:32b.gguf`

Move deepseek-r1:32b.gguf to your server that has no network access, then create a file named Modelfile whose FROM line points at the copied weights:

`FROM /root/deepseek-r1:32b.gguf`

and import it:

`ollama create deepseek-r1:32b -f Modelfile`
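If you'd rather not pick the sha256 blob file out by hand, here is a hedged sketch that reads the weights blob path from the local manifest (assuming the default Linux service install path used in the steps above):

```python
# Hedged sketch: locate the weights blob of a pulled model by reading the
# local manifest instead of guessing which sha256 file holds the weights.
# Assumes the default Linux service install path used in the steps above.
import json

name, tag = "deepseek-r1", "32b"
base = "/usr/share/ollama/.ollama/models"
with open(f"{base}/manifests/registry.ollama.ai/library/{name}/{tag}") as f:
    manifest = json.load(f)

for layer in manifest["layers"]:
    if layer["mediaType"] == "application/vnd.ollama.image.model":
        print(f"{base}/blobs/" + layer["digest"].replace(":", "-"))
```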


@rick-github commented on GitHub (Mar 11, 2025):

Use this script: running `ollama-modelfile.py llama3.2` will construct a Modelfile from library data as if the model were local.

```python
#!/usr/bin/env python3
# Build a Modelfile for a library model without pulling it: fetch the
# registry manifest, download only the small metadata blobs (template,
# system, params, license), and print a Modelfile that references the
# weights by their blob path.

import requests
import argparse

def quote(s):
  # Quote values that span multiple lines or have leading/trailing spaces;
  # use triple quotes when the value itself contains a double quote.
  s = str(s)
  if "\n" in s or s.startswith(" ") or s.endswith(" "):
    if '"' in s:
      return '"""' + s + '"""'
    return '"' + s + '"'
  return s

parser = argparse.ArgumentParser()
parser.add_argument("model")  # e.g. llama3.2 or llama3.2:3b
args = parser.parse_args()

model = args.model.split(':')[0]
tag = args.model.split(':')[1] if ':' in args.model else 'latest'

response = requests.get(f"https://registry.ollama.ai/v2/library/{model}/manifests/{tag}")
if response.status_code != 200:
  raise Exception("manifest fetch failed")
manifest = response.json()

modelblob = ""
adapter = ""
projector = ""
system = ""
template = ""
licenses = []
params = ""
for blob in manifest["layers"]:
  # Download only the small metadata layers; model/adapter/projector
  # weights are referenced by path rather than fetched.
  if not any(t in blob["mediaType"] for t in ["model", "adapter", "projector"]):
    response = requests.get(f"https://registry.ollama.ai/v2/library/{model}/blobs/{blob['digest']}")
    if response.status_code != 200:
      raise Exception("blob fetch failed")
  if blob["mediaType"] == "application/vnd.ollama.image.model":
    modelblob = "/root/.ollama/models/blobs/" + blob['digest'].replace(':', '-')
  elif blob["mediaType"] == "application/vnd.ollama.image.adapter":
    adapter = "/root/.ollama/models/blobs/" + blob['digest'].replace(':', '-')
  elif blob["mediaType"] == "application/vnd.ollama.image.projector":
    projector = "/root/.ollama/models/blobs/" + blob['digest'].replace(':', '-')
  elif blob["mediaType"] == "application/vnd.ollama.image.system":
    system = response.text
  elif blob["mediaType"] == "application/vnd.ollama.image.template":
    response.encoding = 'UTF-8'
    template = response.text
  elif blob["mediaType"] == "application/vnd.ollama.image.license":
    licenses.append(response.text)
  elif blob["mediaType"] == "application/vnd.ollama.image.params":
    params = response.json()

print(f"""# Modelfile generated by "ollama show"
# To build a new Modelfile based on this, replace FROM with:
# FROM {model}:{tag}
""")
print(f'FROM {modelblob}')
if adapter:
  print(f'FROM {adapter}')
if projector:
  print(f'FROM {projector}')
print(f'TEMPLATE {quote(template)}')
if system:
  print(f'SYSTEM {quote(system)}')
if params:
  for k in params.keys():
    # list-valued params expand to one PARAMETER line per value
    for v in params[k] if isinstance(params[k], list) else [params[k]]:
      print(f'PARAMETER {k} {quote(v)}')
for license in licenses:
  print(f'LICENSE {quote(license)}')
print()
```
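One hedged follow-up note: the FROM line in the generated output points at a blob path under /root/.ollama, which won't exist on an air-gapped machine, so point it at your imported GGUF before running `ollama create`. A small illustrative helper (the names `Modelfile` and `my-model.gguf` are assumptions, not fixed conventions):

```python
# Illustrative helper: rewrite the first FROM line of a generated Modelfile
# to point at a locally imported GGUF. Only the first FROM is rewritten, so
# any adapter/projector FROM lines are left untouched.
import re

text = open("Modelfile").read()
text = re.sub(r"^FROM .*$", "FROM ./my-model.gguf", text, count=1, flags=re.M)
open("Modelfile.offline", "w").write(text)
```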

@pdevine commented on GitHub (Mar 12, 2025):

I'm going to go ahead and close the issue as answered.


@chigkim commented on GitHub (Feb 15, 2026):

Thanks @rick-github for the script!

That said, it would be an awesome feature for Ollama to show the modelfile for models that are not downloaded.

I often import finetuned safetensors, and I have to download the entire model from Ollama just to get the modelfile.

Given how sensitive models can be to even the slightest template changes, I have to pull the entire model again to re-import it if the template gets updated.

Usually I delete the original models to save space. It's not a big deal with models under 10B, but it's pretty annoying to redownload bigger models just to get the updated modelfile.

Thanks!


@rick-github commented on GitHub (Feb 15, 2026):

The components of the Modelfile are available in the model card. Go to the model card of the model whose Modelfile you want and look at the "Details" table; click the appropriate element (template, params, license) to examine it. The template can be used in a Modelfile with `TEMPLATE """<template>"""`, and the params need to be expanded into lines of the format `PARAMETER <parameter-name> <value>`.

![Details table on the model card](https://github.com/user-attachments/assets/7ada4eeb-3ec6-4b91-a8d4-5b5e36098c36)
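As a minimal end-to-end sketch of that assembly (the template, params, and GGUF path below are placeholders, not any particular model's):

```python
# Minimal sketch: assemble a Modelfile from a template and params copied off
# a model card. The template, params, and GGUF path are placeholders.
template = "{{ if .System }}{{ .System }}\n{{ end }}{{ .Prompt }}"
params = {"stop": ["<|user|>", "<|assistant|>"], "temperature": 0.6}

lines = ["FROM ./my-model.gguf", f'TEMPLATE """{template}"""']
for name, value in params.items():
    for v in value if isinstance(value, list) else [value]:
        lines.append(f"PARAMETER {name} {v}")

with open("Modelfile", "w") as f:
    f.write("\n".join(lines) + "\n")
```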

@chigkim commented on GitHub (Feb 15, 2026):

Yes, I think the feature request is about convenience, not because there is no workaround.
It would be annoying to look up and manually rebuild the modelfile after every update, and the provided script could break if the website changes.
I mean, one of the primary premises of Ollama is convenience anyway. :)
Thanks!


Reference: github-starred/ollama#6289