[GH-ISSUE #8466] FR: Meaningful names of models in models/blobs dir #5447

Closed
opened 2026-04-12 16:40:47 -05:00 by GiteaMirror · 11 comments

Originally created by @vt-alt on GitHub (Jan 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8466

Please make models have meaningful filenames (like user/modelname-quantization.gguf) in the models/blobs directory, so they can be used more easily with other model inference software.

Currently they all have opaque, similar-looking names like `sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730`.
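Even without renaming, a blob can be traced back to a human-readable model name by scanning the manifests, since each manifest records the digests of its layers. A minimal sketch, assuming a standard install layout and `jq` on the PATH (the helper name `list_blob_owners` is hypothetical):

```shell
# Print "<blob filename>  <model manifest path>" for every layer of every
# installed model, so a sha256-... blob can be matched to a model name.
list_blob_owners() {
  local models=${OLLAMA_MODELS:-~/.ollama/models}
  find "$models/manifests" -type f 2>/dev/null | while read -r manifest; do
    # the manifest path relative to manifests/ is the model name + tag
    jq -r --arg m "${manifest#"$models/manifests/"}" \
      '.layers[] | (.digest | sub(":"; "-")) + "  " + $m' "$manifest"
  done
}
# usage: list_blob_owners | grep 2bada8a7
```

Each output line pairs a blob filename (colon replaced with a dash, as in the blobs directory) with the manifest it belongs to.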

GiteaMirror added the feature request label 2026-04-12 16:40:47 -05:00

@rick-github commented on GitHub (Jan 17, 2025):

```sh
#!/bin/bash

die() {
  echo "$1"
  exit 1
}

! PARSED=$(getopt --options=n --longoptions=dryrun --name "$0" -- "$@")
[[ ${PIPESTATUS[0]} -ne 0 ]] && die "Parsing failed"
eval set -- "$PARSED"

DRYRUN=

while true; do
  case "$1" in
    -n|--dryrun)
      DRYRUN=echo
      shift
      ;;
    --)
      shift
      break
      ;;
    *)
      die "Parsing failed"
      ;;
  esac
done

OLLAMA_MODELS=${OLLAMA_MODELS-~ollama/.ollama/models}
LMSTUDIO_MODELS=${LMSTUDIO_MODELS-~/.lmstudio/models}

command -v jq >/dev/null || die "Need jq"
cd "$OLLAMA_MODELS" || die "Couldn't cd to ollama model directory"
[ ! -d manifests -o ! -d blobs ] && die "Manifests or blobs not found"

DEST="$LMSTUDIO_MODELS/ollama"
mkdir -p "$DEST" || die "Couldn't mkdir $DEST"


link_model() {
  model="$1"
  # fully qualify the model
  registry=registry.ollama.ai
  library=library
  name="${model%:*}" ; name="${name##*/}"
  [[ "$model" = */*/* ]] && registry="${model%%/*}"
  [[ "$model" = */* ]] && { library="${model%/*}" ; library="${library#*/}" ; }
  [[ "$model" = *:* ]] && tag="${model##*:}" || tag=latest
  manifest="manifests/$registry/$library/$name/$tag"
  # must already exist in ollama repo
  [ ! -f "$manifest" ] && die "$model not found in ollama repo"
  # must not already exist in lmstudio repo
  LMDIR="$DEST/$registry/$library/$name-$tag"
  [ -d "$LMDIR" ] && die "$model already exists in lmstudio repo"
  $DRYRUN mkdir -p "$LMDIR" || die "Couldn't mkdir '$LMDIR'"

  layers=$(jq -r '.layers[]|select(.mediaType|test("application/vnd.ollama.image.(model|projector)"))|.digest' "$manifest")
  [ -z "$layers" ] && die "No GGUF layers found in $model"

  for layer in $(jq -r '.layers[]|select(.mediaType=="application/vnd.ollama.image.model")|.digest' "$manifest") ; do
    digest=${layer/:/-}
    $DRYRUN ln -s "$OLLAMA_MODELS/blobs/$digest" "$LMDIR/$digest.gguf" || die "Failed to symlink $digest to $LMDIR/$digest.gguf"
  done

  for layer in $(jq -r '.layers[]|select(.mediaType=="application/vnd.ollama.image.projector")|.digest' "$manifest") ; do
    digest=${layer/:/-}
    $DRYRUN ln -s "$OLLAMA_MODELS/blobs/$digest" "$LMDIR/mmproj-$digest.gguf" || die "Failed to symlink $digest to $LMDIR/mmproj-$digest.gguf"
  done
}

[ -z "$*" ] && die "usage: $0 [-n|--dryrun] modelname [modelname ...]"

for model in "$@" ; do
  link_model "$model"
done
```
```console
$ find ~/.lmstudio/models/ollama/
/home/rick/.lmstudio/models/ollama/
/home/rick/.lmstudio/models/ollama/registry.ollama.ai
/home/rick/.lmstudio/models/ollama/registry.ollama.ai/library
/home/rick/.lmstudio/models/ollama/registry.ollama.ai/library/qwen3.5-latest
/home/rick/.lmstudio/models/ollama/registry.ollama.ai/library/qwen3.5-latest/sha256-dec52a44569a2a25341c4e4d3fee25846eed4f6f0b936278e3a3c900bb99d37c.gguf
/home/rick/.lmstudio/models/ollama/hf.co
/home/rick/.lmstudio/models/ollama/hf.co/bartowski
/home/rick/.lmstudio/models/ollama/hf.co/bartowski/google_gemma-4-26B-A4B-it-GGUF-Q4_0
/home/rick/.lmstudio/models/ollama/hf.co/bartowski/google_gemma-4-26B-A4B-it-GGUF-Q4_0/sha256-7607b930a9a9250173c593df3d60ca7e271d109c2392173ff9f1f30780cfc7fa.gguf
/home/rick/.lmstudio/models/ollama/hf.co/bartowski/google_gemma-4-26B-A4B-it-GGUF-Q4_0/mmproj-sha256-41cdabd1e8066e983ee6c288eb0117777376223ee0279cadcd67b2295e4d975f.gguf
```

@pdevine commented on GitHub (Jan 18, 2025):

Hey @vt-alt. Thanks for the issue. The blobs are content addressable, which means that if two or more models share the same content, they share the same blob. That saves disk space and means you don't have to pull down data you already have. The format is also changing with the new engine, though, so it probably won't be useful with other tools unless they support the new format.

@rick-github 's script seems like a nice workaround for now.
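The deduplication described above is visible from the manifests themselves: two manifests that reference the same digest point at a single file in blobs/. A sketch that lists such shared blobs, assuming the standard layout and `jq` (the helper name `shared_blobs` is hypothetical):

```shell
# Print every layer digest that is referenced by more than one manifest,
# i.e. the blobs that content addressing lets multiple models share.
shared_blobs() {
  local models=${OLLAMA_MODELS:-~/.ollama/models}
  find "$models/manifests" -type f 2>/dev/null -exec jq -r '.layers[].digest' {} + \
    | sort | uniq -d   # uniq -d keeps only duplicated digests
}
```

On a typical install this surfaces shared config or template layers, and, where two tags wrap the same weights, the model blob itself.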


@vt-alt commented on GitHub (Jan 29, 2025):

jfyi https://www.reddit.com/r/LocalLLaMA/comments/1icta5y/why_do_people_like_ollama_more_than_lm_studio/
Ironically, some people consider ollama's "proprietary format" a downside (in comparison to actually proprietary software). I think this may be because of the obscurity of the blob storage.


@pdevine commented on GitHub (Jan 29, 2025):

@vt-alt I hear you, but ultimately I think people are going to be really happy w/ much faster downloads and model loading times as well as less disk space used. Some people of course will be disappointed that it isn't in the flavour that they want.


@rick-github commented on GitHub (Jan 29, 2025):

While I personally have no problems with blob storage, I fail to see how the name of the file affects download speed or disk usage. The blob-sharing feature could be preserved while naming the actual GGUF files after the model, with some collision-avoidance logic.
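A minimal sketch of what such collision avoidance could look like: prefer the friendly model name, and only fall back to appending a short digest suffix when a different file already holds that name. The helper `pick_name` and its naming scheme are assumptions for illustration, not anything Ollama implements.

```shell
# pick_name DIR NAME DIGEST -> echo a filename for the blob inside DIR.
# Uses "NAME.gguf" when free, else "NAME-<first 12 hex of digest>.gguf".
pick_name() {
  local dir=$1 name=$2 digest=$3
  if [ ! -e "$dir/$name.gguf" ]; then
    printf '%s\n' "$name.gguf"
  else
    # digest looks like "sha256-<hex>"; chars 8-19 are the first 12 hex digits
    printf '%s\n' "$name-$(printf '%s' "$digest" | cut -c8-19).gguf"
  fi
}
```

Content addressing is still available underneath: the file's identity can be checked against its digest regardless of the display name.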

And on the subject of faster downloads, is the ollama team aware of the Cloudflare CDN degradation (https://github.com/ollama/ollama/issues/8632) and the stalling problem (https://github.com/ollama/ollama/issues/8535)? Dozens of bugs have been filed over the last few weeks, and users are passing around scripts, workarounds, and torrent links to compensate, with no acknowledgement AFAIK from team members.


@pdevine commented on GitHub (Jan 29, 2025):

@rick-github sorry, my comment was pretty subtle. We're planning to get the blob size down below the Cloudflare CDN limits so that we can distribute the models more effectively. You get faster downloads because you're not forced to pull from our R2 bucket in the eastern US, but can instead pull from something closer in your own region (this should work with most tensors, except for some from non-quantized models). You use less disk space because many of the quantizations share the same tensors.

Also, thanks for pointing out the other issues. I'll comment in there.


@sbgraphic commented on GitHub (Feb 24, 2026):

Hello, I am a bit late on this thread and I wonder if @rick-github's script still works with the latest versions of Ollama and LM Studio? (macOS setup)

I tested the script and it has created the gguf folder as expected; it's great! However, LM Studio cannot see the models "as installed" in the app. Is this the expected behavior or not?

PS: just found this: https://github.com/garciaba79/llama-model-manager


@rick-github commented on GitHub (Feb 25, 2026):

> However, LM Studio cannot see the models "as installed" in the app. Is this the expected behavior or not?

I'm not a regular LM Studio user, let me check and I'll update the script if required.


@anburocky3 commented on GitHub (Apr 5, 2026):

I was trying LM Studio and couldn't find the models!


@vt-alt commented on GitHub (Apr 6, 2026):

Maybe there could be an additional directory, automatically maintained by the ollama server, that projects meaningful file names as symlinks to the blobs?
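Such a projection can be sketched in a few lines of shell today, run manually or from cron. Everything here (the function name `project_names`, the flattened naming scheme) is an assumption for illustration, not an Ollama feature; it assumes the standard manifest layout and `jq`.

```shell
# project_names DIR: for each installed model, symlink its model layer
# into DIR under a human-readable name derived from the manifest path,
# e.g. registry.ollama.ai_library_llama3_latest.gguf -> blobs/sha256-...
project_names() {
  local models=${OLLAMA_MODELS:-~/.ollama/models} out=${1:?usage: project_names DIR}
  local manifest rel name digest
  find "$models/manifests" -type f 2>/dev/null | while read -r manifest; do
    rel=${manifest#"$models/manifests/"}   # registry/namespace/model/tag
    name=${rel//\//_}                      # flatten path to one filename
    digest=$(jq -r 'first(.layers[] | select(.mediaType=="application/vnd.ollama.image.model") | .digest)' "$manifest")
    [ -n "$digest" ] && [ "$digest" != "null" ] || continue
    mkdir -p "$out" || return 1
    ln -sf "$models/blobs/${digest/:/-}" "$out/$name.gguf"
  done
}
```

Because the links are regenerated idempotently (`ln -sf`), re-running after a pull or delete keeps the projection current; a server-maintained version would presumably hook model create/delete events instead.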


@rick-github commented on GitHub (Apr 6, 2026):

The script has been updated to work with LM Studio 0.4.9-1. Now works for models with projectors.

Reference: github-starred/ollama#5447