[GH-ISSUE #3921] Copying quantized models doesn't work #2431

Closed
opened 2026-04-12 12:44:42 -05:00 by GiteaMirror · 10 comments

Originally created by @saul-jb on GitHub (Apr 25, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3921

Originally assigned to: @mxyng on GitHub.

What is the issue?

I've just built the latest version via Docker (5f73c08729) and am getting errors when copying some models:

$ ollama cp llama3:8b-instruct-q5_K_M llama3-8b-1
Error: model "llama3:8b-instruct-q5_K_M" not found

$ ollama cp llama3 llama3-8b-1
Error: model "llama3" not found

$ ollama cp yarn-llama2:13b-128k-q5_K_M test
Error: model "yarn-llama2:13b-128k-q5_K_M" not found

I have these models installed:

$ ollama list
...
llama3:70b-instruct              	be39eb53a197	39 GB 	46 hours ago	
llama3:8b-instruct-q5_K_M        	fdc4ae3d5d42	5.7 GB	21 hours ago
yarn-llama2:13b-128k-q5_K_M      	6c618202668d	9.2 GB	5 weeks ago	
llava:latest                     	8dd30f6b0cb1	4.7 GB	16 hours ago	

Other models seem to work:

$ ollama cp llava test
copied 'llava' to 'test'

$ ollama cp llava:latest test
copied 'llava:latest' to 'test'

I was hoping #3713 would address it, but the issue remains.

OS

Linux

GPU

AMD

CPU

Intel

Ollama version

0.1.32-42-g5f73c08-dirty

GiteaMirror added the bug label 2026-04-12 12:44:42 -05:00

@saul-jb commented on GitHub (Apr 25, 2024):

This issue does not seem to occur on the latest stable release: 0.1.32


@mxyng commented on GitHub (May 1, 2024):

Hi @saul-jb, I'm not able to reproduce this on the commit you mentioned nor can I reproduce this with latest main.

$ ollama ls llama3
NAME                            ID              SIZE    MODIFIED
llama3:8b-instruct-q5_K_M       fdc4ae3d5d42    5.7 GB  42 seconds ago
$ ollama cp llama3:8b-instruct-q5_K_M llama3-8b-1
copied 'llama3:8b-instruct-q5_K_M' to 'llama3-8b-1'
$ ollama ls llama3
NAME                            ID              SIZE    MODIFIED
llama3:8b-instruct-q5_K_M       fdc4ae3d5d42    5.7 GB  2 minutes ago
llama3-8b-1:latest              fdc4ae3d5d42    5.7 GB  2 seconds ago

Serve logs would help greatly in finding the root cause.


@saul-jb commented on GitHub (May 2, 2024):

> Hi @saul-jb, I'm not able to reproduce this on the commit you mentioned nor can I reproduce this with latest main.

I've just tested this again with 0.1.33-rc7, but the issue still persists...

> Serve logs would help greatly in finding the root cause.

Environment="OLLAMA_DEBUG=1"

$ journalctl -u ollama.service | tail -n 50
... (Normal startup logs)
May 02 21:00:46 remote ollama[6243]: [GIN] 2024/05/02 - 21:00:46 | 200 |        53.1µs |       127.0.0.1 | HEAD     "/"
May 02 21:00:46 remote ollama[6243]: [GIN] 2024/05/02 - 21:00:46 | 404 |     320.378µs |       127.0.0.1 | POST     "/api/copy"

Is there any way to increase the verbosity of these logs?

The logs show a 404. Is it possible something isn't being escaped properly by the cp command on Linux? Or maybe the format has changed and broken things because I'm running a pre-release?
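One way to narrow this down is to hit the copy endpoint directly and bypass the CLI's name handling; a quick check assuming the server is listening on the default localhost:11434, using the same model names as above:

```shell
# Call /api/copy directly with the same names the CLI would send.
# Assumes the server is on the default localhost:11434; prints the HTTP status.
payload='{"source":"llama3:8b-instruct-q5_K_M","destination":"test"}'
curl -s -o /dev/null -w '%{http_code}\n' \
  http://localhost:11434/api/copy -d "$payload" || true
```

If this also returns 404, the problem is on the server side rather than in how the CLI transmits the name.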

Running the model works:

$ ollama run llama3:8b-instruct-q5_K_M
...

I've also tried:

$ ollama rm llama3:8b-instruct-q5_K_M
deleted 'llama3:8b-instruct-q5_K_M'

$ ollama pull llama3:8b-instruct-q5_K_M
...
success

$ ollama cp llama3:8b-instruct-q5_K_M test
Error: model "llama3:8b-instruct-q5_K_M" not found

@mxyng commented on GitHub (May 2, 2024):

> is it possible something isn't being escaped properly on the cp command in linux?

Nothing should need escaping. Also, the error is printed using the input name, so the input is being received correctly.

> maybe the format has changed and broken stuff due to running on a pre-release?

I don't recall any format changes. The filepath resolution was changed, but it shouldn't fail intermittently.

can you locate the manifest file on disk? It should be $OLLAMA_MODELS/manifests/registry.ollama.ai/library/llama3/8b-instruct-q5_K_M where OLLAMA_MODELS is ~/.ollama/models unless it was overridden.

can you paste the contents of $OLLAMA_MODELS/manifests/registry.ollama.ai/library/llama3/ and a model you can copy?
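For reference, the mapping from a model name to its manifest path can be sketched in shell (a sketch assuming the default registry.ollama.ai registry and library namespace):

```shell
# Map a model name like "llama3:8b-instruct-q5_K_M" to its manifest path.
# Assumes the default registry (registry.ollama.ai) and library namespace.
models_dir="${OLLAMA_MODELS:-$HOME/.ollama/models}"
model="llama3:8b-instruct-q5_K_M"
case "$model" in
  *:*) name="${model%%:*}"; tag="${model#*:}" ;;  # split on the first colon
  *)   name="$model"; tag="latest" ;;             # bare names default to :latest
esac
echo "$models_dir/manifests/registry.ollama.ai/library/$name/$tag"
```

If the file at that path exists and is readable by the server's user, the copy should find the model.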


@saul-jb commented on GitHub (May 2, 2024):

> can you locate the manifest file on disk?

The model directory seems to be located here for some reason: /usr/share/ollama/.ollama/models
I haven't overridden the settings.

$ ls /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama3
70b-instruct  8b-instruct-q5_K_M
(70b-instruct works but 8b-instruct-q5_K_M doesn't)

$ ls /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama2
70b-chat  70b-chat-q5_K_S  70b-text  latest
(All work other than 70b-chat-q5_K_S)

$ ls /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llava
latest
(All work)

$ ls /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/falcon
40b-instruct-q8_0
(All work)

The permissions are the same across manifests that do work and those that don't.

$ cat /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama3/70b-instruct | jq

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:fa25612cd8dc0a94a33b5fde51be9bf4b012a0a1659c349323fceb745c705096",
    "size": 484
  },
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.model",
      "digest": "sha256:4fe022a8902336d3c452c88f7aca5590f5b5b02ccfd06320fdefab02412e1f0b",
      "size": 39969732000
    },
    {
      "mediaType": "application/vnd.ollama.image.license",
      "digest": "sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f",
      "size": 12403
    },
    {
      "mediaType": "application/vnd.ollama.image.template",
      "digest": "sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f",
      "size": 254
    },
    {
      "mediaType": "application/vnd.ollama.image.params",
      "digest": "sha256:577073ffcc6ce95b9981eacc77d1039568639e5638e83044994560d9ef82ce1b",
      "size": 110
    }
  ]
}

$ cat /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama3/8b-instruct-q5_K_M | jq

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:da95b32d78798a2a5fa7acd617ba7ea5792e6a0cb968e2f763e7163f8dbdf218",
    "size": 485
  },
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.model",
      "digest": "sha256:ba49635fdfcf1eb4e97aaa1d549e4eff003169bf3325a9317710c1d586380ebe",
      "size": 5732987072
    },
    {
      "mediaType": "application/vnd.ollama.image.license",
      "digest": "sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f",
      "size": 12403
    },
    {
      "mediaType": "application/vnd.ollama.image.template",
      "digest": "sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f",
      "size": 254
    },
    {
      "mediaType": "application/vnd.ollama.image.params",
      "digest": "sha256:577073ffcc6ce95b9981eacc77d1039568639e5638e83044994560d9ef82ce1b",
      "size": 110
    }
  ]
}

@saul-jb commented on GitHub (May 2, 2024):

The ~/.ollama/models directory only contains blobs.


@mxyng commented on GitHub (May 2, 2024):

If you're on Linux, it'd be /usr/share/ollama/.ollama/models. In Docker, it should be /root/.ollama/models.


@jmorganca commented on GitHub (May 9, 2024):

Hi @saul-jb, great to see you :). I can't seem to reproduce this, but let us know if it's still not fixed and we can re-open and fix it.


@mxyng commented on GitHub (May 9, 2024):

https://github.com/ollama/ollama/pull/4261 (rather, an incarnation of that bug) is ~~likely~~ the culprit
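For context, the failure pattern in this thread (quantization tags like q5_K_M affected, tags like latest fine) is consistent with a name-validation pattern that is too strict about case. A hypothetical illustration, not the actual Ollama code:

```shell
# Hypothetical illustration, not the actual Ollama code: a tag pattern that
# only allows lowercase characters would reject quantization tags such as
# q5_K_M while still accepting tags like "latest".
strict='^[a-z0-9._-]+$'
for tag in latest 8b-instruct-q5_K_M; do
  if printf '%s\n' "$tag" | grep -qE "$strict"; then
    echo "$tag: accepted"
  else
    echo "$tag: rejected"
  fi
done
```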


@saul-jb commented on GitHub (May 14, 2024):

> #4261 (rather, an incarnation of that bug) is ~~likely~~ the culprit

Can confirm that this is no longer an issue as of 0.1.37.
Thanks for looking into this.

Reference: github-starred/ollama#2431