[GH-ISSUE #8423] save with OLLAMA_MODELS set doesn't work anymore in 0.5.5 #67468

Open
opened 2026-05-04 10:27:53 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @sammyf on GitHub (Jan 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8423

What is the issue?

On Arch Linux (with the latest updates), Ollama 0.5.5.
It worked with the prior version just a few hours ago.

$ ollama run llama3.2-abliterated:1b_Q8
>>> /set parameter num_ctx 8192
Set parameter 'num_ctx' to '8192'
>>> /save llama3.2-abliterated:1b_Q8_8k
error: The model name 'llama3.2-abliterated:1b_Q8_8k' is invalid
>>> Send a message (/? for help)


Environment :

Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_FLASH_ATTENTION=1"
#Environment="OLLAMA_KV_CACHE_TYPE=q4_0"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_MODELS=/media/GLIMSPANKY/ollama/models"

Locale (just in case) : LANG="en_US.UTF-8"

I removed the user and group 'ollama' and reinstalled, but that didn't change the output.

Removing the OLLAMA_MODELS environment variable fixes it (but then the models obviously go to the wrong drive).

Symlinking /usr/share/ollama/.ollama/models to another target directory results in the same error message.

pull, run and create work fine.

EDIT:
adding a slash at the end of the path, like this: Environment="OLLAMA_MODELS=/media/GLIMSPANKY/ollama/models/" didn't help either (but it was worth a try)
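One more thing worth checking when /save misbehaves with a custom OLLAMA_MODELS path is whether the directory actually exists and is writable by the user running the server. A rough diagnostic sketch (not an official ollama tool; check_models_dir is a hypothetical helper name):

```shell
#!/bin/sh
# Report whether the models directory exists and is writable by the
# current user, since /save has to write manifest files under it.
check_models_dir() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "missing: $dir"
    elif [ -w "$dir" ]; then
        echo "writable: $dir"
    else
        echo "not writable: $dir (check the owner with: ls -ld \"$dir\")"
    fi
}

# Fall back to the default Linux install location when the variable is unset.
check_models_dir "${OLLAMA_MODELS:-/usr/share/ollama/.ollama/models}"
```

Note that on a systemd install the check has to be done as the service user (e.g. sudo -u ollama), not as your login user.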

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.5

GiteaMirror added the bug label 2026-05-04 10:27:53 -05:00
Author
Owner

@pdevine commented on GitHub (Jan 15, 2025):

@sammyf Sorry about this. It should be fixed now in 0.5.6 when it releases later today.


@belfie13 commented on GitHub (Jan 19, 2025):

0.5.7 macos still won't save

>>> /save test
error: The model name 'test' is invalid

@GuiAmPm commented on GitHub (Jan 24, 2025):

0.5.7 fails to save on ArchLinux as well


@Panican-Whyasker commented on GitHub (Jan 29, 2025):

@pdevine I'm sorry, but the issue persists in Ollama v0.5.7, on a Windows machine as well. Can you please re-open this issue?

Same here with Ollama 0.5.7 on Windows Server 2016.

OLLAMA_MODELS set to a folder on the F: drive. ~10 TB free space.

PS F:\SysAdmin\AI> ollama run --verbose --keepalive 3h deepseek-r1:671b

>>> /set parameter num_thread 36
Set parameter 'num_thread' to '36'
>>> /save deepseek-r1:671b-36t
error: The model name 'deepseek-r1:671b-36t' is invalid


@pdevine commented on GitHub (Jan 29, 2025):

@GuiAmPm @Panican-Whyasker can you both try running ollama -v? I just want to verify that both the server and client are up to date. If they are out of sync with each other you'll run into this problem.
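The mismatch check can be done without guessing: the CLI reports its own version via ollama -v, and the running server reports its version via the /api/version endpoint (JSON like {"version":"0.5.7"}, fetchable with curl -s http://localhost:11434/api/version). A minimal sketch of the comparison itself (versions_match is a hypothetical helper; plug in the two strings obtained from those commands):

```shell
#!/bin/sh
# Compare a client version string against a server version string and
# report whether they are in sync.
versions_match() {
    if [ "$1" = "$2" ]; then
        echo "in sync"
    else
        echo "mismatch: client=$1 server=$2"
    fi
}

# Example with literal strings; in practice substitute the output of
# `ollama -v` and of the /api/version endpoint.
versions_match "0.5.7" "0.5.7"
```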


@GuiAmPm commented on GitHub (Jan 29, 2025):

@pdevine, Note that it fixed itself after I ran ollama create my_model -f ./custom_model.
Running /save afterwards worked, though now I'm facing another issue unrelated to this⁰.

I can't reproduce the original error anymore.

Here's the output from running ollama -v:
ollama version is 0.5.7

It was installed using sudo pacman -S ollama.

⁰ The /save command created a new model from the base model (llama3), not custom_model.


@pdevine commented on GitHub (Jan 29, 2025):

@GuiAmPm I think your client/server were just mismatched before, but glad it's working now. Can you post some more details of the other issue that you're seeing? I just want to make sure that I'm understanding it.


@GuiAmPm commented on GitHub (Jan 29, 2025):

If that's the case, it's strange: it was a fresh installation of ollama on a system that hadn't had it installed before. Only 0.5.7 was ever installed, via the pacman command, and I didn't run the install again afterwards.

About the second problem: I just created a model from a file with a custom system message, generated some text, then tried to save the state. The saved state had the original model's system message and no history.

I will try to reproduce the issue with the same model and inputs and open a new issue.


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

@GuiAmPm @Panican-Whyasker can you both try running ollama -v? I just want to verify that both the server and client are up to date. If they are out of sync with each other you'll run into this problem.

PS F:\SysAdmin\AI> ollama -v
ollama version is 0.5.7


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

Note that it fixed itself after I ran ollama create my_model -f ./custom_model. Running /save afterwards worked...
...

I was unable to repeat that (successfully).
If I run that from my working directory, I get:

PS F:\SysAdmin\AI> ollama create deepseek-ri:671b-36t -f .\deepseek-r1
gathering model components
Error: no Modelfile or safetensors files found

If I run that from the \Ollama\models\blobs directory, I get:

PS F:\SysAdmin\AI\Ollama\models\blobs> ollama create deepseek-r1:671b-36t -f .\deepseek-r1
gathering model components
Error: no Modelfile or safetensors files found

If I run it from \Ollama\models\manifests\registry.ollama.ai\library, I get:

PS F:\SysAdmin\AI\Ollama\models\manifests\registry.ollama.ai\library> ollama create deepseek-r1:671b-36t -f .\deepseek-r1

Error: read F:\SysAdmin\AI\Ollama\models\manifests\registry.ollama.ai\library\deepseek-r1: Incorrect function.


@pdevine commented on GitHub (Jan 30, 2025):

@Panican-Whyasker can you paste the contents of deepseek-r1? I'm assuming that's your modelfile?


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

@pdevine not certain what you mean by "contents" - here's the file 671b residing in

\Ollama\models\manifests\registry.ollama.ai\library\deepseek-r1:

{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:fdf3d6cb73c79fca34a6ad2f703ba908972c4c92f1ff977c35a0f1134e0b25a8","size":497},"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9","size":404430186432,"from":"/home/ollama/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9"},{"mediaType":"application/vnd.ollama.image.template","digest":"sha256:369ca498f347f710d068cbb38bf0b8692dd3fa30f30ca2ff755e211c94768150","size":387},{"mediaType":"application/vnd.ollama.image.license","digest":"sha256:6e4c38e1172f42fdbff13edf9a7a017679fb82b0fde415a3e8b3c31c6ed4a4e4","size":1065},{"mediaType":"application/vnd.ollama.image.params","digest":"sha256:f4d24e9138dd4603380add165d2b0d970bef471fac194b436ebd50e6147c6588","size":148}]}


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

@pdevine another note: I was able to /save a smaller (~50 GB) model on the same machine (after /set parameter num_thread 36, since it is a NUMA server with 4 Xeons). The model that fails to /save is ~400 GB that otherwise works decently on that machine (after /set parameter num_thread 36) - inference at ~2 tokens/sec and eval at ~1 token/sec.


@ryant00000 commented on GitHub (Jan 30, 2025):

> ollama run deepseek-r1:671b-q8_0 --verbose
>>> /set parameter min_p .2
Set parameter 'min_p' to '.2'
>>> /set parameter temperature .6
Set parameter 'temperature' to '.6'
>>> /save testing
error: The model name 'testing' is invalid
>>> /save deepseek-r1:671b-q8_0
error: The model name 'deepseek-r1:671b-q8_0' is invalid
>>> /bye

> ollama -v
ollama version is 0.5.7

This is the default deepseek-r1:671b-q8_0. I can also confirm this works with smaller models, not sure where the size cutoff is. Server logs seem not helpful, no actual info in them.


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

@ryant00000 the 671b-q4_K_M variant here.


@pdevine commented on GitHub (Jan 30, 2025):

@ryant00000 I'll try to duplicate that. I don't think it's the size, but at least this gives me something to go on.


@GuiAmPm commented on GitHub (Jan 30, 2025):

@pdevine
I removed ollama from my system, deleted its folders, and started from zero.

➜  ~ sudo pacman -Sy ollama
// ...
➜  ~ sudo systemctl start ollama                         
➜  ~ ollama run artifish/llama3.2-uncensored
>>> /save test
error: The model name 'test' is invalid
>>> /bye
➜  ~ sudo journalctl -S "1 min ago"
// ...
runner started in 1.01 seconds"
Jan 30 18:38:07 sera ollama[19363]: [GIN] 2025/01/30 - 18:38:07 | 200 |  1.359925011s |       127.0.0.1 | POST     "/api/generate"
Jan 30 18:38:16 sera ollama[19363]: [GIN] 2025/01/30 - 18:38:16 | 200 |       334.8µs |       127.0.0.1 | POST     "/api/create"

@GuiAmPm commented on GitHub (Jan 30, 2025):

And after running ollama create model -f ./modelfile the bug stops happening. But then the other bug happens.

https://github.com/ollama/ollama/issues/8701


@GuiAmPm commented on GitHub (Jan 30, 2025):

A guess on my end: Could it be that the /save is not calling mkdir and the directory is not created, while after calling the create model functionality the path is created and /save works again?
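If that hypothesis holds, pre-creating the manifest tree would be a workaround. A rough sketch of the idea (not an official tool; whether /save actually skips the mkdir is unverified, and ensure_manifest_dir is a hypothetical helper). The path layout below matches the manifest location shown earlier in this thread, manifests/registry.ollama.ai/library/<name> under the models directory:

```shell
#!/bin/sh
# Create the manifest directory that a /save of "<name>" would need to
# write into, mirroring the layout used for pulled library models, and
# print the resulting path.
ensure_manifest_dir() {
    base="$1"
    name="$2"
    mkdir -p "$base/manifests/registry.ollama.ai/library/$name" || return 1
    echo "$base/manifests/registry.ollama.ai/library/$name"
}

# Demo against a throwaway directory; in practice $base would be the
# value of OLLAMA_MODELS.
demo=$(mktemp -d)
ensure_manifest_dir "$demo" "mymodel"
rm -rf "$demo"
```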


@Panican-Whyasker commented on GitHub (Jan 30, 2025):

@pdevine no idea if this is related to the inability to /save a (huge) model, but with the same 671b-q4_K_M model I started getting a sudden error (in the middle of, or near the end of, the LLM's answer), with the ollama runner crashing, and it happens quite often:

Error: an error was encountered while running the model: read tcp 127.0.0.1:58122->127.0.0.1:52358: wsarecv: An existing connection was forcibly closed by the remote host.

Error: an error was encountered while running the model: read tcp 127.0.0.1:60865->127.0.0.1:58956: wsarecv: An existing connection was forcibly closed by the remote host.

The model is 404 GB and the NUMA server has 768 GB of RAM shared by 4 Xeon CPUs. So, the deepseek-r1:671b model runs on 400+ GB of RAM with one CPU at 100% but that CPU has faster access to only 192 GB and the rest it uses via the other CPUs' memory controllers (hence, slower bandwidth).


@tarbard commented on GitHub (Feb 5, 2025):

Fails for me too.

ollama version is 0.5.7

>>> /save blah
error: The model name 'blah' is invalid

I have OLLAMA_MODELS set. The OS is Linux, and the models are on a different drive.


@JPUnmanned commented on GitHub (Feb 20, 2025):

Fails for me as well using Docker in linux and Ollama version 0.5.11. Removing the environment variable causes the folder to get made and allows models to be saved, however, ollama will fail to read in the model after it is saved.


@duongnt027 commented on GitHub (Mar 3, 2025):

I'm experiencing issues saving custom models with Ollama version 0.5.12.

  1. Saving a model directly from my custom model fails:

    >>>ollama run <custom-model>
    >>>/save <new-custom-model>
    Error: "The model name '<new-custom-model>' is invalid"
    
  2. Saving from a base model (e.g., llama3.1) works:

    >>>ollama run llama3.1
    >>>/save <new-custom-model>
    Created new model '<new-custom-model>'
    
  3. However, even after using /set nohistory before saving, the new model still retains history. Is /set nohistory intended for a different purpose?

How can I create a model that doesn't use history?


@Panican-Whyasker commented on GitHub (Nov 3, 2025):

Update for Ollama 0.12.9
That still does not work.

>>> /save deepseek-r1:671b-36t
Error: pull model manifest: file does not exist

Reference: github-starred/ollama#67468