[GH-ISSUE #2763] ollama pull - Error: unexpected end of JSON input #48177

Closed
opened 2026-04-28 07:02:53 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @shuther on GitHub (Feb 26, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2763

Originally assigned to: @bmizerany on GitHub.

While I had already pulled llama2:7b, I wanted to install llama2 (without the 7b tag). My understanding was that it is exactly the same model (same hash), so perhaps ollama would only install the metadata file?

Anyway, I am getting an error message:

```
ollama ls
NAME                ID            SIZE    MODIFIED
codellama:latest    8fdf8f752f6e  3.8 GB  3 days ago
dolphin-phi:latest  c5761fc77240  1.6 GB  3 days ago
llama2:7b           78e26419b446  3.8 GB  2 days ago
llava:7b            8dd30f6b0cb1  4.7 GB  3 days ago
mistral:latest      61e88e884507  4.1 GB  5 weeks ago
phi:latest          e2fd6321a5fe  1.6 GB  5 weeks ago
```

```
ollama pull llama2
Error: unexpected end of JSON input
```

I am not sure how to generate more logs.


@hunt-47 commented on GitHub (Feb 26, 2024):

The same thing happened to me when I tried Ollama on a Raspberry Pi 5.
When I ran "ollama run phi", the Pi automatically rebooted due to lack of power.
Then, when I ran the command again, the error happened.

Just pull another version of the same model, e.g. for llama2:

```
ollama run llama2:7b
```


@AlyamanMas commented on GitHub (Mar 12, 2024):

I have the same issue. I tried running different versions of the model, even running a completely different model, and even deleting the ollama directory where models are stored, all to no avail. Running ollama version 0.1.28 through nixpkgs unstable. The models tried are mistral and mistral-openorca.


@AlyamanMas commented on GitHub (Mar 15, 2024):

Any activity here? This is preventing me from using Ollama.


@Wgelyjr commented on GitHub (May 26, 2024):

> The same thing happened to me when I tried Ollama on a Raspberry Pi 5. When I ran "ollama run phi", the Pi automatically rebooted due to lack of power. Then, when I ran the command again, the error happened.
>
> Just pull another version of the same model, e.g. for llama2:
>
> ollama run llama2:7b

Experiencing exactly the same here. Even attempting to remove the model returns the JSON error. There doesn't seem to be much I can do...


@calyptis commented on GitHub (Jun 17, 2024):

Ran into this error when `ollama pull` failed due to insufficient disk space; any subsequent retry raised the JSON error.

Solution: `rm -rf /usr/share/ollama/.ollama/models/*`

I suppose incomplete model files were stored when the pull failed.
For other operating systems, consult [this page](https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored).


@peter-hartmann-emrsn commented on GitHub (Jun 18, 2024):

Simply delete the manifest of that model only; there is no need to delete all models.

  • on macOS: `rm -rf ~/.ollama/models/manifests/registry.ollama.ai/library/llama2/`
  • on Ubuntu: `rm -rf /usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/llama2/`
  • on Snap: `rm -rf /var/snap/ollama/common/models/manifests/registry.ollama.ai/library/llama2/`

Use `sudo` if needed, then simply run `ollama pull llama2` again.
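If you are unsure which model's manifest is corrupted, a small script can check each manifest file for valid JSON before deleting anything. This is a hedged sketch, not part of Ollama itself; the manifest root below assumes the default user-level location and may differ on your install (see the paths above).

```python
import json
import os

# Assumed default manifest location; adjust for Ubuntu or Snap installs.
MANIFEST_ROOT = os.path.expanduser("~/.ollama/models/manifests")

def find_corrupt_manifests(root):
    """Return paths of manifest files that fail to parse as JSON."""
    corrupt = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8") as f:
                    json.load(f)
            except (json.JSONDecodeError, UnicodeDecodeError):
                corrupt.append(path)
    return corrupt

if __name__ == "__main__":
    for path in find_corrupt_manifests(MANIFEST_ROOT):
        print(path)  # delete only these, then re-run `ollama pull`
```

A truncated manifest (the "short write" case discussed below in the thread) shows up here as a JSONDecodeError, so you can remove just the broken file instead of wiping the whole models directory.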


@pdevine commented on GitHub (Jul 18, 2024):

I think what's happening here is that the manifest (for whatever reason) didn't get written correctly after the pull. We verify each of the other blobs, but if the manifest gets a short write it will end up causing this issue.
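The short-write failure mode described here is commonly avoided with a write-to-temp-then-rename pattern, so a partially written manifest never lands at the final path. A minimal sketch of that pattern in Python (this is an illustration of the general technique, not Ollama's actual implementation):

```python
import json
import os
import tempfile

def write_manifest_atomically(path, manifest):
    """Write JSON to a temp file in the same directory, fsync it, then
    atomically rename over the target so readers never see a short write."""
    dirpath = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirpath, prefix=".manifest-")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(manifest, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, path)  # atomic on POSIX: all-or-nothing at `path`
    except BaseException:
        os.unlink(tmp)  # clean up the temp file if anything failed
        raise
```

If the process dies mid-write, only the hidden temp file is truncated; the manifest at the final path is either the old version or the complete new one.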


@bmizerany commented on GitHub (Mar 4, 2025):

Closing due to age.

Reference: github-starred/ollama#48177