[GH-ISSUE #998] After upgrade to 0.1.8, models won't load #26245

Closed
opened 2026-04-22 02:20:59 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @lestan on GitHub (Nov 4, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/998

After updating to 0.1.8 from a fully functioning Ollama install where I was able to successfully run LLaMA 2, Mistral and Zephyr without issues on my Intel MacBook Pro, I am now getting an error:

Error: llama runner exited, you may not have enough available memory to run this model

I was in the middle of testing these 3 models when I noticed the Ollama icon show an update was available. Once it updated and restarted, everything stopped working and I kept receiving this error. I closed all other programs, rebooted my laptop and it didn't help.

Is there an easy way to revert back to 0.1.7?


@mchiang0610 commented on GitHub (Nov 4, 2023):

Hi @lestan, so sorry about this. Looking into it now. Would you be able to help me troubleshoot while I investigate?

For the models that are not running, could you check whether there is an update for them via the `ollama pull` command?
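For anyone hitting the same error, a minimal sketch of that re-pull step for the three models mentioned in this thread (the `echo` makes it a dry run; running the commands requires the `ollama` CLI on your PATH):

```shell
# Print a re-pull command for each model that failed to load after the upgrade;
# drop the echo (or pipe the output to sh) to actually run them.
for model in llama2 mistral zephyr; do
  echo "ollama pull $model"
done
```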

The manual download for the previous version is here

https://github.com/jmorganca/ollama/releases/tag/v0.1.7


@lestan commented on GitHub (Nov 4, 2023):

Hi @mchiang0610 thanks for the link.

I did try your suggestion when this happened. I pulled llama 2 again, but it didn't help.

I will try and revert to 0.1.7 and confirm that it's working again just to rule out other issues.

Les


@jmorganca commented on GitHub (Nov 4, 2023):

@lestan would it be possible to check the logs in `~/.ollama/logs/server.log` to see what error might be causing this? Sorry you hit this error; we are looking into it.
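A small sketch for surfacing relevant lines from that log (the path is the one mentioned above; the keyword pattern is only a guess at what is useful to grep for):

```shell
# Show recent Ollama server log lines that look related to crashes or memory,
# falling back to the last few lines when no keyword matches.
LOG="$HOME/.ollama/logs/server.log"
if [ -f "$LOG" ]; then
  tail -n 200 "$LOG" | grep -iE 'error|memory|exit' || tail -n 20 "$LOG"
else
  echo "no log file at $LOG"
fi
```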


@jmorganca commented on GitHub (Nov 4, 2023):

Also, could you share a bit more about your setup?

  • Which Mac (e.g. 2016 MacBook Pro)?
  • How much memory?
  • Which version of macOS?

@fastrocket commented on GitHub (Nov 6, 2023):

I run ollama on various boxes including a 2019 Intel MacBook Pro with 16GB, Ventura 13.3.1 (a). Ollama version 0.1.8 works for me.

```
% ollama --version
ollama version 0.1.8
% ollama run zephyr
>>> tell me a story
Once upon a time, in a far-off kingdom, there was a kind and just queen named Isabella. She
loved her people deeply and worked tirelessly to ensure their happiness and prosperity.
However, one day, a terrible curse fell upon the land. The once fertile fields turned
barren, the rivers ran dry, and the forests withered away.

The queen summoned her most trusted advisors and magicians to find a solution to this
crisis. They searched high and low but found no cure for the curse. Frustrated and
desperate, the queen decided to take matters into her own hands.
...
```

@lestan commented on GitHub (Nov 6, 2023):

Apologies for the delay in responding, and for what I'm about to say.

After a few days away from my project, I tried it this morning and everything seems to be working! 🤷‍♂️ I tried zephyr, llama2 and mistral and all worked fine.

@jmorganca - answers to your questions:
MacBook Pro (Retina, 15-inch, Mid 2015)
Processor: 2.2 GHz Quad-Core Intel Core i7
Memory: 16 GB 1600 MHz DDR3
macOS: Monterey 12.7

I also checked server.log and didn't see anything related to a crash, an error, or memory. Interestingly, I don't see any log entries for 2023/11/04, when this issue occurred; I do see entries for 2023/11/03 and 2023/11/06. I have been using Ollama since version 0.0.16 in August and haven't run into this before. I'm guessing it's an environment issue at this point.

Thank you all for your response. I love Ollama and appreciate what you all are doing. I'm excited for where Ollama is going!


@effndc commented on GitHub (Nov 6, 2023):

Did you install `ollama` with brew? If so, after updating, did you try `brew services restart ollama`?

Homebrew provides this as a hint:

```text
==> Caveats
To restart ollama after an upgrade:
  brew services restart ollama
Or, if you don't want/need a background service you can just run:
  /opt/homebrew/opt/ollama/bin/ollama serve
```

I just encountered this myself after doing some brew package updates, and this cleared it up.


@jmorganca commented on GitHub (Nov 7, 2023):

Fantastic! Glad it's working. I'll close this for now, and no need to apologize – please open an issue anytime!

Reference: github-starred/ollama#26245