[GH-ISSUE #3738] Error starting llama3 - external llama server: Executable not found in $PATH #2303

Closed
opened 2026-04-12 12:35:17 -05:00 by GiteaMirror · 28 comments

Originally created by @NasonZ on GitHub (Apr 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3738

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Description
When attempting to run the llama3:instruct model using the ollama run command, I encountered an error indicating that the executable ollama_llama_server could not be found in the $PATH.

Steps to Reproduce

  • Updated via curl -fsSL https://ollama.com/install.sh | sh
  • Pulled the llama3:instruct model using ollama pull llama3:instruct.
  • Verified successful pull with ollama list, which showed the model as available.
  • Attempted to run the model using ollama run llama3:instruct.
  • Received an error message.
me@me-MS-7C56:~/ollama/models/meta/llama3/7b_instruct$ ollama pull llama3:instruct
pulling manifest 
pulling 00e1317cbf74... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB                         
pulling 4fa551d4f938... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████▏  12 KB                         
pulling 8ab4849b038c... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████▏  254 B                         
pulling c0aac7c7f00d... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████▏  128 B                         
pulling db46ef36ef0b... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████▏  483 B                         
verifying sha256 digest 
writing manifest 
removing any unused layers 
success 


me@me-MS-7C56:~/ollama/models/meta/llama3/7b_instruct$ ollama list
NAME                     ID              SIZE      MODIFIED       
llama3:instruct          71a106a91016    4.7 GB    49 seconds ago    
mistral-7b-pro:latest    27ebf620ae7f    7.7 GB    4 weeks ago  

me@me-MS-7C56:~/ollama/models/meta/llama3/7b_instruct$ ollama run llama3:instruct
Error: error starting the external llama server: exec: "ollama_llama_server": executable file not found in $PATH 

Environment
Operating System: Ubuntu 22.04
ollama version: 0.1.32

Any ideas how to fix this issue?
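
In case it helps with diagnosis, here's a rough sketch of the checks I can run (assuming the standard systemd install from the script; the numeric suffix on the temp directory changes on every service start):

```
# Confirm which ollama binary is on PATH and its version
which ollama
ollama --version

# The server log records where the runner payloads were extracted
journalctl -u ollama --no-pager | grep -i "Extracting dynamic libraries"

# Check whether the extracted runners (including ollama_llama_server)
# actually exist; the ollama<digits> directory name changes on each start
ls /tmp/ollama*/runners/
```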

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.1.32

GiteaMirror added the bug and macos labels 2026-04-12 12:35:17 -05:00

@alfredsam-nbfc commented on GitHub (Apr 19, 2024):

same here. any solution?

@dhiltgen commented on GitHub (Apr 19, 2024):

Can you share your server log?

@NasonZ commented on GitHub (Apr 19, 2024):

@alfredsam-nbfc

fixed by reinstalling via curl -fsSL https://ollama.com/install.sh | sh .
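
For anyone else hitting this, the whole sequence was roughly:

```
# Re-run the official install script, which replaces the server binary
# and its bundled runner payloads
curl -fsSL https://ollama.com/install.sh | sh

# Verify the new install, then try the model again
ollama --version
ollama run llama3:instruct
```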

@NasonZ commented on GitHub (Apr 19, 2024):

> Can you share your server log?

Sure

$ journalctl -u ollama
Mar 15 14:55:03 me-MS-7C56 systemd[1]: Started Ollama Service.
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: Your new public key is:
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: ssh-edXXXmro+/SC+7DMXXXGB
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:03.727Z level=INFO source=images.go:806 msg="total blobs: 0"
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:03.727Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:03.727Z level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:03.729Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2055899656/runners ..."
Mar 15 14:55:06 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:06.931Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx cuda_v11 cpu_avx2 rocm_v60000 cpu]"
Mar 15 14:55:06 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:06.931Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
Mar 15 14:55:06 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:06.931Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
Mar 15 14:55:07 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:07.106Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.535.161.07]"
Mar 15 14:55:07 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:07.119Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
Mar 15 14:55:07 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:07.119Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Mar 15 14:55:07 me-MS-7C56 ollama[1355023]: time=2024-03-15T14:55:07.124Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 8.6"
Mar 15 16:04:31 me-MS-7C56 ollama[1355023]: [GIN] 2024/03/15 - 16:04:31 | 200 |    9.220005ms |       127.0.0.1 | HEAD     "/"
Mar 15 16:04:31 me-MS-7C56 ollama[1355023]: [GIN] 2024/03/15 - 16:04:31 | 200 |  107.027478ms |       127.0.0.1 | GET      "/api/tags"
Mar 15 16:33:40 me-MS-7C56 ollama[1355023]: [GIN] 2024/03/15 - 16:33:40 | 200 |       37.21µs |       127.0.0.1 | HEAD     "/"
Mar 15 16:33:41 me-MS-7C56 ollama[1355023]: [GIN] 2024/03/15 - 16:33:41 | 200 |   60.647614ms |       127.0.0.1 | GET      "/api/tags"

as I mentioned to @dhiltgen, reinstalling fixed the issue.

Let me know if you need anything further.

@asif-kaleem commented on GitHub (Apr 23, 2024):

Facing similar problem on Mac m3
Error: error starting the external llama server: fork/exec /var/folders/64/w9tycd0x3mv243jr9zb1glg00000gn/T/ollama97142749/runners/metal/ollama_llama_server: no such file or directory

@marcondesmacaneiro commented on GitHub (Apr 23, 2024):

> Facing similar problem on Mac m3
> Error: error starting the external llama server: fork/exec /var/folders/64/w9tycd0x3mv243jr9zb1glg00000gn/T/ollama97142749/runners/metal/ollama_llama_server: no such file or directory

For me, reinstalling Ollama fixed it.

@dhiltgen commented on GitHub (Apr 23, 2024):

@asif-kaleem I think that's a different problem. The MacOS tmp cleaner removed the file out from underneath us. It should self-correct on the next model load. Are you seeing it get stuck in this state and no longer work?
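
If you want to confirm that's what happened, the runners live under the per-user temp directory (that's the /var/folders/.../T path in the error), so something like this shows whether the directory was swept away; the ollama<digits> name differs per machine and per run:

```
# On macOS, $TMPDIR points into /var/folders/<xx>/<yyy>/T/
echo "$TMPDIR"

# If the tmp cleaner removed the extracted runners, this is empty or
# missing until the next model load re-extracts them
ls -la "$TMPDIR"/ollama*/runners/ 2>/dev/null
```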

@dhiltgen commented on GitHub (Apr 23, 2024):

Quick update - @asif-kaleem I have a PR up that should resolve the problem you noticed - #3846

@dhiltgen commented on GitHub (Apr 24, 2024):

I believe this should be resolved in the next release with #3846 and #3850 merged.

@Hasan-Z commented on GitHub (Apr 28, 2024):

Same here on Windows 11 Pro
time=2024-04-28T20:50:09.953+03:00 level=ERROR source=server.go:285 msg="unable to load any llama server" error="error starting the external llama server: exec: \"ollama_llama_server.exe\": executable file not found in %PATH% "

I reinstalled "ollama" again and problem solved. It looks like the installer (for windows at least) is closing or not copying required files, when this error happened, I searched for the file and it looks like it was not installed on my machine, after I reinstalled again this file existed so the problem solved.

@dhiltgen commented on GitHub (Apr 28, 2024):

The pre-release for 0.1.33 is available now, which should resolve these exe missing problems on windows.

@mimunoz11 commented on GitHub (Apr 28, 2024):

On macOS Sonoma, a reinstall fixed the issue.

@bsdnet commented on GitHub (May 3, 2024):

Same here. It happened on Debian.

Error: error starting the external llama server: exec: "ollama_llama_server": executable file not found in $PATH

The workaround is: sudo systemctl restart ollama
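
For reference, the full restart-and-check sequence on a systemd install is roughly:

```
# Restart the service, then confirm it is running and check recent logs
sudo systemctl restart ollama
systemctl status ollama --no-pager
journalctl -u ollama -n 50 --no-pager
```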

@dhiltgen commented on GitHub (May 3, 2024):

@bsdnet if you upgrade to 0.1.33 this defect should be resolved and no longer require restarting the service to work around it. If you still see this persisting, please let us know.

@bsdnet commented on GitHub (May 4, 2024):

Thank you. Let me give it a try and get back to you if I still see the same issue

@SharinganAi commented on GitHub (May 6, 2024):

On Sonoma 14.1.1, there was an update pending in Ollama; after updating, this started working.

@didlawowo commented on GitHub (May 11, 2024):

I have the same problem on Mac.

Reinstalling doesn't change anything; please reopen.

@dhiltgen commented on GitHub (May 11, 2024):

@didlawowo can you share your server log?

@Bangkokian commented on GitHub (May 13, 2024):

Same issue, MacOS

Error: error starting the external llama server: fork/exec /var/folders/hs/ ... /T/ollama501987923/runners/metal/ollama_llama_server: no such file or directory

@JoshInLisbon commented on GitHub (May 13, 2024):

On a Mac (running Sonoma), I opened activity monitor and closed all Ollama related processes (there were 3). I then started the model up again and it worked fine. (No reinstall needed.)
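
A rough terminal equivalent, if you prefer (the pattern matches anything with "ollama" in its command line, so it catches the menu bar app, the server, and any runner processes):

```
# List every Ollama-related process
pgrep -fil ollama

# Stop them all, then relaunch the app / rerun the model
pkill -f -i ollama
```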

@jshbmllr commented on GitHub (May 13, 2024):

> @bsdnet if you upgrade to 0.1.33 this defect should be resolved and no longer require restarting the service to work around it. If you still see this persisting, please let us know.

I ran into this issue with 0.1.34 on Linux (RHEL 8) and a 'systemctl reset ollama' fixed it. Let me know if creating an issue or sharing logs is helpful; the server is on an air-gapped system so a little more effort than throwing them in this comment is required.

@ig0r commented on GitHub (May 13, 2024):

@JoshInLisbon Thanks! Closing Ollama in the menu bar did help.

@dhiltgen commented on GitHub (May 13, 2024):

@Bangkokian what version of Ollama were you running? This should have been resolved in 0.1.33 and newer, but maybe there's some other corner case I missed.

@jshbmllr can you share your server log for the failure?

Restarting the app or quitting the menu item shouldn't be necessary (see https://github.com/ollama/ollama/blob/main/llm/server.go#L267-L276), so if folks are still having to do that to work around this then there's still a bug here somewhere I need to fix.

@jshbmllr commented on GitHub (May 14, 2024):

@dhiltgen Disregard! This was indeed a version issue; I was on 0.1.32. I'm transitioning from a metal install to a container deploy and got my numbers crossed. Sorry!

@T8354HQ commented on GitHub (May 15, 2024):

ollama run llama3
Error: error starting the external llama server: fork/exec /var/folders/d1/6g8w22ms0874r1qjd314m8yc0000gn/T/ollama3689840524/runners/metal/ollama_llama_server: no such file or directory

This happened on macOS when giving the command ollama run llama3.
The environment is Python 3.11, which was just set up for running LLMs locally.

@dhiltgen commented on GitHub (May 15, 2024):

@T8354HQ which version are you running? This should be fixed in 0.1.33.
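
If you're not sure, the CLI reports it:

```
# Prints the installed Ollama version
ollama --version
```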

@ig0r commented on GitHub (May 15, 2024):

@dhiltgen I can confirm that upgrading 0.1.32 -> 0.1.37 resolved the problem on macOS Sonoma 14.5

@dhiltgen commented on GitHub (May 15, 2024):

I'm going to go ahead and re-close this, as it does in fact look like it's resolved.

Reference: github-starred/ollama#2303