[GH-ISSUE #6157] always "Error: something went wrong, please see the ollama server logs for details" but no useful info in service log #3845

Closed
opened 2026-04-12 14:41:02 -05:00 by GiteaMirror · 3 comments

Originally created by @EachSheep on GitHub (Aug 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6157

What is the issue?

I encountered an insufficient storage space error while downloading llama3.1:70b-instruct-fp16. To resolve this, I backed up the files from /usr/share/ollama/.ollama/models to another drive with more space, located at /users/shared/ollama/.ollama/models, and configured /etc/systemd/system/ollama.service.d/override_ollama.service as follows:

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=5m"
Environment="OLLAMA_MODELS=/users/shared/ollama/.ollama/models"

Then I deleted the original /usr/share/ollama/ directory, recreated it, and changed its owner and group with sudo chown ollama:ollama /usr/share/ollama. After that, I reloaded the unit files and restarted the service with sudo systemctl daemon-reload and sudo systemctl restart ollama.
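
In other words, the migration amounted to roughly the following (a reconstruction of the steps above, not a verbatim history):

# move the model store to the drive with more space
sudo mkdir -p /users/shared/ollama/.ollama
sudo mv /usr/share/ollama/.ollama/models /users/shared/ollama/.ollama/

# remove and re-create the old directory, owned by the ollama user
sudo rm -rf /usr/share/ollama
sudo mkdir /usr/share/ollama
sudo chown ollama:ollama /usr/share/ollama

# apply the drop-in and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama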

However, running ollama list (and every other ollama command) consistently failed with Error: something went wrong, please see the ollama server logs for details.

I checked the ollama logs using sudo journalctl -u ollama.service > ollama_logs.txt. The logs after restarting the service were:

Aug 04 07:24:50 node2 systemd[1]: Stopping Ollama Service...
Aug 04 07:24:50 node2 systemd[1]: ollama.service: Succeeded.
Aug 04 07:24:50 node2 systemd[1]: Stopped Ollama Service.
Aug 04 07:24:50 node2 systemd[1]: Started Ollama Service.
Aug 04 07:24:50 node2 ollama[78522]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Aug 04 07:24:50 node2 ollama[78522]: Your new public key is:
Aug 04 07:24:50 node2 ollama[78522]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8ECjLBExZOAz8FOGXuADTif9I8RIatZmmI11P2TzCh
Aug 04 07:24:50 node2 ollama[78522]: 2024/08/04 07:24:50 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/users/shared/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:781 msg="total blobs: 14"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama873184855/runners

There are no other error messages. This issue has been troubling me for half a day, and I am quite frustrated. I hope you can help me look into it.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.3

GiteaMirror added the bug label 2026-04-12 14:41:02 -05:00

@EachSheep commented on GitHub (Aug 4, 2024):

New log is below:

Aug 04 07:22:15 node2 systemd[1]: Stopped Ollama Service.
Aug 04 07:22:15 node2 systemd[1]: Started Ollama Service.
Aug 04 07:22:15 node2 ollama[76193]: 2024/08/04 07:22:15 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/users/shared/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 04 07:22:15 node2 ollama[76193]: time=2024-08-04T07:22:15.153Z level=INFO source=images.go:781 msg="total blobs: 14"
Aug 04 07:22:15 node2 ollama[76193]: time=2024-08-04T07:22:15.153Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
Aug 04 07:22:15 node2 ollama[76193]: time=2024-08-04T07:22:15.154Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
Aug 04 07:22:15 node2 ollama[76193]: time=2024-08-04T07:22:15.154Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2152502762/runners
Aug 04 07:22:20 node2 ollama[76193]: time=2024-08-04T07:22:20.148Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
Aug 04 07:22:20 node2 ollama[76193]: time=2024-08-04T07:22:20.148Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
Aug 04 07:22:20 node2 ollama[76193]: time=2024-08-04T07:22:20.940Z level=INFO source=types.go:105 msg="inference compute" id=GPU-8daf251b-5abe-a014-5693-2158b0b56116 library=cuda compute=8.0 driver=12.5 name="NVIDIA A100-PCIE-40GB" total="39.5 GiB" available="39.1 GiB"
Aug 04 07:22:20 node2 ollama[76193]: time=2024-08-04T07:22:20.940Z level=INFO source=types.go:105 msg="inference compute" id=GPU-da045543-85a8-23ab-e23a-4fc7809ede22 library=cuda compute=8.0 driver=12.5 name="NVIDIA A100-PCIE-40GB" total="39.5 GiB" available="39.1 GiB"
Aug 04 07:24:50 node2 systemd[1]: Stopping Ollama Service...
Aug 04 07:24:50 node2 systemd[1]: ollama.service: Succeeded.
Aug 04 07:24:50 node2 systemd[1]: Stopped Ollama Service.
Aug 04 07:24:50 node2 systemd[1]: Started Ollama Service.
Aug 04 07:24:50 node2 ollama[78522]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Aug 04 07:24:50 node2 ollama[78522]: Your new public key is:
Aug 04 07:24:50 node2 ollama[78522]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8ECjLBExZOAz8FOGXuADTif9I8RIatZmmI11P2TzCh
Aug 04 07:24:50 node2 ollama[78522]: 2024/08/04 07:24:50 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/users/shared/ollama/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:781 msg="total blobs: 14"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.954Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.3)"
Aug 04 07:24:50 node2 ollama[78522]: time=2024-08-04T07:24:50.955Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama873184855/runners
Aug 04 07:24:55 node2 ollama[78522]: time=2024-08-04T07:24:55.902Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60102]"
Aug 04 07:24:55 node2 ollama[78522]: time=2024-08-04T07:24:55.902Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
Aug 04 07:24:56 node2 ollama[78522]: time=2024-08-04T07:24:56.704Z level=INFO source=types.go:105 msg="inference compute" id=GPU-8daf251b-5abe-a014-5693-2158b0b56116 library=cuda compute=8.0 driver=12.5 name="NVIDIA A100-PCIE-40GB" total="39.5 GiB" available="39.1 GiB"
Aug 04 07:24:56 node2 ollama[78522]: time=2024-08-04T07:24:56.704Z level=INFO source=types.go:105 msg="inference compute" id=GPU-da045543-85a8-23ab-e23a-4fc7809ede22 library=cuda compute=8.0 driver=12.5 name="NVIDIA A100-PCIE-40GB" total="39.5 GiB" available="39.1 GiB"

@rick-github commented on GitHub (Aug 4, 2024):

OLLAMA_MODELS just changes the model directory; the server still uses the original /usr/share/ollama as the ollama user's home directory, which is where it looks for the private key. You can either re-create that directory with appropriate permissions, add Environment="HOME=/users/shared/ollama/" to the service file, or run usermod -d /users/shared/ollama ollama.
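
Spelled out, the three options look roughly like this (a sketch, assuming the drop-in file from the issue; pick one):

# Option 1: re-create the old home directory with the right ownership
sudo mkdir -p /usr/share/ollama/.ollama
sudo chown -R ollama:ollama /usr/share/ollama
sudo systemctl restart ollama

# Option 2: point HOME at the new location by adding this to the drop-in,
# then reload and restart:
#   [Service]
#   Environment="HOME=/users/shared/ollama/"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Option 3: change the ollama user's home directory
sudo usermod -d /users/shared/ollama ollama
sudo systemctl restart ollama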


@EachSheep commented on GitHub (Aug 6, 2024):

Thanks for your reply! I just recreated all the files and everything works fine!
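
For anyone who lands here with the same error, a quick sanity check after applying one of the fixes above (a sketch; adjust the path if you moved HOME elsewhere):

sudo ls -l /usr/share/ollama/.ollama/   # id_ed25519 should exist and be owned by ollama
ollama list                             # should print the model table instead of the error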
