[GH-ISSUE #7015] Error Running Ollama After Installation #50957

Closed
opened 2026-04-28 17:41:21 -05:00 by GiteaMirror · 13 comments

Originally created by @cksdxz1007 on GitHub (Sep 28, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7015

Originally assigned to: @dhiltgen on GitHub.

### What is the issue?

After installing Ollama and attempting to run it, an error occurs. Upon checking the log file `~/.ollama/logs/server.log`, the following content is found:

```
Couldn't find '/Users/cynningli/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILE1HWt7ruohIwTV4yR9hiBi45VRf3Cs64ohZxX1ijUK
2024/09/28 10:45:25 routes.go:1153: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/cynningli/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: http_proxy: https_proxy: no_proxy:]"
time=2024-09-28T10:45:25.331+08:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-09-28T10:45:25.331+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-28T10:45:25.332+08:00 level=INFO source=routes.go:1200 msg="Listening on 127.0.0.1:11434 (version 0.3.12)"
time=2024-09-28T10:45:25.334+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/var/folders/n1/gc6_9bqx6nv7sglk0b1q38r00000gn/T/ollama711942109/runners
time=2024-09-28T10:45:25.376+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners=[metal]
time=2024-09-28T10:45:25.466+08:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="10.7 GiB" available="10.7 GiB"
/Users/cynningli/.ollama/logs/server.log (END)
```

### OS

macOS

### GPU

Apple

### CPU

Apple

### Ollama version

0.3.12

GiteaMirror added the networking and macos labels 2026-04-28 17:41:22 -05:00

@rick-github commented on GitHub (Sep 28, 2024):

There are no errors in this log. The key message is informational and can be ignored. What error message do you see?


@cksdxz1007 commented on GitHub (Sep 29, 2024):

When running the command `ollama run qwen2.5:7b`, the following error message appears:

```
❯ ollama run qwen2.5:7b
Error: something went wrong, please see the ollama server logs for details
```

Additionally, listing the .ollama directory shows the presence of the key files with correct permissions:

```
❯ ll .ollama/
total 16
-rw-------@ 1 cynningli  staff   387B  9 28 10:45 id_ed25519
-rw-r--r--@ 1 cynningli  staff    81B  9 28 10:45 id_ed25519.pub
drwxr-xr-x@ 3 cynningli  staff    96B  9 28 10:45 logs
drwxr-xr-x@ 3 cynningli  staff    96B  9 28 11:02 models
```

How can I solve this problem?


@rick-github commented on GitHub (Sep 29, 2024):

Post full [server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues).

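For reference, on macOS the full server log can be printed directly from a terminal, using the same path quoted in the original post:

```
# macOS: dump the Ollama server log
cat ~/.ollama/logs/server.log
```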

@cksdxz1007 commented on GitHub (Sep 29, 2024):

[server.log](https://github.com/user-attachments/files/17178945/server.log)


@rick-github commented on GitHub (Sep 29, 2024):

This is the same content as your original post and contains no errors. From your follow-up post, you pulled a model at 11:02, and that is not shown in the logs you posted, so perhaps you have posted an old log. If that is the full extent of the current logs, then something else is wrong - do you have free space on your disk?

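A quick way to check free space and the size of the Ollama data directory from a terminal, for example:

```
# free space on the volume holding the home directory
df -h ~

# disk usage of the Ollama data directory
du -sh ~/.ollama
```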

@cksdxz1007 commented on GitHub (Sep 29, 2024):

![image](https://github.com/user-attachments/assets/ded31630-6518-4479-8468-435ed325bae5)
![image](https://github.com/user-attachments/assets/05f5bef6-c7c3-4675-923e-c0bb11b87090)

I am certain that my computer has sufficient free space, and the log files are uploaded directly from ~/.ollama/logs/. However, I just noticed that after running `ollama run qwen2.5:7b` and encountering the error, the modification date of the log file did not change. As a result, after I tried deleting server.log and executing `ollama run qwen2.5:7b` again, no new log file was generated under .ollama/logs.

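One way to restart the macOS app from a terminal so it reopens its log file, assuming the app is installed under its default name (Ollama):

```
# quit the menu-bar app cleanly, then relaunch it
osascript -e 'tell application "Ollama" to quit'
open -a Ollama
```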

@cksdxz1007 commented on GitHub (Sep 29, 2024):

After restarting the service, there are new log files now. Please check.
[server.log](https://github.com/user-attachments/files/17179060/server.log)


@rick-github commented on GitHub (Sep 29, 2024):

This log has the same content as your first post, but with different timestamps. So it seems that the server stops logging some time after it executes the code at line 107 in types.go, but doesn't log a fatal error message, which indicates something went seriously wrong. A dead server would explain the `something went wrong` error message. Can you open two terminals, run `ollama serve` in the first and `ollama run qwen2.5:7b` in the second, and post whatever messages are shown in the first terminal? Judging from the logs so far, the server will fail before it gets to the point of accepting the `run` command, but it may generate an error message when it dies that could clarify the problem.

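Spelled out, the suggested two-terminal reproduction looks like this (if the desktop app is already running, it may need to be quit first so port 11434 is free):

```
# terminal 1: run the server in the foreground so any crash output
# appears on the console rather than only in the log file
ollama serve

# terminal 2: trigger the failure
ollama run qwen2.5:7b
```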

@cksdxz1007 commented on GitHub (Sep 29, 2024):

https://github.com/user-attachments/assets/b1b744d9-d682-4b2e-9615-c5037e65aedb

I am using a screen recording, which allows for a more intuitive demonstration. It seems that after running the command, there is no new message in the terminal on the left.


@rick-github commented on GitHub (Sep 29, 2024):

No failure, but I did notice something that would explain the problem. You have `http_proxy` and `https_proxy` set. This will prevent the ollama client from connecting to the ollama server, which would also result in a `something went wrong` error message. You can either unset `http_proxy` or set `no_proxy`:

```
export no_proxy="localhost,127.0.0.1"
```
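A short sketch of inspecting and clearing the proxy settings in the current shell:

```
# see which proxy variables are set
env | grep -i proxy

# option 1: unset them for this session
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# option 2: keep the proxy but exempt loopback traffic
export no_proxy="localhost,127.0.0.1"
```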

@cksdxz1007 commented on GitHub (Sep 29, 2024):

As expected, the error was caused by the http_proxy setting. After unsetting it and running `ollama run qwen2.5:7b`, Ollama has started downloading the model. Thank you!

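To keep the fix across sessions, the variable can be added to the shell profile; a minimal sketch, assuming zsh (the macOS default shell):

```
# some tools only honor the uppercase variant, so set both
echo 'export no_proxy="localhost,127.0.0.1"' >> ~/.zshrc
echo 'export NO_PROXY="localhost,127.0.0.1"' >> ~/.zshrc
```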

@IAM582 commented on GitHub (Jan 31, 2025):

On the same topic, mine shows this in the latest server.log:

```
2025/01/31 04:58:56 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\lungu\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-31T04:58:56.498+02:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-01-31T04:58:56.498+02:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-01-31T04:58:56.499+02:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-31T04:58:56.499+02:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-01-31T04:58:56.499+02:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-31T04:58:56.499+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-31T04:58:56.499+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-01-31T04:58:56.677+02:00 level=INFO source=gpu.go:334 msg="detected OS VRAM overhead" id=GPU-405e50f4-39e9-effe-4a37-b42b73d881a6 library=cuda compute=8.6 driver=12.5 name="NVIDIA GeForce RTX 3070 Laptop GPU" overhead="727.2 MiB"
time=2025-01-31T04:58:56.678+02:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-405e50f4-39e9-effe-4a37-b42b73d881a6 library=cuda variant=v12 compute=8.6 driver=12.5 name="NVIDIA GeForce RTX 3070 Laptop GPU" total="8.0 GiB" available="7.0 GiB"
```

It also doesn't start. I have followed the steps suggested above, but still nothing.


@rick-github commented on GitHub (Jan 31, 2025):

Nothing in the log shows an error. Open a new ticket.
