[GH-ISSUE #9936] Suddenly can't run Ollama serve - now it's working, pls ignore this #53015

Closed
opened 2026-04-29 01:39:52 -05:00 by GiteaMirror · 3 comments

Originally created by @gus147 on GitHub (Mar 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9936

Originally assigned to: @jmorganca on GitHub.

What is the issue?

```shell
[gus147@Clevo gusAI]$ ollama serve
2025/03/22 10:23:12 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/gus147/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-22T10:23:12.710+05:45 level=INFO source=images.go:432 msg="total blobs: 105"
time=2025-03-22T10:23:12.711+05:45 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-22T10:23:12.711+05:45 level=INFO source=routes.go:1297 msg="Listening on 127.0.0.1:11434 (version 0.6.2)"
time=2025-03-22T10:23:12.712+05:45 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-22T10:23:12.811+05:45 level=INFO source=types.go:130 msg="inference compute" id=GPU-caec3a31-0206-a53d-2803-06c6288efa81 library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090 Laptop GPU" total="15.7 GiB" available="14.9 GiB"
```

Please help, Ollama team. I didn't do anything to my computer, but suddenly the `ollama serve` command doesn't work, and even after updating to the latest Ollama version it still doesn't work.
Please help.
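For what it's worth, the log above shows the server starting and listening on 127.0.0.1:11434. A minimal way to confirm it is actually up, from another terminal (assuming `curl` is installed), is to query the HTTP API:

```shell
# Root endpoint replies "Ollama is running" when the server is up.
curl -s --max-time 2 http://127.0.0.1:11434/ || echo "server not reachable"
# /api/version reports the running server version as JSON.
curl -s --max-time 2 http://127.0.0.1:11434/api/version || true
```

If the root endpoint replies `Ollama is running`, the serve command is working even though it prints no prompt of its own.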

Relevant log output


OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the bug label 2026-04-29 01:39:52 -05:00
@jmorganca commented on GitHub (Mar 22, 2025):

Hi @gus147, sorry about this. Did you see an error? It looks like Ollama is running.

@gus147 commented on GitHub (Mar 22, 2025):

@jmorganca After I opened 127.0.0.1:11434, it worked on the command line. Thank you.

@jmorganca commented on GitHub (Mar 22, 2025):

@gus147 great to hear 😊


Reference: github-starred/ollama#53015