[GH-ISSUE #10405] upgrading Ollama error #6837

Closed
opened 2026-04-12 18:38:31 -05:00 by GiteaMirror · 0 comments

Originally created by @MonsieurMa on GitHub (Apr 25, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10405

What is the issue?

I've encountered an issue after upgrading Ollama on my Windows 10 system. Here's the detailed situation:
Environment
Operating System: Windows 10
Ollama Version: 0.6.6 (after upgrade)
Problem Description
When I run the ollama list command, it successfully displays the available models. However, when I attempt to run a model with the ollama run command, it fails.
For example, when I run the following command:
ollama run deepseek-r1:14b
I receive the following error message:
Error: llama runner process has terminated: exit status 2
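Not part of the original report, but more detail on this error can usually be captured by enabling debug logging before reproducing; a minimal sketch for PowerShell, using the OLLAMA_DEBUG variable visible in the app-config dump below:

```shell
# PowerShell: quit the tray instance first (otherwise the server log below
# shows "Detected another instance of ollama running, exiting"), then run
# the server in the foreground with verbose output.
$env:OLLAMA_DEBUG = "1"
ollama serve

# In a second terminal, reproduce the failure:
ollama run deepseek-r1:14b
```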
Steps to Reproduce
1. Upgrade Ollama on Windows 10.
2. Run ollama list to confirm that the model is listed.
3. Try to run a model with ollama run <model_name> (see the consolidated snippet below).
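The steps condensed into one shell session (assuming Ollama 0.6.6 is on PATH and the model was pulled before the upgrade):

```shell
# Minimal reproduction on Windows 10 after upgrading to Ollama 0.6.6.
ollama list                 # succeeds: deepseek-r1:14b is listed
ollama run deepseek-r1:14b  # fails: "Error: llama runner process has terminated: exit status 2"
```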

Relevant log output

app log
time=2025-04-25T18:23:30.600+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-04-25T18:23:30.611+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-25T18:23:30.708+08:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-04-25T18:23:30.708+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-25T18:23:30.912+08:00 level=INFO source=server.go:127 msg="started ollama server with pid 6388"
time=2025-04-25T18:23:30.912+08:00 level=INFO source=server.go:129 msg="ollama server logs C:\\Users\\Administrator\\AppData\\Local\\Ollama\\server.log"
time=2025-04-25T18:27:19.570+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-04-25T18:27:19.570+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\Ollama\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-25T18:27:19.758+08:00 level=INFO source=lifecycle.go:72 msg="Detected another instance of ollama running, exiting"


server log (stack trace excerpt, truncated)
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:2137 +0x785 fp=0xc00004bfb8 sp=0xc00004bb98 pc=0x7ff6c0482565
net/http.(*Server).Serve.gowrap3()
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x28 fp=0xc00004bfe0 sp=0xc00004bfb8 pc=0x7ff6c0487cc8
runtime.goexit({})
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x7ff6c017d161
created by net/http.(*Server).Serve in goroutine 1
	C:/hostedtoolcache/windows/go/1.24.0/x64/src/net/http/server.go:3454 +0x485
rax     0x1
rbx     0x7ffb36e9a040
rcx     0x1
rdx     0x2dbf3ff0c8
rdi     0x2
rsi     0x1
rbp     0x2dbf3ff228
rsp     0x2dbf3ff078
r8      0x0
r9      0x7ffbddc289b8
r10     0x80
r11     0x13c7efcac00
r12     0x13c305bdd70
r13     0x2dbf3ff1f8
r14     0x2dbf3ff1d8
r15     0x2dbf3ff160
rip     0x7ff6c0faa432
rflags  0x10202
cs      0x33
fs      0x53
gs      0x2b
time=2025-04-25T18:27:45.127+08:00 level=ERROR source=sched.go:457 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
[GIN] 2025/04/25 - 18:27:45 | 500 |    1.0875474s |       127.0.0.1 | POST     "/api/generate"
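The GIN line above shows the failure surfacing as a 500 from POST /api/generate. Not something tried in the report, but one way to check whether the crash is in the GPU backend is to retry the same request with GPU offload disabled; a sketch assuming a POSIX-style shell (e.g. Git Bash) and the default 127.0.0.1:11434 host from the config dump:

```shell
# Hit the failing endpoint directly, forcing CPU-only inference.
# num_gpu is a standard request option; 0 disables GPU offload.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Hello",
  "options": { "num_gpu": 0 }
}'
```

If this succeeds while the default request crashes, the problem likely sits in the CUDA runner rather than in the model files themselves.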

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.6.6
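After an upgrade it is also worth confirming that the client and the running server agree on the version, since a stale pre-upgrade server can keep running:

```shell
ollama -v
# expected: ollama version is 0.6.6
# a client/server version mismatch is reported as a warning here
```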

GiteaMirror added the bug label 2026-04-12 18:38:31 -05:00

Reference: github-starred/ollama#6837