[GH-ISSUE #9288] When running DeepSeek with Ollama, the model crashes (2) #68112

Closed
opened 2026-05-04 12:34:04 -05:00 by GiteaMirror · 7 comments

Originally created by @mjdp168 on GitHub (Feb 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9288

What is the issue?

After the previous issue #9248, I followed the suggestions given there and used this Modelfile:

FROM deepseek-r1:671b
SYSTEM """nice!"""
PARAMETER num_ctx 16384
PARAMETER num_predict 4096
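
For reference, a minimal sketch of building and loading a custom model from a Modelfile like the one above (the model name `deepseek-r1-16k` and the Modelfile path are illustrative, not from the original report):

```shell
# Build a custom model from the Modelfile in the current directory
# (the name "deepseek-r1-16k" is hypothetical).
ollama create deepseek-r1-16k -f .\Modelfile

# Load the custom model and send a short prompt to start the runner.
ollama run deepseek-r1-16k "hello"
```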

However, the model crashes during the latter half of the loading process, displaying the error message:
Error: llama runner process has terminated: exit status 2.
Please help me resolve this issue.
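
One detail from the app log below that may help narrow down `exit status 2`: debug logging is off (`OLLAMA_DEBUG:false`). Enabling it before reproducing the crash typically makes the runner write more detail to the server log. A rough sketch, assuming the server is restarted from the same PowerShell session the variable is set in:

```shell
# Turn on verbose logging for this session (assumption: the Ollama
# server is restarted from this same session so it inherits the variable).
$env:OLLAMA_DEBUG = "1"

# Quit the tray app first so the port is free, then restart the server
# in the foreground and reproduce the crash.
ollama serve
```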

Relevant log output

time=2025-02-21T09:19:17.047+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-02-21T09:19:17.058+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\mj\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-21T09:19:17.111+08:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-02-21T09:19:17.112+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-02-21T09:19:17.119+08:00 level=INFO source=server.go:127 msg="started ollama server with pid 17736"
time=2025-02-21T09:19:17.119+08:00 level=INFO source=server.go:129 msg="ollama server logs C:\\Users\\mj\\AppData\\Local\\Ollama\\server.log"
time=2025-02-22T18:08:18.183+08:00 level=INFO source=lifecycle.go:89 msg="Waiting for ollama server to shutdown..."
time=2025-02-22T18:08:18.197+08:00 level=INFO source=server.go:158 msg="server shutdown with exit code 0"
time=2025-02-22T18:08:18.197+08:00 level=INFO source=lifecycle.go:93 msg="Ollama app exiting"

OS

Windows

GPU

No response

CPU

AMD

Ollama version

0.5.11

GiteaMirror added the bug label 2026-05-04 12:34:04 -05:00

@rick-github commented on GitHub (Feb 22, 2025):

Add server log.
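
For anyone following along on Windows, the server log is the file referenced in the app log above (`...\AppData\Local\Ollama\server.log`); its tail can be pulled with something like:

```shell
# Show the last 200 lines of the Ollama server log on Windows
# (path taken from the "ollama server logs" line in the app log).
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 200
```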


@mjdp168 commented on GitHub (Feb 22, 2025):

2025/02/22 18:11:02 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:2562047h47m16.854775807s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\mj\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-02-22T18:11:02.451+08:00 level=INFO source=images.go:432 msg="total blobs: 7"
time=2025-02-22T18:11:02.452+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-02-22T18:11:02.453+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
time=2025-02-22T18:11:02.453+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-02-22T18:11:02.453+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-02-22T18:11:02.453+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=64 efficiency=0 threads=64
time=2025-02-22T18:11:02.462+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-02-22T18:11:02.462+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="511.9 GiB" available="494.5 GiB"
[GIN] 2025/02/22 - 18:12:36 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/22 - 18:12:36 | 200 | 23.7311ms | 127.0.0.1 | POST "/api/create"


@mjdp168 commented on GitHub (Feb 22, 2025):

@rick-github, thanks.


@rick-github commented on GitHub (Feb 22, 2025):

Add server log from when the error occurs.
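
A simple way to capture exactly that window is to follow the log in one terminal while triggering the failing load in another, for example:

```shell
# Terminal 1: follow the server log as new lines are written.
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Wait -Tail 0

# Terminal 2: trigger the load that crashes
# (the custom model name here is hypothetical).
ollama run deepseek-r1-16k "hello"
```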


@mjdp168 commented on GitHub (Feb 22, 2025):

Thanks, that's the log from just now.


@rick-github commented on GitHub (Feb 22, 2025):

There are no errors in this log.
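
That matches the log snippet, which ends right after the `POST "/api/create"` request, so no model load appears in that window. Triggering a load (and capturing the log around it) should surface the runner failure; one hedged way to do that via the HTTP API:

```shell
# Trigger a model load through the generate API; any runner crash
# should then be logged by the server around this request.
Invoke-RestMethod -Method Post -Uri "http://localhost:11434/api/generate" `
  -ContentType "application/json" `
  -Body '{"model": "deepseek-r1:671b", "prompt": "hi", "stream": false}'
```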


@ENUMERA8OR commented on GitHub (Feb 26, 2025):

How many parameters does the model you are trying to run have? Also make sure nothing else is running when you load the model; other processes may consume the memory and compute it needs.
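
A quick way to check the model's size and quantization, and how much memory a load attempt actually takes, is the built-in inspection commands (output format may vary between Ollama versions):

```shell
# Show parameter count, quantization and other metadata for the base model.
ollama show deepseek-r1:671b

# After attempting a load, list loaded models and their memory usage.
ollama ps
```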

Reference: github-starred/ollama#68112