[GH-ISSUE #9543] Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed #52737

Closed
opened 2026-04-29 00:42:41 -05:00 by GiteaMirror · 1 comment

Originally created by @askie on GitHub (Mar 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9543

What is the issue?

After upgrading, Ollama cannot run any model:
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
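For context, here is a minimal, self-contained sketch (not the actual ggml source; the enum values and the `compute_unary` helper are illustrative stand-ins) of how ggml's `GGML_ASSERT` macro turns an unexpected op type into exactly this fatal message:

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified version of ggml's GGML_ASSERT: print the failed condition
 * and abort the process, which is what kills the llama runner here. */
#define GGML_ASSERT(x)                                          \
    do {                                                        \
        if (!(x)) {                                             \
            fprintf(stderr, "GGML_ASSERT(%s) failed\n", #x);    \
            abort();                                            \
        }                                                       \
    } while (0)

/* Illustrative stand-ins for ggml's op enum and tensor struct. */
enum ggml_op { GGML_OP_NONE, GGML_OP_ADD, GGML_OP_UNARY };
struct ggml_tensor { enum ggml_op op; };

/* A backend kernel for unary ops asserts that the graph node really is
 * a unary op; a mismatched ggml library can route the wrong node here. */
static void compute_unary(const struct ggml_tensor *tensor) {
    GGML_ASSERT(tensor->op == GGML_OP_UNARY);
    /* ... apply the unary function ... */
}

int main(void) {
    struct ggml_tensor bad = { GGML_OP_ADD };
    compute_unary(&bad); /* prints "GGML_ASSERT(...) failed" and aborts */
    return 0;
}
```

Because the assertion calls abort() inside the runner subprocess, Ollama can only report that the process terminated, which is the error shown above.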

Relevant log output

```shell
time=2025-03-06T17:23:10.238+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-03-06T17:23:10.253+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:-1 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:9259h15m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\zzz\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-06T17:23:10.269+08:00 level=DEBUG source=lifecycle.go:34 msg="starting callback loop"
time=2025-03-06T17:23:10.269+08:00 level=DEBUG source=store.go:60 msg="loaded existing store C:\\Users\\zzz\\AppData\\Local\\Ollama\\config.json - ID: 709fe49e-1d9e-40e4-9a4b-09c2588ec2d4"
time=2025-03-06T17:23:10.269+08:00 level=DEBUG source=lifecycle.go:68 msg="Not first time, skipping first run notification"
time=2025-03-06T17:23:10.270+08:00 level=DEBUG source=server.go:181 msg="heartbeat from server: Head \"http://0.0.0.0:11434/\": dial tcp 0.0.0.0:11434: connectex: No connection could be made because the target machine actively refused it."
time=2025-03-06T17:23:10.270+08:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-03-06T17:23:10.270+08:00 level=DEBUG source=eventloop.go:22 msg="starting event handling loop"
time=2025-03-06T17:23:10.270+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-03-06T17:23:10.283+08:00 level=INFO source=server.go:127 msg="started ollama server with pid 5864"
time=2025-03-06T17:23:10.283+08:00 level=INFO source=server.go:129 msg="ollama server logs C:\\Users\\zzz\\AppData\\Local\\Ollama\\server.log"
time=2025-03-06T17:23:13.272+08:00 level=DEBUG source=updater.go:74 msg="checking for available update" requestURL="https://ollama.com/api/update?arch=amd64&nonce=iTysJ1BQr5q-iM2fBldx0A&os=windows&ts=1741252993&version=0.5.13"
time=2025-03-06T17:23:13.980+08:00 level=DEBUG source=updater.go:83 msg="check update response 204 (current version is up to date)"
time=2025-03-06T17:23:46.895+08:00 level=DEBUG source=eventloop.go:145 msg="unmanaged app message, lParm: 0x204"
time=2025-03-06T17:23:47.744+08:00 level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\\Users\\zzz\\AppData\\Local\\Ollama"
```

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

0.5.13


@jmorganca commented on GitHub (Mar 7, 2025):

@askie This looks similar to https://github.com/ollama/ollama/issues/9149; I'm wondering if you had llama.cpp installed as well?
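If that hypothesis holds, the failure mode would be a DLL from a separate llama.cpp install, found earlier on PATH, shadowing the ggml libraries that Ollama ships, so the runner loads mismatched code. Below is a hypothetical, Windows-only diagnostic sketch in C that scans every directory on PATH for such DLLs; the file names in `dlls` are assumptions about what llama.cpp builds typically ship, not an exhaustive list:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <io.h> /* _access(), Windows-only */

/* Assumed DLL names from typical llama.cpp Windows builds. */
static const char *dlls[] = { "ggml.dll", "ggml-base.dll", "llama.dll" };

int main(void) {
    const char *path = getenv("PATH");
    if (!path) return 1;

    /* strtok() modifies its input, so work on a copy of PATH. */
    char *copy = _strdup(path);
    for (char *dir = strtok(copy, ";"); dir; dir = strtok(NULL, ";")) {
        for (size_t i = 0; i < sizeof(dlls) / sizeof(dlls[0]); i++) {
            char full[1024];
            snprintf(full, sizeof(full), "%s\\%s", dir, dlls[i]);
            if (_access(full, 0) == 0) { /* mode 0 = existence check */
                printf("found: %s\n", full); /* possible conflict */
            }
        }
    }
    free(copy);
    return 0;
}
```

Any hit outside Ollama's own install directory would be a candidate to remove from PATH (or uninstall) before retrying.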
