[GH-ISSUE #11418] Llama Runner process has terminated [MSTY] #69596

Closed
opened 2026-05-04 18:36:05 -05:00 by GiteaMirror · 2 comments

Originally created by @psi00 on GitHub (Jul 14, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11418

What is the issue?

I'm using Msty.app as the UI, which uses Ollama as its backend. Running any model results in a llama runner error:
llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed

Relevant log output

{"level":50,"time":1752516964568,"pid":10104,"hostname":"Ryzen7","msg":"2025/07/14 19:16:04 routes.go:1232: INFO server config env=\"map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:true OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:10000 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\\\Users\\\\James\\\\AppData\\\\Roaming\\\\Msty\\\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:3 OLLAMA_ORIGINS:[http://localhost http://127.0.0.1 http://0.0.0.0 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]\"\n"}
{"level":50,"time":1752516964570,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:04.569+01:00 level=INFO source=images.go:458 msg=\"total blobs: 10\"\n"}
{"level":50,"time":1752516964570,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:04.570+01:00 level=INFO source=images.go:465 msg=\"total unused blobs removed: 0\"\n"}
{"level":30,"time":1752516964571,"pid":10104,"hostname":"Ryzen7","msg":"[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.\n\n[GIN-debug] [WARNING] Running in \"debug\" mode. Switch to \"release\" mode in production.\n - using env:\texport GIN_MODE=release\n - using code:\tgin.SetMode(gin.ReleaseMode)\n\n"}
{"level":30,"time":1752516964571,"pid":10104,"hostname":"Ryzen7","msg":"[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)\n[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)\n[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)\n[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)\n[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)\n[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)\n[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)\n[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)\n[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)\n[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)\n[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)\n[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)\n[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)\n[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)\n[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)\n[GIN-debug] POST   /api/generate             --> 
github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)\n[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)\n[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)\n[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)\n[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)\n[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)\n[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)\n[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)\n[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)\n"}
{"level":50,"time":1752516964571,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:04.571+01:00 level=INFO source=routes.go:1299 msg=\"Listening on 127.0.0.1:10000 (version 0.6.6)\"\ntime=2025-07-14T19:16:04.571+01:00 level=INFO source=gpu.go:217 msg=\"looking for compatible GPUs\"\n"}
{"level":50,"time":1752516964571,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:04.571+01:00 level=INFO source=gpu_windows.go:167 msg=packages count=1\ntime=2025-07-14T19:16:04.571+01:00 level=INFO source=gpu_windows.go:214 msg=\"\" package=0 cores=8 efficiency=0 threads=16\n"}
{"level":50,"time":1752516964691,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:04.691+01:00 level=INFO source=types.go:130 msg=\"inference compute\" id=GPU-8e0431b3-3727-b8f6-f64e-9303bcf98020 library=cuda variant=v12 compute=8.6 driver=12.8 name=\"NVIDIA GeForce RTX 3060\" total=\"12.0 GiB\" available=\"11.0 GiB\"\n"}
{"level":30,"time":1752516964798,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:04 | 200 |            0s |       127.0.0.1 | GET      \"/\"\n"}
{"level":30,"time":1752516965114,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |      1.0282ms |       127.0.0.1 | GET      \"/api/tags\"\n"}
{"level":30,"time":1752516965590,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |            0s |       127.0.0.1 | GET      \"/\"\n"}
{"level":30,"time":1752516965707,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |            0s |       127.0.0.1 | GET      \"/\"\n"}
{"level":30,"time":1752516965742,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |            0s |       127.0.0.1 | GET      \"/\"\n"}
{"level":30,"time":1752516965836,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |            0s |       127.0.0.1 | GET      \"/\"\n"}
{"level":30,"time":1752516965839,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:05 | 200 |      1.5081ms |       127.0.0.1 | GET      \"/api/tags\"\n"}
{"level":50,"time":1752516971235,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.235+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=general.alignment default=32\n"}
{"level":50,"time":1752516971265,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.265+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=general.alignment default=32\n"}
{"level":50,"time":1752516971277,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.277+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=general.alignment default=32\n"}
{"level":50,"time":1752516971278,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.278+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.vision.block_count default=0\n"}
{"level":50,"time":1752516971301,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\ntime=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\n"}
{"level":50,"time":1752516971301,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.301+01:00 level=INFO source=sched.go:722 msg=\"new model will fit in available VRAM in single GPU, loading\" model=C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-8e0431b3-3727-b8f6-f64e-9303bcf98020 parallel=3 available=11478908928 required=\"1.8 GiB\"\n"}
{"level":50,"time":1752516971316,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.315+01:00 level=INFO source=server.go:105 msg=\"system memory\" total=\"15.9 GiB\" free=\"5.0 GiB\" free_swap=\"14.3 GiB\"\ntime=2025-07-14T19:16:11.315+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.vision.block_count default=0\n"}
{"level":50,"time":1752516971331,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\n"}
{"level":50,"time":1752516971332,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=\"\" memory.available=\"[10.7 GiB]\" memory.gpu_overhead=\"0 B\" memory.required.full=\"1.8 GiB\" memory.required.partial=\"1.8 GiB\" memory.required.kv=\"168.0 MiB\" memory.required.allocations=\"[1.8 GiB]\" memory.weights.total=\"934.7 MiB\" memory.weights.repeating=\"752.1 MiB\" memory.weights.nonrepeating=\"182.6 MiB\" memory.graph.full=\"299.8 MiB\" memory.graph.partial=\"482.3 MiB\"\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=INFO source=server.go:185 msg=\"enabling flash attention\"\n"}
{"level":50,"time":1752516971332,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=WARN source=server.go:193 msg=\"kv cache type not supported by model\" type=\"\"\n"}
{"level":50,"time":1752516971368,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))\nllama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\n"}
{"level":50,"time":1752516971368,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv   0:                       general.architecture str              = qwen2\nllama_model_loader: - kv   1:                               general.type str              = model\nllama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B\nllama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen\nllama_model_loader: - kv   4:                         general.size_label str              = 1.5B\nllama_model_loader: - kv   5:                          qwen2.block_count u32              = 28\nllama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072\nllama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536\nllama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960\nllama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12\nllama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2\nllama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000\nllama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001\nllama_model_loader: - kv  13:                          general.file_type u32              = 15\nllama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2\nllama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = qwen2\n"}
{"level":50,"time":1752516971382,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = [\"!\", \"\\\"\", \"#\", \"$\", \"%\", \"&\", \"'\", ...\n"}
{"level":50,"time":1752516971385,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\n"}
{"level":50,"time":1752516971399,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = [\"Ġ Ġ\", \"ĠĠ ĠĠ\", \"i n\", \"Ġ t\",...\nllama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646\nllama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643\nllama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643\nllama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true\nllama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false\nllama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...\nllama_model_loader: - kv  25:               general.quantization_version u32              = 2\nllama_model_loader: - type  f32:  141 tensors\nllama_model_loader: - type q4_K:  169 tensors\nllama_model_loader: - type q6_K:   29 tensors\n"}
{"level":50,"time":1752516971400,"pid":10104,"hostname":"Ryzen7","msg":"print_info: file format = GGUF V3 (latest)\nprint_info: file type   = Q4_K - Medium\nprint_info: file size   = 1.04 GiB (5.00 BPW) \n"}
{"level":50,"time":1752516971496,"pid":10104,"hostname":"Ryzen7","msg":"load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect\n"}
{"level":50,"time":1752516971496,"pid":10104,"hostname":"Ryzen7","msg":"load: special tokens cache size = 22\n"}
{"level":50,"time":1752516971533,"pid":10104,"hostname":"Ryzen7","msg":"load: token to piece cache size = 0.9310 MB\nprint_info: arch             = qwen2\nprint_info: vocab_only       = 1\nprint_info: model type       = ?B\nprint_info: model params     = 1.78 B\nprint_info: general.name     = DeepSeek R1 Distill Qwen 1.5B\nprint_info: vocab type       = BPE\nprint_info: n_vocab          = 151936\nprint_info: n_merges         = 151387\nprint_info: BOS token        = 151646 '<|begin▁of▁sentence|>'\nprint_info: EOS token        = 151643 '<|end▁of▁sentence|>'\nprint_info: EOT token        = 151643 '<|end▁of▁sentence|>'\nprint_info: PAD token        = 151643 '<|end▁of▁sentence|>'\nprint_info: LF token         = 198 'Ċ'\nprint_info: FIM PRE token    = 151659 '<|fim_prefix|>'\nprint_info: FIM SUF token    = 151661 '<|fim_suffix|>'\n"}
{"level":50,"time":1752516971533,"pid":10104,"hostname":"Ryzen7","msg":"print_info: FIM MID token    = 151660 '<|fim_middle|>'\nprint_info: FIM PAD token    = 151662 '<|fim_pad|>'\nprint_info: FIM REP token    = 151663 '<|repo_name|>'\nprint_info: FIM SEP token    = 151664 '<|file_sep|>'\nprint_info: EOG token        = 151643 '<|end▁of▁sentence|>'\nprint_info: EOG token        = 151662 '<|fim_pad|>'\nprint_info: EOG token        = 151663 '<|repo_name|>'\nprint_info: EOG token        = 151664 '<|file_sep|>'\nprint_info: max token length = 256\nllama_model_load: vocab only - skipping tensors\n"}
{"level":50,"time":1752516971543,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.543+01:00 level=INFO source=server.go:405 msg=\"starting llama server\" cmd=\"C:\\\\Users\\\\James\\\\AppData\\\\Roaming\\\\Msty\\\\msty-local.exe runner --model C:\\\\Users\\\\James\\\\AppData\\\\Roaming\\\\Msty\\\\models\\\\blobs\\\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 6144 --batch-size 512 --n-gpu-layers 29 --threads 8 --flash-attn --no-mmap --parallel 3 --port 59174\"\n"}
{"level":50,"time":1752516971548,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.547+01:00 level=INFO source=sched.go:451 msg=\"loaded runners\" count=1\ntime=2025-07-14T19:16:11.547+01:00 level=INFO source=server.go:580 msg=\"waiting for llama runner to start responding\"\n"}
{"level":50,"time":1752516971548,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.548+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server error\"\n"}
{"level":50,"time":1752516971582,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.581+01:00 level=INFO source=runner.go:853 msg=\"starting go runner\"\n"}
{"level":50,"time":1752516971699,"pid":10104,"hostname":"Ryzen7","msg":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no\r\nggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no\r\nggml_cuda_init: found 1 CUDA devices:\r\n  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes\r\nload_backend: loaded CUDA backend from C:\\Users\\James\\AppData\\Roaming\\Msty\\lib\\ollama\\cuda_v12\\ggml-cuda.dll\n"}
{"level":50,"time":1752516971707,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.706+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)\n"}
{"level":50,"time":1752516971708,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.708+01:00 level=INFO source=runner.go:913 msg=\"Server listening on 127.0.0.1:59174\"\n"}
{"level":50,"time":1752516971789,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060) - 11242 MiB free\n"}
{"level":50,"time":1752516971799,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.798+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server loading model\"\n"}
{"level":50,"time":1752516971826,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))\nllama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\nllama_model_loader: - kv   0:                       general.architecture str              = qwen2\nllama_model_loader: - kv   1:                               general.type str              = model\nllama_model_loader: - kv   2:                               general.name str              = DeepSeek R1 Distill Qwen 1.5B\nllama_model_loader: - kv   3:                           general.basename str              = DeepSeek-R1-Distill-Qwen\nllama_model_loader: - kv   4:                         general.size_label str              = 1.5B\nllama_model_loader: - kv   5:                          qwen2.block_count u32              = 28\nllama_model_loader: - kv   6:                       qwen2.context_length u32              = 131072\nllama_model_loader: - kv   7:                     qwen2.embedding_length u32              = 1536\nllama_model_loader: - kv   8:                  qwen2.feed_forward_length u32              = 8960\nllama_model_loader: - kv   9:                 qwen2.attention.head_count u32              = 12\nllama_model_loader: - kv  10:              qwen2.attention.head_count_kv u32              = 2\nllama_model_loader: - kv  11:                       qwen2.rope.freq_base f32              = 10000.000000\nllama_model_loader: - kv  12:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001\nllama_model_loader: - kv  13:                          general.file_type u32              = 15\nllama_model_loader: - kv  14:                       tokenizer.ggml.model str              = gpt2\nllama_model_loader: - kv  15:                         
tokenizer.ggml.pre str              = qwen2\n"}
{"level":50,"time":1752516971840,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,151936]  = [\"!\", \"\\\"\", \"#\", \"$\", \"%\", \"&\", \"'\", ...\n"}
{"level":50,"time":1752516971844,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\n"}
{"level":50,"time":1752516971859,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,151387]  = [\"Ġ Ġ\", \"ĠĠ ĠĠ\", \"i n\", \"Ġ t\",...\nllama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 151646\nllama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151643\nllama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643\nllama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true\nllama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false\nllama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...\nllama_model_loader: - kv  25:               general.quantization_version u32              = 2\nllama_model_loader: - type  f32:  141 tensors\nllama_model_loader: - type q4_K:  169 tensors\nllama_model_loader: - type q6_K:   29 tensors\nprint_info: file format = GGUF V3 (latest)\nprint_info: file type   = Q4_K - Medium\nprint_info: file size   = 1.04 GiB (5.00 BPW) \n"}
{"level":50,"time":1752516971940,"pid":10104,"hostname":"Ryzen7","msg":"load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect\n"}
{"level":50,"time":1752516971940,"pid":10104,"hostname":"Ryzen7","msg":"load: special tokens cache size = 22\n"}
{"level":50,"time":1752516971977,"pid":10104,"hostname":"Ryzen7","msg":"load: token to piece cache size = 0.9310 MB\nprint_info: arch             = qwen2\nprint_info: vocab_only       = 0\nprint_info: n_ctx_train      = 131072\nprint_info: n_embd           = 1536\nprint_info: n_layer          = 28\nprint_info: n_head           = 12\nprint_info: n_head_kv        = 2\nprint_info: n_rot            = 128\nprint_info: n_swa            = 0\nprint_info: n_swa_pattern    = 1\nprint_info: n_embd_head_k    = 128\nprint_info: n_embd_head_v    = 128\nprint_info: n_gqa            = 6\nprint_info: n_embd_k_gqa     = 256\nprint_info: n_embd_v_gqa     = 256\nprint_info: f_norm_eps       = 0.0e+00\nprint_info: f_norm_rms_eps   = 1.0e-06\nprint_info: f_clamp_kqv      = 0.0e+00\nprint_info: f_max_alibi_bias = 0.0e+00\nprint_info: f_logit_scale    = 0.0e+00\nprint_info: f_attn_scale     = 0.0e+00\nprint_info: n_ff             = 8960\nprint_info: n_expert         = 0\nprint_info: n_expert_used    = 0\nprint_info: causal attn      = 1\nprint_info: pooling type     = 0\nprint_info: rope type        = 2\nprint_info: rope scaling     = linear\nprint_info: freq_base_train  = 10000.0\nprint_info: freq_scale_train = 1\nprint_info: n_ctx_orig_yarn  = 131072\nprint_info: rope_finetuned   = unknown\nprint_info: ssm_d_conv       = 0\nprint_info: ssm_d_inner      = 0\nprint_info: ssm_d_state      = 0\nprint_info: ssm_dt_rank      = 0\nprint_info: ssm_dt_b_c_rms   = 0\nprint_info: model type       = 1.5B\nprint_info: model params     = 1.78 B\nprint_info: general.name     = DeepSeek R1 Distill Qwen 1.5B\nprint_info: vocab type       = BPE\nprint_info: n_vocab          = 151936\nprint_info: n_merges         = 151387\nprint_info: BOS token        = 151646 '<|begin▁of▁sentence|>'\nprint_info: EOS token        = 151643 '<|end▁of▁sentence|>'\nprint_info: EOT token        = 151643 '<|end▁of▁sentence|>'\nprint_info: PAD token        = 151643 '<|end▁of▁sentence|>'\nprint_info: LF token         = 198 
'Ċ'\nprint_info: FIM PRE token    = 151659 '<|fim_prefix|>'\nprint_info: FIM SUF token    = 151661 '<|fim_suffix|>'\nprint_info: FIM MID token    = 151660 '<|fim_middle|>'\nprint_info: FIM PAD token    = 151662 '<|fim_pad|>'\nprint_info: FIM REP token    = 151663 '<|repo_name|>'\nprint_info: FIM SEP token    = 151664 '<|file_sep|>'\nprint_info: EOG token        = 151643 '<|end▁of▁sentence|>'\nprint_info: EOG token        = 151662 '<|fim_pad|>'\nprint_info: EOG token        = 151663 '<|repo_name|>'\nprint_info: EOG token        = 151664 '<|file_sep|>'\nprint_info: max token length = 256\nload_tensors: loading model tensors, this can take a while... (mmap = false)\n"}
{"level":50,"time":1752516971982,"pid":10104,"hostname":"Ryzen7","msg":"load_tensors: offloading 28 repeating layers to GPU\nload_tensors: offloading output layer to GPU\nload_tensors: offloaded 29/29 layers to GPU\nload_tensors:          CPU model buffer size =   125.19 MiB\nload_tensors:        CUDA0 model buffer size =   934.70 MiB\n"}
{"level":50,"time":1752516972202,"pid":10104,"hostname":"Ryzen7","msg":"llama_context: constructing llama_context\nllama_context: n_seq_max     = 3\nllama_context: n_ctx         = 6144\nllama_context: n_ctx_per_seq = 2048\nllama_context: n_batch       = 1536\nllama_context: n_ubatch      = 512\nllama_context: causal_attn   = 1\nllama_context: flash_attn    = 1\nllama_context: freq_base     = 10000.0\nllama_context: freq_scale    = 1\nllama_context: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized\n"}
{"level":50,"time":1752516972203,"pid":10104,"hostname":"Ryzen7","msg":"llama_context:  CUDA_Host  output buffer size =     1.76 MiB\ninit: kv_size = 6144, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1\n"}
{"level":50,"time":1752516972209,"pid":10104,"hostname":"Ryzen7","msg":"init:      CUDA0 KV buffer size =   168.00 MiB\nllama_context: KV self size  =  168.00 MiB, K (f16):   84.00 MiB, V (f16):   84.00 MiB\n"}
{"level":50,"time":1752516972217,"pid":10104,"hostname":"Ryzen7","msg":"D:/a/llama.cpp/llama.cpp/ggml/src/ggml.c:1777: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed\r\n"}
{"level":50,"time":1752516972501,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.500+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server not responding\"\n"}
{"level":50,"time":1752516972565,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.565+01:00 level=ERROR source=server.go:449 msg=\"llama runner terminated\" error=\"exit status 0xc0000409\"\n"}
{"level":50,"time":1752516972751,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.750+01:00 level=ERROR source=sched.go:457 msg=\"error loading llama server\" error=\"llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed\"\n"}
{"level":30,"time":1752516972752,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:12 | 500 |    1.5308332s |       127.0.0.1 | POST     \"/api/chat\"\n"}
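Each line of the trace above is Msty's JSON wrapper (level, time, pid, hostname, msg) around one chunk of Ollama/llama.cpp output; the actual server messages live in the `msg` field. For reading long traces like this one, a small helper to strip the wrapper can make the log easier to scan. A sketch (`unwrap_log` is a hypothetical helper name; the field names are taken from the records above):

```python
import json

def unwrap_log(lines):
    """Extract the inner Ollama/llama.cpp message from JSON-wrapped log lines.

    Lines that are not valid JSON are kept as-is; trailing CR/LF that the
    wrapper preserves inside "msg" is stripped.
    """
    messages = []
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            messages.append(raw)  # already plain text
            continue
        messages.append(record.get("msg", "").rstrip("\r\n"))
    return messages
```

Running the failing record through it surfaces the assert line without the JSON noise.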

OS

Windows

GPU

AMD (note: the log above reports an NVIDIA GeForce RTX 3060 as the inference compute device)

CPU

AMD

Ollama version

0.6.6

default=128\ntime=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.301+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\n"} {"level":50,"time":1752516971301,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.301+01:00 level=INFO source=sched.go:722 msg=\"new model will fit in available VRAM in single GPU, loading\" model=C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc gpu=GPU-8e0431b3-3727-b8f6-f64e-9303bcf98020 parallel=3 available=11478908928 required=\"1.8 GiB\"\n"} {"level":50,"time":1752516971316,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.315+01:00 level=INFO source=server.go:105 msg=\"system memory\" total=\"15.9 GiB\" free=\"5.0 GiB\" free_swap=\"14.3 GiB\"\ntime=2025-07-14T19:16:11.315+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.vision.block_count default=0\n"} {"level":50,"time":1752516971331,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\n"} {"level":50,"time":1752516971332,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=29 layers.offload=29 layers.split=\"\" memory.available=\"[10.7 GiB]\" memory.gpu_overhead=\"0 B\" 
memory.required.full=\"1.8 GiB\" memory.required.partial=\"1.8 GiB\" memory.required.kv=\"168.0 MiB\" memory.required.allocations=\"[1.8 GiB]\" memory.weights.total=\"934.7 MiB\" memory.weights.repeating=\"752.1 MiB\" memory.weights.nonrepeating=\"182.6 MiB\" memory.graph.full=\"299.8 MiB\" memory.graph.partial=\"482.3 MiB\"\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.key_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=WARN source=ggml.go:152 msg=\"key not found\" key=qwen2.attention.value_length default=128\ntime=2025-07-14T19:16:11.331+01:00 level=INFO source=server.go:185 msg=\"enabling flash attention\"\n"} {"level":50,"time":1752516971332,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.331+01:00 level=WARN source=server.go:193 msg=\"kv cache type not supported by model\" type=\"\"\n"} {"level":50,"time":1752516971368,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))\nllama_model_loader: Dumping metadata keys/values. 
Note: KV overrides do not apply in this output.\n"} {"level":50,"time":1752516971368,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 0: general.architecture str = qwen2\nllama_model_loader: - kv 1: general.type str = model\nllama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B\nllama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen\nllama_model_loader: - kv 4: general.size_label str = 1.5B\nllama_model_loader: - kv 5: qwen2.block_count u32 = 28\nllama_model_loader: - kv 6: qwen2.context_length u32 = 131072\nllama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536\nllama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960\nllama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12\nllama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2\nllama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000\nllama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001\nllama_model_loader: - kv 13: general.file_type u32 = 15\nllama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2\nllama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2\n"} {"level":50,"time":1752516971382,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = [\"!\", \"\\\"\", \"#\", \"$\", \"%\", \"&\", \"'\", ...\n"} {"level":50,"time":1752516971385,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\n"} {"level":50,"time":1752516971399,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = [\"Ġ Ġ\", \"ĠĠ ĠĠ\", \"i n\", \"Ġ t\",...\nllama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646\nllama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643\nllama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643\nllama_model_loader: - kv 22: 
tokenizer.ggml.add_bos_token bool = true\nllama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false\nllama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...\nllama_model_loader: - kv 25: general.quantization_version u32 = 2\nllama_model_loader: - type f32: 141 tensors\nllama_model_loader: - type q4_K: 169 tensors\nllama_model_loader: - type q6_K: 29 tensors\n"} {"level":50,"time":1752516971400,"pid":10104,"hostname":"Ryzen7","msg":"print_info: file format = GGUF V3 (latest)\nprint_info: file type = Q4_K - Medium\nprint_info: file size = 1.04 GiB (5.00 BPW) \n"} {"level":50,"time":1752516971496,"pid":10104,"hostname":"Ryzen7","msg":"load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect\n"} {"level":50,"time":1752516971496,"pid":10104,"hostname":"Ryzen7","msg":"load: special tokens cache size = 22\n"} {"level":50,"time":1752516971533,"pid":10104,"hostname":"Ryzen7","msg":"load: token to piece cache size = 0.9310 MB\nprint_info: arch = qwen2\nprint_info: vocab_only = 1\nprint_info: model type = ?B\nprint_info: model params = 1.78 B\nprint_info: general.name = DeepSeek R1 Distill Qwen 1.5B\nprint_info: vocab type = BPE\nprint_info: n_vocab = 151936\nprint_info: n_merges = 151387\nprint_info: BOS token = 151646 '<|begin▁of▁sentence|>'\nprint_info: EOS token = 151643 '<|end▁of▁sentence|>'\nprint_info: EOT token = 151643 '<|end▁of▁sentence|>'\nprint_info: PAD token = 151643 '<|end▁of▁sentence|>'\nprint_info: LF token = 198 'Ċ'\nprint_info: FIM PRE token = 151659 '<|fim_prefix|>'\nprint_info: FIM SUF token = 151661 '<|fim_suffix|>'\n"} {"level":50,"time":1752516971533,"pid":10104,"hostname":"Ryzen7","msg":"print_info: FIM MID token = 151660 '<|fim_middle|>'\nprint_info: FIM PAD token = 151662 '<|fim_pad|>'\nprint_info: FIM REP token = 151663 '<|repo_name|>'\nprint_info: FIM SEP token = 151664 '<|file_sep|>'\nprint_info: EOG token = 151643 '<|end▁of▁sentence|>'\nprint_info: EOG token = 
151662 '<|fim_pad|>'\nprint_info: EOG token = 151663 '<|repo_name|>'\nprint_info: EOG token = 151664 '<|file_sep|>'\nprint_info: max token length = 256\nllama_model_load: vocab only - skipping tensors\n"} {"level":50,"time":1752516971543,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.543+01:00 level=INFO source=server.go:405 msg=\"starting llama server\" cmd=\"C:\\\\Users\\\\James\\\\AppData\\\\Roaming\\\\Msty\\\\msty-local.exe runner --model C:\\\\Users\\\\James\\\\AppData\\\\Roaming\\\\Msty\\\\models\\\\blobs\\\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc --ctx-size 6144 --batch-size 512 --n-gpu-layers 29 --threads 8 --flash-attn --no-mmap --parallel 3 --port 59174\"\n"} {"level":50,"time":1752516971548,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.547+01:00 level=INFO source=sched.go:451 msg=\"loaded runners\" count=1\ntime=2025-07-14T19:16:11.547+01:00 level=INFO source=server.go:580 msg=\"waiting for llama runner to start responding\"\n"} {"level":50,"time":1752516971548,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.548+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server error\"\n"} {"level":50,"time":1752516971582,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.581+01:00 level=INFO source=runner.go:853 msg=\"starting go runner\"\n"} {"level":50,"time":1752516971699,"pid":10104,"hostname":"Ryzen7","msg":"ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no\r\nggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no\r\nggml_cuda_init: found 1 CUDA devices:\r\n Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes\r\nload_backend: loaded CUDA backend from C:\\Users\\James\\AppData\\Roaming\\Msty\\lib\\ollama\\cuda_v12\\ggml-cuda.dll\n"} {"level":50,"time":1752516971707,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.706+01:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 
CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(gcc)\n"} {"level":50,"time":1752516971708,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.708+01:00 level=INFO source=runner.go:913 msg=\"Server listening on 127.0.0.1:59174\"\n"} {"level":50,"time":1752516971789,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060) - 11242 MiB free\n"} {"level":50,"time":1752516971799,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:11.798+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server loading model\"\n"} {"level":50,"time":1752516971826,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from C:\\Users\\James\\AppData\\Roaming\\Msty\\models\\blobs\\sha256-aabd4debf0c8f08881923f2c25fc0fdeed24435271c2b3e92c4af36704040dbc (version GGUF V3 (latest))\nllama_model_loader: Dumping metadata keys/values. 
Note: KV overrides do not apply in this output.\nllama_model_loader: - kv 0: general.architecture str = qwen2\nllama_model_loader: - kv 1: general.type str = model\nllama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 1.5B\nllama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen\nllama_model_loader: - kv 4: general.size_label str = 1.5B\nllama_model_loader: - kv 5: qwen2.block_count u32 = 28\nllama_model_loader: - kv 6: qwen2.context_length u32 = 131072\nllama_model_loader: - kv 7: qwen2.embedding_length u32 = 1536\nllama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 8960\nllama_model_loader: - kv 9: qwen2.attention.head_count u32 = 12\nllama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 2\nllama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000\nllama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001\nllama_model_loader: - kv 13: general.file_type u32 = 15\nllama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2\nllama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2\n"} {"level":50,"time":1752516971840,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,151936] = [\"!\", \"\\\"\", \"#\", \"$\", \"%\", \"&\", \"'\", ...\n"} {"level":50,"time":1752516971844,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\n"} {"level":50,"time":1752516971859,"pid":10104,"hostname":"Ryzen7","msg":"llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = [\"Ġ Ġ\", \"ĠĠ ĠĠ\", \"i n\", \"Ġ t\",...\nllama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646\nllama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643\nllama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643\nllama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true\nllama_model_loader: - kv 23: 
tokenizer.ggml.add_eos_token bool = false\nllama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...\nllama_model_loader: - kv 25: general.quantization_version u32 = 2\nllama_model_loader: - type f32: 141 tensors\nllama_model_loader: - type q4_K: 169 tensors\nllama_model_loader: - type q6_K: 29 tensors\nprint_info: file format = GGUF V3 (latest)\nprint_info: file type = Q4_K - Medium\nprint_info: file size = 1.04 GiB (5.00 BPW) \n"} {"level":50,"time":1752516971940,"pid":10104,"hostname":"Ryzen7","msg":"load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect\n"} {"level":50,"time":1752516971940,"pid":10104,"hostname":"Ryzen7","msg":"load: special tokens cache size = 22\n"} {"level":50,"time":1752516971977,"pid":10104,"hostname":"Ryzen7","msg":"load: token to piece cache size = 0.9310 MB\nprint_info: arch = qwen2\nprint_info: vocab_only = 0\nprint_info: n_ctx_train = 131072\nprint_info: n_embd = 1536\nprint_info: n_layer = 28\nprint_info: n_head = 12\nprint_info: n_head_kv = 2\nprint_info: n_rot = 128\nprint_info: n_swa = 0\nprint_info: n_swa_pattern = 1\nprint_info: n_embd_head_k = 128\nprint_info: n_embd_head_v = 128\nprint_info: n_gqa = 6\nprint_info: n_embd_k_gqa = 256\nprint_info: n_embd_v_gqa = 256\nprint_info: f_norm_eps = 0.0e+00\nprint_info: f_norm_rms_eps = 1.0e-06\nprint_info: f_clamp_kqv = 0.0e+00\nprint_info: f_max_alibi_bias = 0.0e+00\nprint_info: f_logit_scale = 0.0e+00\nprint_info: f_attn_scale = 0.0e+00\nprint_info: n_ff = 8960\nprint_info: n_expert = 0\nprint_info: n_expert_used = 0\nprint_info: causal attn = 1\nprint_info: pooling type = 0\nprint_info: rope type = 2\nprint_info: rope scaling = linear\nprint_info: freq_base_train = 10000.0\nprint_info: freq_scale_train = 1\nprint_info: n_ctx_orig_yarn = 131072\nprint_info: rope_finetuned = unknown\nprint_info: ssm_d_conv = 0\nprint_info: ssm_d_inner = 0\nprint_info: ssm_d_state = 0\nprint_info: ssm_dt_rank = 0\nprint_info: 
ssm_dt_b_c_rms = 0\nprint_info: model type = 1.5B\nprint_info: model params = 1.78 B\nprint_info: general.name = DeepSeek R1 Distill Qwen 1.5B\nprint_info: vocab type = BPE\nprint_info: n_vocab = 151936\nprint_info: n_merges = 151387\nprint_info: BOS token = 151646 '<|begin▁of▁sentence|>'\nprint_info: EOS token = 151643 '<|end▁of▁sentence|>'\nprint_info: EOT token = 151643 '<|end▁of▁sentence|>'\nprint_info: PAD token = 151643 '<|end▁of▁sentence|>'\nprint_info: LF token = 198 'Ċ'\nprint_info: FIM PRE token = 151659 '<|fim_prefix|>'\nprint_info: FIM SUF token = 151661 '<|fim_suffix|>'\nprint_info: FIM MID token = 151660 '<|fim_middle|>'\nprint_info: FIM PAD token = 151662 '<|fim_pad|>'\nprint_info: FIM REP token = 151663 '<|repo_name|>'\nprint_info: FIM SEP token = 151664 '<|file_sep|>'\nprint_info: EOG token = 151643 '<|end▁of▁sentence|>'\nprint_info: EOG token = 151662 '<|fim_pad|>'\nprint_info: EOG token = 151663 '<|repo_name|>'\nprint_info: EOG token = 151664 '<|file_sep|>'\nprint_info: max token length = 256\nload_tensors: loading model tensors, this can take a while... 
(mmap = false)\n"} {"level":50,"time":1752516971982,"pid":10104,"hostname":"Ryzen7","msg":"load_tensors: offloading 28 repeating layers to GPU\nload_tensors: offloading output layer to GPU\nload_tensors: offloaded 29/29 layers to GPU\nload_tensors: CPU model buffer size = 125.19 MiB\nload_tensors: CUDA0 model buffer size = 934.70 MiB\n"} {"level":50,"time":1752516972202,"pid":10104,"hostname":"Ryzen7","msg":"llama_context: constructing llama_context\nllama_context: n_seq_max = 3\nllama_context: n_ctx = 6144\nllama_context: n_ctx_per_seq = 2048\nllama_context: n_batch = 1536\nllama_context: n_ubatch = 512\nllama_context: causal_attn = 1\nllama_context: flash_attn = 1\nllama_context: freq_base = 10000.0\nllama_context: freq_scale = 1\nllama_context: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized\n"} {"level":50,"time":1752516972203,"pid":10104,"hostname":"Ryzen7","msg":"llama_context: CUDA_Host output buffer size = 1.76 MiB\ninit: kv_size = 6144, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 28, can_shift = 1\n"} {"level":50,"time":1752516972209,"pid":10104,"hostname":"Ryzen7","msg":"init: CUDA0 KV buffer size = 168.00 MiB\nllama_context: KV self size = 168.00 MiB, K (f16): 84.00 MiB, V (f16): 84.00 MiB\n"} {"level":50,"time":1752516972217,"pid":10104,"hostname":"Ryzen7","msg":"D:/a/llama.cpp/llama.cpp/ggml/src/ggml.c:1777: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed\r\n"} {"level":50,"time":1752516972501,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.500+01:00 level=INFO source=server.go:614 msg=\"waiting for server to become available\" status=\"llm server not responding\"\n"} {"level":50,"time":1752516972565,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.565+01:00 level=ERROR source=server.go:449 msg=\"llama runner terminated\" error=\"exit status 0xc0000409\"\n"} 
{"level":50,"time":1752516972751,"pid":10104,"hostname":"Ryzen7","msg":"time=2025-07-14T19:16:12.750+01:00 level=ERROR source=sched.go:457 msg=\"error loading llama server\" error=\"llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed\"\n"}
{"level":30,"time":1752516972752,"pid":10104,"hostname":"Ryzen7","msg":"[GIN] 2025/07/14 - 19:16:12 | 500 | 1.5308332s | 127.0.0.1 | POST \"/api/chat\"\n"}
```

### OS

Windows

### GPU

AMD

### CPU

AMD

### Ollama version

0.6.6
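Editor's note: the server config dump at the top of the log shows `OLLAMA_FLASH_ATTENTION:true`, and the assert fires immediately after the "enabling flash attention" line. As a diagnostic sketch only (whether flash attention is actually implicated in this particular assert is an assumption, not confirmed), the setting can be turned off via the environment before the server starts:

```shell
# Diagnostic sketch: disable flash attention before the server starts.
# OLLAMA_FLASH_ATTENTION is the variable shown in the config dump above;
# whether disabling it avoids this GGML_ASSERT is an assumption.
export OLLAMA_FLASH_ATTENTION=false
echo "OLLAMA_FLASH_ATTENTION=$OLLAMA_FLASH_ATTENTION"
# Then restart Msty (which launches the bundled server) so the value is picked up.
```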
GiteaMirror added the bug label 2026-05-04 18:36:05 -05:00
@rick-github commented on GitHub (Jul 14, 2025):

```
{"level":50,"time":1752516972217,"pid":10104,"hostname":"Ryzen7","msg":"D:/a/llama.cpp/llama.cpp/ggml/src/ggml.c:1777: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed\r\n"}
```

#9509

[Upgrade](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-upgrade-ollama) ollama.
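Editor's note: since Msty bundles its own copy of Ollama (0.6.6 per the log), the version the bundled server actually reports can be confirmed via the `/api/version` route visible in the GIN route list above, and version strings can be compared with `sort -V`. A minimal sketch; the `0.9.0` minimum here is a placeholder assumption, not a confirmed fix version:

```shell
# Query the bundled server's version (port 10000 per OLLAMA_HOST in the log):
#   curl -s http://127.0.0.1:10000/api/version   # returns e.g. {"version":"0.6.6"}
# Then compare against a target release using sort -V (GNU coreutils).
installed="0.6.6"   # version reported in the log above
wanted="0.9.0"      # hypothetical minimum; use the release that fixes #9509
lowest="$(printf '%s\n%s\n' "$wanted" "$installed" | sort -V | head -n1)"
if [ "$lowest" = "$wanted" ]; then
  echo "up to date"
else
  echo "upgrade needed"
fi
```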
@psi00 commented on GitHub (Jul 14, 2025):

Thanks. I'll have to let the developers of Msty know, since there's no way to update Ollama within their UI.
Reference: github-starred/ollama#69596