[GH-ISSUE #9509] tensor->op == GGML_OP_UNARY error when running a model #68253

Closed
opened 2026-05-04 13:01:03 -05:00 by GiteaMirror · 24 comments
Owner

Originally created by @Dejon141 on GitHub (Mar 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9509

Originally assigned to: @jmorganca on GitHub.

What is the issue?

I'm getting this error after updating to Ollama v0.5.13:
"Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed"

Relevant log output


OS

Windows

GPU

GeForce GTX 1060 6GB

CPU

Intel i5 10400F

Ollama version

v0.5.13

GiteaMirror added the bug label 2026-05-04 13:01:03 -05:00

@jmorganca commented on GitHub (Mar 5, 2025):

@Dejon141 sorry this happened. May I ask which version you were upgrading from (if you happen to know)? Would it be possible to post the full logs (on Windows: click on Ollama -> View logs, and then share server.log).


@DangerousBerries commented on GitHub (Mar 5, 2025):

> @Dejon141 sorry this happened. May I ask which version you were upgrading from (if you happen to know)? Would it be possible to post the full logs (on Windows: click on Ollama -> View logs, and then share server.log).

Same here with the newest version.

[Log.txt](https://github.com/user-attachments/files/19084152/Log.txt)


@jmorganca commented on GitHub (Mar 5, 2025):

@DangerousBerries does this happen on all models, or only certain ones like `snowflake-arctic-embed2`?


@jmorganca commented on GitHub (Mar 5, 2025):

@DangerousBerries this seems to be a different issue; I've opened https://github.com/ollama/ollama/issues/9511 for that.


@jmorganca commented on GitHub (Mar 5, 2025):

@Dejon141 which model are you running? I can try to reproduce on a GeForce 1060.


@Dejon141 commented on GitHub (Mar 6, 2025):

Happens to me on all models; it works on WSL but not on Windows. Here is the server log:

[server.log](https://github.com/user-attachments/files/19101418/server.log)

The error occurs with all the models I use: deepseek-r1, llama3.2, etc.


@MyColorfulDays commented on GitHub (Mar 6, 2025):

FYI, I solved it by removing llama.cpp from my Path; see #9149:

  1. Remove llama.cpp from Path or install the latest version from https://github.com/ggml-org/llama.cpp/releases.
  2. Restart ollama.

@dentroai commented on GitHub (Mar 6, 2025):

Got the same issue on Linux with snowflake-arctic-embed2 large and Ollama 0.5.13.

Didn't want to figure out how to re-install llama.cpp as @MyColorfulDays suggested, so I just downgraded Ollama to 0.5.12 for now.

For anyone interested in how to downgrade:

```shell
export OLLAMA_VERSION=0.5.12
curl -fsSL https://ollama.com/install.sh | sh
```

@Dejon141 commented on GitHub (Mar 6, 2025):

Could having the llama.cpp CLI tool in my Path have caused this? I installed llama.cpp a few weeks ago, so I probably already have the newest version.

I figured it out: after removing llama.cpp from my Path, Ollama worked again and defaulted to the version it ships with. It seems the llama.cpp CLI was interfering with Ollama after the update.

Is there any way to keep both installed without interference?

Also, could an expert mode be added to Ollama, something like `ollama advanced --LLamaCppcommand`?


@Minhao-Zhang commented on GitHub (Mar 6, 2025):

Having a similar issue with snowflake-arctic-embed2 and phi4-mini on Ollama 0.5.13 on Windows, CUDA 12.8, RTX 4070. I am able to run other models like qwen2.5:7b.

```
{"error":"llama runner process has terminated: GGML_ASSERT(ctx-\u003ekv[key_id].get_type() != GGUF_TYPE_STRING) failed"}
```

I do not have llama.cpp on my Windows machine, so I don't know whether re-installing it would work. Is there a way to temporarily downgrade Ollama on Windows?


@MyColorfulDays commented on GitHub (Mar 7, 2025):

@Minhao-Zhang You can downgrade by downloading a previous release from https://github.com/ollama/ollama/releases

(screenshot of the releases page attached)

@Minhao-Zhang commented on GitHub (Mar 7, 2025):

> @Minhao-Zhang You can downgrade by downloading a previous release from https://github.com/ollama/ollama/releases
>
> (screenshot attached)

Thanks!


@waynehsmith commented on GitHub (Apr 29, 2025):

I had the same issue. Looks like the DLLs in C:\Program Files\Docker\Docker\resources\bin, which was on the Path, were the culprits.

I added the .old extension to each of the following files and Ollama started working again:

  • ggml-base.dll
  • ggml-cpu.dll
  • ggml.dll
  • llama.dll

Currently running Docker Desktop 4.41.0

Hope this helps someone else!
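The renaming workaround above can be sketched as a small script. This is a hypothetical sketch, not part of Ollama or Docker: the directory and DLL names are the ones reported in this thread, and `sideline_dlls` is a name invented here. It defaults to a dry run so nothing is moved until you opt in.

```python
import os

# Directory and DLLs reported in this thread (assumptions; adjust as needed).
DOCKER_BIN = r"C:\Program Files\Docker\Docker\resources\bin"
DLLS = ["ggml-base.dll", "ggml-cpu.dll", "ggml.dll", "llama.dll"]

def sideline_dlls(directory, names, dry_run=True):
    """Append '.old' to each DLL in `names` that exists in `directory`,
    so Windows stops finding the conflicting copies there.
    Returns the list of DLLs that were (or would be) renamed."""
    renamed = []
    for name in names:
        src = os.path.join(directory, name)
        if os.path.isfile(src):
            if not dry_run:
                os.rename(src, src + ".old")
            renamed.append(name)
    return renamed

if __name__ == "__main__":
    # Dry run: only report which DLLs would be sidelined.
    print(sideline_dlls(DOCKER_BIN, DLLS))
```

Run it with `dry_run=False` only after confirming the reported list looks right; renaming files under Program Files usually requires an elevated prompt.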


@toddp0 commented on GitHub (Apr 29, 2025):

It is specifically `ggml-base.dll` in the folder C:\Program Files\Docker\Docker\resources\bin


@mattjrutter commented on GitHub (Apr 29, 2025):

> Renaming the DLLs and then turning off "Enable Docker Model Runner" in Docker Desktop settings > Features in development seems to not only fix the issue but doesn't break anything else.

Yep. This worked for me. A band-aid of a solution, but it certainly works. Hopefully it helps those working on Ollama to determine next steps. Ollama + Docker is a fairly popular duo to install together. It seems that llama.cpp has been an issue with Ollama since March at least, and the current advice was to not use llama.cpp. But with Docker now bringing it in, that just pushed up the priority.


@soraliu commented on GitHub (Apr 29, 2025):

It seems the conflict of ggml-base.dll between Docker and Ollama causes this error. I tried removing Docker from the PATH environment variable, but it doesn't work. Could anyone explain how Ollama locates the ggml-base.dll file? I'm confused as to why Ollama always uses Docker's ggml-base.dll instead of the one located under ...\AppData\Local\Programs\Ollama\lib\ollama.

(two screenshots of the directories attached)
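One way to investigate the question above is to list every `PATH` directory that carries its own copy of the DLL. This is a minimal sketch, assuming only that Windows falls back to searching `PATH` directories in order when a DLL is not resolved from the application's own directories; `find_on_path` is a name invented here:

```python
import os

def find_on_path(filename, path=None):
    """Return the directories on PATH (in search order) that contain
    `filename`. The first hit is the copy Windows would pick up if the
    DLL is resolved via PATH."""
    if path is None:
        path = os.environ.get("PATH", "")
    return [d for d in path.split(os.pathsep)
            if d and os.path.isfile(os.path.join(d, filename))]

if __name__ == "__main__":
    for d in find_on_path("ggml-base.dll"):
        print(d)
```

If Docker's resources\bin prints and Ollama's lib\ollama does not, that would be consistent with Ollama's runner resolving `ggml-base.dll` through `PATH` rather than from its install directory; removing or renaming the Docker copy (or the 4.41.1 Docker update) then removes the shadowing.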

@rick-github commented on GitHub (Apr 30, 2025):

As per https://github.com/ollama/ollama/issues/10474#issuecomment-2840965229, Docker is releasing an update that will resolve this issue. On the ollama side, #10485 should also help.


@rick-github commented on GitHub (Apr 30, 2025):

Docker Desktop [4.41.1](https://docs.docker.com/desktop/release-notes/#4411), released 2025-04-30:

Bug fixes and enhancements

  • For all platforms
    Fixed an issue where Docker Desktop failed to start when a proxy configuration was specified in the admin-settings.json file.
  • For Windows
    Fixed possible conflict with 3rd party tools (for example, Ollama) by avoiding placing llama.cpp DLLs in a directory included in the system PATH.

@GurujantSingh commented on GitHub (May 1, 2025):

Is this an issue on Ubuntu as well?


@TonyReggae commented on GitHub (May 2, 2025):

Surprised? The Ollama bug was caused by Docker; installing this update fixes it.

(screenshot of the Docker Desktop update attached)


@kingfreelydance commented on GitHub (May 2, 2025):

Really unexpected. I was wondering why my Ollama suddenly couldn't run any models.


@cyrain-cheng commented on GitHub (May 3, 2025):

> The Ollama bug was caused by Docker; installing this update fixes it.
>
> (screenshot attached)

Really didn't expect that; upgrading fixed it.


@tyaginightrider commented on GitHub (May 4, 2025):

My Ollama app is up to date and I keep getting this error again and again after the update. Can you guide me, step by step, on how to solve this problem?

Testing model llama3.2:1b-text-q4_K_M:

```
[2025-05-04T06:43:55.704Z ERROR dkn_workflows::providers::ollama] Failed to generate embedding for model llama3.2:1b-text-q4_K_M: An error occurred with ollama-rs: {"error":"llama runner process has terminated: GGML_ASSERT(tensor-\u003eop == GGML_OP_UNARY) failed"}
```

My system configuration:

  • Laptop: HP Victus
  • Windows 11
  • 13th Gen Intel(R) Core(TM) i5-13500H, 2.60 GHz
  • 16 GB RAM
  • 16 cores
  • RTX 4090
  • 500 GB SSD

(screenshot attached)


@rick-github commented on GitHub (May 12, 2025):

@tyaginightrider Output of `ollama -v`?


Reference: github-starred/ollama#68253