[GH-ISSUE #9725] Unable to use Ollama after upgrading to 0.6.0 #52868

Closed
opened 2026-04-29 01:14:11 -05:00 by GiteaMirror · 3 comments

Originally created by @shaken154 on GitHub (Mar 13, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9725

What is the issue?

On Windows 10 Pro for Workstations, with 192 GB of DDR5 memory, an RTX 2080 Ti 22 GB GPU, and CUDA 12.4, Ollama stopped working after upgrading to 0.6.0.

Relevant log output

PowerShell 7.5.0
PS C:\Users\Administrator> ollama ls
NAME                                                          ID              SIZE      MODIFIED
gemma3:27b-it-fp16                                            cf10306fe9c6    54 GB     10 hours ago
gemma3:27b                                                    30ddded7fba6    17 GB     10 hours ago
minicpm-v:8b-2.6-fp16                                         f3f122c78635    16 GB     3 weeks ago
huihui_ai/qwen2.5-abliterate:32b-instruct-q8_0                861f54f56f44    34 GB     3 weeks ago
huihui_ai/deepseek-r1-abliterated:70b-llama-distill-q4_K_M    50f8d0fe980f    42 GB     3 weeks ago
huihui_ai/deepseek-r1-abliterated:14b-qwen-distill-q4_K_M     6b2209ffd758    9.0 GB    3 weeks ago
huihui_ai/qwen2.5-coder-abliterate:32b-instruct-q8_0          e6c35f06e4c4    34 GB     3 weeks ago
huihui_ai/deepseek-r1-abliterated:70b-llama-distill-fp16      1e86280674d4    141 GB    3 weeks ago
huihui_ai/deepseek-r1-abliterated:32b-qwen-distill-fp16       9f2aa8dff7c5    65 GB     3 weeks ago
bartowski/DeepSeek-V2.5-Q5_K_M-cu124-GPU22G:latest            fe098c5671ef    167 GB    3 weeks ago
DeepSeek-R1-671b-1.73bit-GPU22G:latest                        0a068073192a    168 GB    4 weeks ago
qwen2:72b-text-q4_K_M                                         395e2f1e4576    47 GB     4 weeks ago
deepseek-coder-v2:236b                                        c78d80129305    132 GB    4 weeks ago
bge-m3:latest                                                 790764642607    1.2 GB    4 weeks ago
nemotron:70b                                                  2262f047a28a    42 GB     4 weeks ago
medllama2:7b-q8_0                                             1bc066950c7a    7.2 GB    4 weeks ago
nomic-embed-text:latest                                       0a109f422b47    274 MB    5 weeks ago
qwen2.5:7b                                                    845dbda0ea48    4.7 GB    5 weeks ago
PS C:\Users\Administrator> ollama run gemma3:27b
>>> 你是?
Error: POST predict: Post "http://127.0.0.1:9390/completion": read tcp 127.0.0.1:9392->127.0.0.1:9390: wsarecv: An existing connection was forcibly closed by the remote host.
PS C:\Users\Administrator> ollama ps
NAME          ID              SIZE     PROCESSOR    UNTIL
gemma3:27b    30ddded7fba6    18 GB    100% GPU     Forever
PS C:\Users\Administrator> ollama stop gemma3:27b
PS C:\Users\Administrator> ollama ps
NAME    ID    SIZE    PROCESSOR    UNTIL
PS C:\Users\Administrator> ollama run qwen2.5:7b
Error: llama runner process has terminated: GGML_ASSERT(tensor->op == GGML_OP_UNARY) failed
PS C:\Users\Administrator>
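For what it's worth, both errors above (the `wsarecv` connection reset and the `GGML_ASSERT` failure) point to the llama runner subprocess crashing rather than a client-side problem. A minimal diagnostic sketch in PowerShell, assuming a default Windows install and the log location described in Ollama's troubleshooting docs:

```shell
# Sketch: collect crash details for a terminated llama runner on Windows.
# The server.log path follows Ollama's documented default; adjust if your
# install location differs.

# 1. Inspect the tail of the server log for the runner's exit message or
#    assertion backtrace:
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 100

# 2. Re-run the server with debug logging enabled to capture more detail:
$env:OLLAMA_DEBUG = "1"
ollama serve
```

Attaching the resulting `server.log` excerpt to the issue would make the crash easier to triage.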

OS

Windows

GPU

Intel, Nvidia

CPU

Intel

Ollama version

0.6.0

GiteaMirror added the bug label 2026-04-29 01:14:11 -05:00

@shaken154 commented on GitHub (Mar 13, 2025):

I rolled back to 0.5.13, but qwen2.5:7b still doesn't work. I've decided to drop Ollama and switch to LM Studio, which is very stable.


@rick-github commented on GitHub (Mar 13, 2025):

#9509


@pdevine commented on GitHub (Mar 14, 2025):

Going to close this as a dupe.

Reference: github-starred/ollama#52868