[GH-ISSUE #12760] Help needed #70521

Closed
opened 2026-05-04 21:51:04 -05:00 by GiteaMirror · 6 comments

Originally created by @zack-1123 on GitHub (Oct 24, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12760

PS C:\Users\cwy> ollama run huihui_ai/qwen3-abliterated:0.6b-v2
⠧

After running this, it just hangs at this screen. I'm on CPU only, no GPU. Small models used to work fine for basic use, but recently it stopped working and I don't know why.
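A useful first step for a hang like this is to run the server in the foreground with debug logging and retry the model while watching the output. A minimal sketch, assuming the default Windows install (quit the tray app first so OLLAMA_DEBUG takes effect; the variable appears in the server config logged further down):

REM Run the server in the foreground with debug logging
REM (assumption: the tray app has been quit first)
set OLLAMA_DEBUG=1
ollama serve

REM In a second terminal, reproduce the hang and watch the server output:
ollama run huihui_ai/qwen3-abliterated:0.6b-v2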

GiteaMirror added the needs more info label 2026-05-04 21:51:05 -05:00

@w123wh commented on GitHub (Oct 24, 2025):

ollama run huihui_ai/qwen3-abliterated:0.6b


@pdevine commented on GitHub (Oct 24, 2025):

@zack-1123 Can you attach the server logs and the output from ollama ps?
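On a default Windows install, both items can typically be gathered as follows (a sketch; the log directory assumes the standard installer location, %LOCALAPPDATA%\Ollama):

REM Show which models are loaded, if any
ollama ps

REM Print the logs the Windows app writes (standard install location assumed)
type "%LOCALAPPDATA%\Ollama\server.log"
type "%LOCALAPPDATA%\Ollama\app.log"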


@zack-1123 commented on GitHub (Oct 24, 2025):

> ollama run huihui_ai/qwen3-abliterated:0.6b
C:\Windows\system32>ollama run huihui_ai/qwen3-abliterated:0.6b
pulling manifest
pulling 40b49c33d1e8: 100% ▕██████████████████████████████████████████████████████████▏ 396 MB
pulling ae370d884f10: 100% ▕██████████████████████████████████████████████████████████▏ 1.7 KB
pulling d18a5cc71b84: 100% ▕██████████████████████████████████████████████████████████▏ 11 KB
pulling cff3f395ef37: 100% ▕██████████████████████████████████████████████████████████▏ 120 B
pulling 333c6384823e: 100% ▕██████████████████████████████████████████████████████████▏ 490 B
verifying sha256 digest
writing manifest
success


@zack-1123 commented on GitHub (Oct 24, 2025):

> Can you attach the server logs and the output from ollama ps?

No models are currently running;
app.log
time=2025-10-24T15:46:57.519+08:00 level=INFO source=app_windows.go:272 msg="starting Ollama" app=C:\Users\cwy\AppData\Local\Programs\Ollama version=0.12.6 OS=Windows/10.0.19044
time=2025-10-24T15:46:57.537+08:00 level=INFO source=app.go:232 msg="initialized tools registry" tool_count=0
time=2025-10-24T15:46:57.550+08:00 level=INFO source=app.go:247 msg="starting ollama server"
time=2025-10-24T15:46:58.493+08:00 level=INFO source=app.go:279 msg="starting ui server" port=53130
time=2025-10-24T15:47:01.502+08:00 level=INFO source=updater.go:252 msg="beginning update checker" interval=1h0m0s

server.log
time=2025-10-24T15:46:58.707+08:00 level=INFO source=routes.go:1511 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\cwy\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-10-24T15:46:58.746+08:00 level=INFO source=images.go:522 msg="total blobs: 10"
time=2025-10-24T15:46:58.748+08:00 level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-24T15:46:58.750+08:00 level=INFO source=routes.go:1564 msg="Listening on 127.0.0.1:11434 (version 0.12.6)"
time=2025-10-24T15:46:58.751+08:00 level=INFO source=runner.go:80 msg="discovering available GPUs..."
time=2025-10-24T15:46:59.601+08:00 level=INFO source=types.go:129 msg="inference compute" id=cpu library=cpu compute="" name=cpu description=cpu libdirs=ollama driver="" pci_id="" type="" total="23.9 GiB" available="18.5 GiB"
time=2025-10-24T15:46:59.602+08:00 level=INFO source=routes.go:1605 msg="entering low vram mode" "total vram"="0 B" threshold="20.0 GiB"
[GIN] 2025/10/24 - 15:46:59 | 200 | 172.7µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:46:59 | 200 | 4.5574ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/24 - 15:47:30 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:47:30 | 404 | 1.1029ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:47:32 | 200 | 2.0551686s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/10/24 - 15:47:32 | 200 | 86.847ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:51:14 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:51:14 | 200 | 94.8234ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:53:46 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:53:46 | 200 | 545.6µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/24 - 15:54:33 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:54:33 | 404 | 1.6116ms | 127.0.0.1 | POST "/api/show"
time=2025-10-24T15:54:36.427+08:00 level=INFO source=download.go:177 msg="downloading ae370d884f10 in 1 1.7 KB part(s)"
time=2025-10-24T15:54:38.223+08:00 level=INFO source=download.go:177 msg="downloading d18a5cc71b84 in 1 11 KB part(s)"
time=2025-10-24T15:54:40.504+08:00 level=INFO source=download.go:177 msg="downloading cff3f395ef37 in 1 120 B part(s)"
time=2025-10-24T15:54:42.283+08:00 level=INFO source=download.go:177 msg="downloading 333c6384823e in 1 490 B part(s)"
[GIN] 2025/10/24 - 15:54:43 | 200 | 9.9985354s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/10/24 - 15:54:44 | 200 | 99.6962ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:54:58 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:54:58 | 200 | 81.0244ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:55:18 | 200 | 546.9µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:55:18 | 500 | 0s | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/24 - 15:55:24 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/10/24 - 15:55:24 | 404 | 511.5µs | 127.0.0.1 | POST "/api/show"
time=2025-10-24T15:55:27.926+08:00 level=INFO source=download.go:177 msg="downloading fa8235e5b48f in 1 1.1 KB part(s)"
time=2025-10-24T15:55:30.708+08:00 level=INFO source=download.go:177 msg="downloading 542b217f179c in 1 148 B part(s)"
time=2025-10-24T15:55:32.516+08:00 level=INFO source=download.go:177 msg="downloading 8dde1baf1db0 in 1 78 B part(s)"
time=2025-10-24T15:55:34.298+08:00 level=INFO source=download.go:177 msg="downloading 23291dc44752 in 1 483 B part(s)"
[GIN] 2025/10/24 - 15:55:35 | 200 | 10.8746228s | 127.0.0.1 | POST "/api/pull"
[GIN] 2025/10/24 - 15:55:35 | 200 | 27.1874ms | 127.0.0.1 | POST "/api/show"
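For reference, the log above shows a CPU-only setup listening on 127.0.0.1:11434, with successful pulls and /api/show calls but no runner ever starting. A quick check that the HTTP server itself is responsive (a sketch using the /api/version endpoint; curl.exe ships with recent Windows 10/11):

REM Liveness check against the address shown in the server config above
curl.exe http://127.0.0.1:11434/api/version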


@ClearTrifles commented on GitHub (Oct 24, 2025):

I'm running into this problem too. Reinstalling the app and the models doesn't help, and switching to other models doesn't either; after installation they simply won't run. The same thing happened on the previous version and a reinstall fixed it then, but on this version (0.12.6) nothing I try works. This seems to happen quite frequently.


@rick-github commented on GitHub (Oct 24, 2025):

Could be #12699. Roll back to 0.12.3 (https://github.com/ollama/ollama/releases/tag/v0.12.3) or wait for 0.12.7.
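On Windows, rolling back means installing the older release over the current one. A sketch, assuming the standard OllamaSetup.exe asset on the v0.12.3 release page linked above:

REM Download and run the 0.12.3 installer (asset name assumed from the release page)
curl.exe -L -o OllamaSetup.exe https://github.com/ollama/ollama/releases/download/v0.12.3/OllamaSetup.exe
OllamaSetup.exe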
