[GH-ISSUE #9291] gpu VRAM usage didn't recover within timeout Error: llama runner process has terminated: error:Could not initialize Tensile host: No devices found #6059

Closed
opened 2026-04-12 17:23:14 -05:00 by GiteaMirror · 10 comments
Owner

Originally created by @wangzd0209 on GitHub (Feb 22, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9291

What is the issue?

cmd: ./ollama run test-qwen-0.5

When I try to run Ollama, it cannot detect the device. What is the problem?

PS: when I run ./ollama -version, the reported version is 0.0.0.

Relevant log output

time=2025-02-22T20:51:27.157+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:27.157+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:27.157+08:00 level=DEBUG source=server.go:1047 msg="stopping llama server"
time=2025-02-22T20:51:27.157+08:00 level=DEBUG source=sched.go:381 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a
time=2025-02-22T20:51:27.408+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:27.409+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:27.659+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:27.659+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:27.909+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:27.909+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:28.159+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:28.159+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:28.409+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:28.409+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:28.659+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:28.659+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:28.909+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:28.909+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:29.158+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:29.158+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:29.408+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:29.409+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:29.659+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:29.659+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:29.909+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:29.909+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:30.159+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:30.159+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:30.408+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:30.408+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:30.658+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:30.658+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:30.908+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:30.908+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:31.158+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:31.158+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:31.408+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:31.408+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:31.658+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:31.658+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:31.908+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:31.909+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:32.158+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.001550841 model=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a
time=2025-02-22T20:51:32.158+08:00 level=DEBUG source=sched.go:385 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a
time=2025-02-22T20:51:32.158+08:00 level=DEBUG source=sched.go:308 msg="ignoring unload event with no pending requests"
time=2025-02-22T20:51:32.158+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:32.159+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:32.408+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.251668218 model=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a
time=2025-02-22T20:51:32.409+08:00 level=DEBUG source=gpu.go:359 msg="updating system memory data" before.total="503.4 GiB" before.free="475.4 GiB" before.free_swap="15.4 GiB" now.total="503.4 GiB" now.free="475.4 GiB" now.free_swap="15.4 GiB"
time=2025-02-22T20:51:32.409+08:00 level=DEBUG source=amd_linux.go:440 msg="updating rocm free memory" gpu=5 name=1d94:6210 before="64.0 GiB" now="64.0 GiB"
time=2025-02-22T20:51:32.657+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5010965 model=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a

OS

Linux

GPU

Other

CPU

Intel

Ollama version

No response

GiteaMirror added the bug label 2026-04-12 17:23:14 -05:00

@rick-github commented on GitHub (Feb 22, 2025):

time=2025-02-22T20:51:32.657+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5010965 model=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a

Just a warning, you can ignore it.

PS when i run ./ollama -version the version is 0.0.0

Because you compiled from source.

Error: llama runner process has terminated: error:Could not initialize Tensile host: No devices found 

The log doesn't contain enough information to debug this. Post full log.


@wangzd0209 commented on GitHub (Feb 23, 2025):

time=2025-02-22T20:51:32.657+08:00 level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.5010965 model=/root/.ollama/models/blobs/sha256-1b93e4b8552fa16f44b7ee8e6f94df0a0e9abe9727f45d48db9af6cab651bb5a

Just a warning, you can ignore it.

PS when i run ./ollama -version the version is 0.0.0

Because you compiled from source.

Error: llama runner process has terminated: error:Could not initialize Tensile host: No devices found 

The log doesn't contain enough information to debug this. Post full log.

When I use sudo ./ollama serve everything works, but when I use ./ollama serve it shows this problem. What is happening? Is there a command that can fix this?


@rick-github commented on GitHub (Feb 23, 2025):

Sounds like permissions issues: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#amd-gpu-discovery
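The fix described in that doc generally amounts to granting the user that runs `ollama serve` access to the GPU device nodes. A minimal sketch, assuming the usual ROCm device paths; the owning group names (commonly `render` and `video`) vary by distribution:

```shell
# Inspect which groups own the AMD GPU device nodes.
ls -l /dev/kfd /dev/dri/render*

# Add the current user to the owning groups (commonly render and video),
# then log out and back in so the new membership takes effect.
sudo usermod -aG render,video "$USER"

# After re-login, confirm the groups are listed.
id
```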


@wangzd0209 commented on GitHub (Feb 23, 2025):

Sounds like permissions issues: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#amd-gpu-discovery

Thanks for your help, the problem seems to be solved. I can use Ollama on my computer now.


@wangzd0209 commented on GitHub (Feb 24, 2025):

Sounds like permissions issues: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#amd-gpu-discovery

Today I ran into an odd problem. When I add the video group on my machine, Ollama works fine. But with Docker, the video gid is 39 outside the container, yet inside an Ubuntu container the video gid is 44. What is happening?


@rick-github commented on GitHub (Feb 24, 2025):

The /etc/group file is different inside the container compared to outside the container.
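One way to observe this, assuming a stock `ubuntu` image is available locally (the gids shown in the comments are illustrative and will differ per system):

```shell
# On the host: look up the numeric gid for the video group.
getent group video
# e.g. video:x:39:

# Inside a fresh Ubuntu container: same group name, different gid,
# because the container has its own /etc/group.
docker run --rm ubuntu getent group video
# e.g. video:x:44:
```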


@wangzd0209 commented on GitHub (Feb 24, 2025):

The /etc/group file is different inside the container compared to outside the container.

Did you fix that? I can use Ollama in recent versions, but I cannot use it in early versions like 0.3.5.


@wangzd0209 commented on GitHub (Feb 24, 2025):

The /etc/group file is different inside the container compared to outside the container.

So the container translates the group id by itself? Like 44 inside maps to 39 outside, right?


@rick-github commented on GitHub (Feb 24, 2025):

If you want to control the uid/gid of the program in the container, use the --user docker command line option or the user docker compose field.
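A hedged sketch of how that can look for GPU access: resolve the host's gids with `getent` and pass them in with `--group-add` (or run as a specific uid/gid with `--user`). The image tag and group names below are illustrative:

```shell
# Expose the AMD device nodes and grant the container process the
# host's video/render gids so it can open /dev/kfd and /dev/dri.
docker run --rm \
  --device /dev/kfd --device /dev/dri \
  --group-add "$(getent group video | cut -d: -f3)" \
  --group-add "$(getent group render | cut -d: -f3)" \
  ollama/ollama
```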

Reference: github-starred/ollama#6059