[GH-ISSUE #6206] [question] How to default to CPU? #65914

Closed
opened 2026-05-03 23:08:52 -05:00 by GiteaMirror · 11 comments

Originally created by @yurivict on GitHub (Aug 6, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6206

### What is the issue?

I created the FreeBSD port for ollama.
However, the GPU isn't available, and all `ollama run` commands fail with the ollama server printing this:

```
time=2024-08-06T09:57:27.238-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.06509013 model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
```

It appears to default to GPU.

`ollama help run` and `ollama help start` don't offer any option to default to CPU.

How can I make ollama default to CPU?

Thank you,
Yuri

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.4

GiteaMirror added the bug label 2026-05-03 23:08:52 -05:00

@rick-github commented on GitHub (Aug 6, 2024):

If there's no GPU, ollama should fall back to CPU by default. If you can post more context from the log, it will be easier to debug.
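
A minimal sketch of one way to capture a debug log for sharing (assumes a foreground start, as used later in this thread; the server writes its log to stderr, and the `server.log` filename is illustrative):

```sh
# Start the server in the foreground with debug logging enabled and
# save stderr (where the log goes) to a file.
OLLAMA_DEBUG=1 ollama start 2> server.log
```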

@yurivict commented on GitHub (Aug 6, 2024):

Here is the full server log with debug:

```
$ OLLAMA_DEBUG=1 ollama start
2024/08/06 10:11:45 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/yuri/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-06T10:11:45.361-07:00 level=INFO source=images.go:781 msg="total blobs: 10"
time=2024-08-06T10:11:45.362-07:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-08-06T10:11:45.363-07:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-08-06T10:11:45.367-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3678563659/runners
time=2024-08-06T10:11:45.368-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/bsd/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-08-06T10:11:45.368-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/bsd/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-08-06T10:11:45.368-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/bsd/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-08-06T10:11:45.368-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/ollama_llama_server.gz
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu/ollama_llama_server
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx/ollama_llama_server
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx2/ollama_llama_server
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/vulkan/ollama_llama_server
time=2024-08-06T10:11:45.400-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [vulkan cpu cpu_avx cpu_avx2]"
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-06T10:11:45.400-07:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-06T10:11:45.458-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=vulkan compute="" driver=0.0 name="" total="24.0 GiB" available="24.0 GiB"
[GIN] 2024/08/06 - 10:11:54 | 200 |      60.436µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/06 - 10:11:54 | 200 |   11.252121ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-06T10:11:54.784-07:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x12ee540 gpu_count=1
time=2024-08-06T10:11:54.792-07:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:11:54.792-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[24.0 GiB]"
time=2024-08-06T10:11:54.793-07:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading" model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 gpu=0 parallel=4 available=25769803776 required="6.0 GiB"
time=2024-08-06T10:11:54.793-07:00 level=DEBUG source=server.go:100 msg="system memory" total="24.0 GiB" free="0 B" free_swap="0 B"
time=2024-08-06T10:11:54.793-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[24.0 GiB]"
time=2024-08-06T10:11:54.794-07:00 level=INFO source=memory.go:309 msg="offload to vulkan" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[24.0 GiB]" memory.required.full="6.0 GiB" memory.required.partial="6.0 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.0 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.6 GiB" memory.weights.nonrepeating="105.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="585.0 MiB"
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx2/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/vulkan/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/cpu_avx2/ollama_llama_server
time=2024-08-06T10:11:54.794-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3678563659/runners/vulkan/ollama_llama_server
time=2024-08-06T10:11:54.795-07:00 level=INFO source=server.go:390 msg="starting llama server" cmd="/tmp/ollama3678563659/runners/vulkan/ollama_llama_server --model /home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --verbose --parallel 4 --port 42667"
time=2024-08-06T10:11:54.795-07:00 level=DEBUG source=server.go:407 msg=subprocess environment="[PATH=/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin LD_LIBRARY_PATH=/tmp/ollama3678563659/runners/vulkan:/tmp/ollama3678563659/runners]"
time=2024-08-06T10:11:54.799-07:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-06T10:11:54.799-07:00 level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
ld-elf.so.1: Shared object "libllama.so" not found, required by "ollama_llama_server"
time=2024-08-06T10:11:54.799-07:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
time=2024-08-06T10:11:55.054-07:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: exit status 1"
time=2024-08-06T10:11:55.054-07:00 level=DEBUG source=sched.go:454 msg="triggering expiration for failed load" model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:11:55.054-07:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:11:55.054-07:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
[GIN] 2024/08/06 - 10:11:55 | 500 |  337.338585ms |       127.0.0.1 | POST     "/api/chat"
time=2024-08-06T10:11:55.140-07:00 level=DEBUG source=server.go:1048 msg="stopping llama server"
time=2024-08-06T10:11:55.140-07:00 level=DEBUG source=sched.go:376 msg="runner released" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:12:00.141-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.086690798 model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:12:00.141-07:00 level=DEBUG source=sched.go:380 msg="sending an unloaded event" modelPath=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:12:00.141-07:00 level=DEBUG source=sched.go:303 msg="ignoring unload event with no pending requests"
time=2024-08-06T10:12:00.391-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.336683889 model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
time=2024-08-06T10:12:00.643-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.588469281 model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
^Ctime=2024-08-06T10:12:02.908-07:00 level=DEBUG source=sched.go:119 msg="shutting down scheduler pending loop"
time=2024-08-06T10:12:02.908-07:00 level=DEBUG source=assets.go:112 msg="cleaning up" dir=/tmp/ollama3678563659
time=2024-08-06T10:12:02.908-07:00 level=DEBUG source=sched.go:313 msg="shutting down scheduler completed loop"
```

@rick-github commented on GitHub (Aug 6, 2024):

ollama identified a vulkan engine and launched a server, but the server couldn't find `libllama.so`:

```
ld-elf.so.1: Shared object "libllama.so" not found, required by "ollama_llama_server"
```

I know nothing about the FreeBSD ports system, but the build process looks incomplete.
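
A quick way to check (a sketch; the runner path is the per-run temp directory from the log above and will differ between starts):

```sh
# ldd prints each shared object the binary needs and where the runtime
# linker resolves it; a "not found" entry points at the missing library.
ldd /tmp/ollama3678563659/runners/vulkan/ollama_llama_server
```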

<!-- gh-comment-id:2271770937 --> @rick-github commented on GitHub (Aug 6, 2024): ollama identified a vulkan engine and launched a server, but the server couldn't find libllama.so: ``` ld-elf.so.1: Shared object "libllama.so" not found, required by "ollama_llama_server" ``` I know nothing about the FreeBSD ports system but the build process looks incomplete.
Author
Owner

@yurivict commented on GitHub (Aug 6, 2024):

I see.

Thank you for the insight. I'll try to fix this.

<!-- gh-comment-id:2271786053 --> @yurivict commented on GitHub (Aug 6, 2024): I see. Thank you for the insight. I'll try to fix this.
Author
Owner

@yurivict commented on GitHub (Aug 6, 2024):

@rick-github

Do you know how to change the engine to CPU?

<!-- gh-comment-id:2271944792 --> @yurivict commented on GitHub (Aug 6, 2024): @rick-github Do you know how to change engine to CPU?
Author
Owner

@yurivict commented on GitHub (Aug 6, 2024):

`OLLAMA_LLM_LIBRARY="cpu"` doesn't work.

<!-- gh-comment-id:2271947809 --> @yurivict commented on GitHub (Aug 6, 2024): OLLAMA_LLM_LIBRARY="cpu" doesn't work.
Author
Owner

@rick-github commented on GitHub (Aug 6, 2024):

`OLLAMA_LLM_LIBRARY="cpu"` should work, but I haven't tried it myself. If you can provide logs it might be easier to debug. You could also try other library options like `cpu_avx` and `cpu_avx2`.

You can also force ollama to use the CPU on a per-model basis by telling the server to offload 0 layers to the GPU. You can do this in any of these ways (a quick verification is sketched after the list):

  1. From the command line, run `ollama run some-model-name` and then enter the command `/set parameter num_gpu 0`.
  2. Via the API: set the option `num_gpu` to 0: `curl localhost:11434/api/generate -d '{"model":"some-model-name","prompt":"why is the sky blue","options":{"num_gpu":0}}'`
  3. Create a copy of the model and set the default layer count:

```sh
$ ollama show --modelfile some-model-name > Modelfile
# edit the file and add "PARAMETER num_gpu 0"
$ ollama create some-model-name-cpu -f Modelfile
```
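
To verify the offload took effect, a sketch (assuming `ollama ps`, which wraps the `/api/ps` route visible in the server log above; the model name is illustrative):

```sh
# Load the CPU-pinned copy, then ask the server where it is running.
ollama run some-model-name-cpu "hello"
ollama ps   # the PROCESSOR column should report 100% CPU when num_gpu is 0
```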

@yurivict commented on GitHub (Aug 6, 2024):

> `OLLAMA_LLM_LIBRARY="cpu"` should work, but I haven't tried it myself. If you can provide logs it might be easier to debug. You could also try other library options like `cpu_avx` and `cpu_avx2`.

`OLLAMA_LLM_LIBRARY="cpu"` and all other values seem to be ignored.

The log still shows `library=vulkan`:

```
time=2024-08-06T12:17:43.744-07:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-08-06T12:17:43.749-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama578124798/runners
time=2024-08-06T12:17:43.750-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/bsd/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-08-06T12:17:43.750-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/bsd/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-08-06T12:17:43.750-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/bsd/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-08-06T12:17:43.750-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/ollama_llama_server.gz
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama578124798/runners/cpu/ollama_llama_server
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama578124798/runners/cpu_avx/ollama_llama_server
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama578124798/runners/cpu_avx2/ollama_llama_server
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama578124798/runners/vulkan/ollama_llama_server
time=2024-08-06T12:17:43.780-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 vulkan]"
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-06T12:17:43.780-07:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-06T12:17:43.838-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=vulkan compute="" driver=0.0 name="" total="24.0 GiB" available="24.0 GiB"
```

I think this is a bug.

@rick-github commented on GitHub (Aug 6, 2024):

Are you setting `OLLAMA_LLM_LIBRARY` in the environment of the server? What does `grep OLLAMA_LLM_LIBRARY server.log` show, where `server.log` is where you are saving the output from the server?
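
For reference, a sketch of setting the override where the server starts (the foreground invocation matches the one earlier in this thread; the grep confirms the value landed in the config map printed at startup):

```sh
# The variable must be in the *server's* environment, not the client's.
OLLAMA_LLM_LIBRARY=cpu OLLAMA_DEBUG=1 ollama start 2> server.log
grep OLLAMA_LLM_LIBRARY server.log   # expect OLLAMA_LLM_LIBRARY:cpu in the env map
```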

@rick-github commented on GitHub (Aug 6, 2024):

Just tried it with v0.3.3 and it used cpu instead of cuda:

```
ollama  | time=2024-08-06T19:39:01.008Z level=INFO source=server.go:172 msg="user override" OLLAMA_LLM_LIBRARY=cpu path=/tmp/ollama603258251/runners/cpu
ollama  | time=2024-08-06T19:39:01.008Z level=DEBUG source=gpu.go:636 msg="no filter required for library cpu"
ollama  | time=2024-08-06T19:39:01.008Z level=INFO source=server.go:384 msg="starting llama server" cmd="/tmp/ollama603258251/runners/cpu/ollama_llama_server --model /root/.ollama/models/blobs/sha256-8de95da68dc4
```

@yurivict commented on GitHub (Aug 6, 2024):

Now that I look again, it may be working fine. There are lines showing `library=vulkan`, but later the log says the override is applied.
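
One way to confirm this from the log (a sketch; the "user override" message appears in the previous comment, while the startup `library=vulkan` lines only reflect GPU discovery):

```sh
# The override is applied when a model is loaded, so look for the
# per-load message rather than the startup discovery lines.
grep "user override" server.log
# expected: ... msg="user override" OLLAMA_LLM_LIBRARY=cpu path=.../runners/cpu
```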
