[GH-ISSUE #6288] OLLAMA_LLM_LIBRARY=cpu is ignored: ErrorOutOfDeviceMemory when zero layers are offloaded to GPU through Vulkan #50451

Closed
opened 2026-04-28 15:55:47 -05:00 by GiteaMirror · 4 comments

Originally created by @yurivict on GitHub (Aug 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6288

What is the issue?

The ollama server is started in CPU-only mode: `OLLAMA_LLM_LIBRARY=cpu ollama start`

When the gemma model is then run, the server still attempts to use Vulkan and fails:

2024/08/09 09:58:04 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY:cpu OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/yuri/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-09T09:58:04.045-07:00 level=INFO source=images.go:781 msg="total blobs: 47"
time=2024-08-09T09:58:04.047-07:00 level=INFO source=images.go:788 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-08-09T09:58:04.049-07:00 level=INFO source=routes.go:1155 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-08-09T09:58:04.053-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1197517005/runners
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu file=build/bsd/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx file=build/bsd/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=cpu_avx2 file=build/bsd/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/ollama_llama_server.gz
time=2024-08-09T09:58:04.053-07:00 level=DEBUG source=payload.go:182 msg=extracting variant=vulkan file=build/bsd/x86_64/vulkan/bin/vulkan-shaders-gen.gz
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:04.166-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 vulkan]"
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-08-09T09:58:04.166-07:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-08-09T09:58:04.248-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=vulkan compute="" driver=0.0 name="" total="6.2 GiB" available="6.2 GiB"
[GIN] 2024/08/09 - 09:58:06 | 200 |      43.926µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/09 - 09:58:06 | 200 |   64.335737ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-09T09:58:06.863-07:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x146ea00 gpu_count=1
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=sched.go:219 msg="loading first model" model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.921-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.922-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.922-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.923-07:00 level=DEBUG source=server.go:100 msg="system memory" total="24.0 GiB" free="0 B" free_swap="0 B"
time=2024-08-09T09:58:06.923-07:00 level=DEBUG source=memory.go:101 msg=evaluating library=vulkan gpu_count=1 available="[6.2 GiB]"
time=2024-08-09T09:58:06.924-07:00 level=INFO source=memory.go:309 msg="offload to vulkan" layers.requested=-1 layers.model=29 layers.offload=25 layers.split="" memory.available="[6.2 GiB]" memory.required.full="7.3 GiB" memory.required.partial="6.2 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="615.2 MiB" memory.graph.full="506.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/cpu_avx2/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama1197517005/runners/vulkan/ollama_llama_server
time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:172 msg="user override" OLLAMA_LLM_LIBRARY=cpu path=/tmp/ollama1197517005/runners/cpu
time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:390 msg="starting llama server" cmd="/tmp/ollama1197517005/runners/cpu/ollama_llama_server --model /home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77 --ctx-size 2048 --batch-size 512 --embedding --log-disable --verbose --parallel 1 --port 10149"
time=2024-08-09T09:58:06.924-07:00 level=DEBUG source=server.go:407 msg=subprocess environment="[PATH=/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin LD_LIBRARY_PATH=/tmp/ollama1197517005/runners/cpu:/tmp/ollama1197517005/runners]"
time=2024-08-09T09:58:06.928-07:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-09T09:58:06.928-07:00 level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
time=2024-08-09T09:58:06.929-07:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=673836 commit="b5bb445feab3" tid="0x2678fb612000" timestamp=1723222686
INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="0x2678fb612000" timestamp=1723222686 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="10149" tid="0x2678fb612000" timestamp=1723222686
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from /home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma
llama_model_loader: - kv   1:                               general.name str              = gemma-1.1-7b-it
llama_model_loader: - kv   2:                       gemma.context_length u32              = 8192
llama_model_loader: - kv   3:                     gemma.embedding_length u32              = 3072
llama_model_loader: - kv   4:                          gemma.block_count u32              = 28
llama_model_loader: - kv   5:                  gemma.feed_forward_length u32              = 24576
llama_model_loader: - kv   6:                 gemma.attention.head_count u32              = 16
llama_model_loader: - kv   7:              gemma.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     gemma.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                 gemma.attention.key_length u32              = 256
llama_model_loader: - kv  10:               gemma.attention.value_length u32              = 256
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
time=2024-08-09T09:58:07.220-07:00 level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
lla
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 24576
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type   
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
llm_load_print_meta: max token length = 93
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: NVIDIA GeForce RTX 2060 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32
llm_load_tensors: ggml ctx size =    0.12 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/29 layers to GPU
llm_load_tensors:        CPU buffer size =  4773.90 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
time=2024-08-09T09:58:08.114-07:00 level=DEBUG source=server.go:635 msg="model load progress 1.00"
time=2024-08-09T09:58:08.366-07:00 level=DEBUG source=server.go:638 msg="model load completed, waiting for server to become available" status="llm server loading model"
ggml_vulkan: Failed to allocate pinned memory.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
llama_kv_cache_init:        CPU KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_new_context_with_model: Vulkan_Host  output buffer size =     0.99 MiB
ggml_vulkan: Device memory allocation of size 1175699456 failed.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
ggml_gallocr_reserve_n: failed to allocate NVIDIA GeForce RTX 2060 buffer of size 1175699456
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77'
ERROR [load_model] unable to load model | model="/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77" tid="0x2678fb612000" timestamp=1723222689
time=2024-08-09T09:58:09.727-07:00 level=DEBUG source=server.go:430 msg="llama runner terminated" error="signal: abort trap"
time=2024-08-09T09:58:09.787-07:00 level=ERROR source=sched.go:451 msg="error loading llama server" error="llama runner process has terminated: error:failed to create context with model '/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77'"
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:454 msg="triggering expiration for failed load" model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:355 msg="runner expired event received" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:09.787-07:00 level=DEBUG source=sched.go:371 msg="got lock to unload" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
[GIN] 2024/08/09 - 09:58:09 | 500 |    3.0697853s |       127.0.0.1 | POST     "/api/chat"
time=2024-08-09T09:58:09.875-07:00 level=DEBUG source=server.go:1048 msg="stopping llama server"
time=2024-08-09T09:58:09.875-07:00 level=DEBUG source=sched.go:376 msg="runner released" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.877-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.090068886 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.877-07:00 level=DEBUG source=sched.go:380 msg="sending an unloaded event" modelPath=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:14.878-07:00 level=DEBUG source=sched.go:303 msg="ignoring unload event with no pending requests"
time=2024-08-09T09:58:15.130-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.342633342 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:58:15.381-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.593264672 model=/home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=sched.go:119 msg="shutting down scheduler pending loop"
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=sched.go:313 msg="shutting down scheduler completed loop"
time=2024-08-09T09:59:07.125-07:00 level=DEBUG source=assets.go:112 msg="cleaning up" dir=/tmp/ollama1197517005

It appears to still load the model into VRAM even when 0 layers are offloaded.
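(For reference, per-request layer offload is normally controlled with the documented num_gpu option; a minimal sketch against the same /api/chat endpoint is below. In this trace the runner still touches the Vulkan device even with 0/29 layers offloaded, so this only illustrates how offload is usually pinned to zero, not a confirmed workaround.)

```sh
# Sketch only: explicitly request zero offloaded layers for a single request.
# num_gpu is the documented Ollama option for the number of layers sent to the GPU.
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "gemma",
  "messages": [{"role": "user", "content": "hello"}],
  "options": { "num_gpu": 0 }
}'
```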

Version: 0.3.4
FreeBSD 14.1

OS: Linux
GPU: Nvidia
CPU: Intel
Ollama version: 0.3.4

GiteaMirror added the bug label 2026-04-28 15:55:47 -05:00

@rick-github commented on GitHub (Aug 9, 2024):

OLLAMA_LLM_LIBRARY is not being ignored; the chosen runner is a CPU-based one:

time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:172 msg="user override" OLLAMA_LLM_LIBRARY=cpu path=/tmp/ollama1197517005/runners/cpu
time=2024-08-09T09:58:06.924-07:00 level=INFO source=server.go:390 msg="starting llama server" cmd="/tmp/ollama1197517005/runners/cpu/ollama_llama_server --model /home/yuri/.ollama/models/blobs/sha256-ef311de6af9db043d51ca4b1e766c28e0a1ac41d60420fed5e001dc470c64b77 --ctx-size 2048 --batch-size 512 --embedding --log-disable --verbose --parallel 1 --port 10149"

The question is why the CPU-based runner is trying to use the GPU. Perhaps another library problem?
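A quick way to test the library theory (a sketch only; the runner path is copied from the log above, and ldd will not reveal a Vulkan backend that was statically compiled into the binary):

```sh
# Check whether the "cpu" runner binary dynamically links the Vulkan loader.
ldd /tmp/ollama1197517005/runners/cpu/ollama_llama_server | grep -i vulkan

# Confirm which runner binary is actually serving the model.
ps aux | grep ollama_llama_server
```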


@yurivict commented on GitHub (Aug 9, 2024):

I verified that the cpu runner is the one that's actually running: `/tmp/ollama2814492440/runners/cpu/ollama_llama_server`


@dhiltgen commented on GitHub (Aug 9, 2024):

It looks like you're building from source locally and trying to get a Vulkan runner working. My suspicion is that the CPU runner is getting compiled with the Vulkan flags. #2033 tracks formally adding support. You can try setting `export CGO_CFLAGS="-g"`, then capture the generate output with something like `go generate ./... 2>&1 | tee generate.log` and scan through what flags are being passed into the different runner builds.
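Spelled out as a rough sequence (the grep pattern is only a guess at what to look for, not an exact flag name):

```sh
# Rebuild the runners with debug info and keep the generate output for inspection.
export CGO_CFLAGS="-g"
go generate ./... 2>&1 | tee generate.log

# Scan for Vulkan-related flags leaking into the cpu* runner builds.
grep -inE 'vulkan' generate.log
```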


@jmorganca commented on GitHub (Aug 13, 2024):

Hi there, thanks for the issue! There isn't built-in support for Vulkan yet. As @dhiltgen mentioned, you might be able to get additional debug info, but it might be hard to help here otherwise, sorry.
