[GH-ISSUE #8164] llama3.2 3B "will fit in available VRAM" of a Nvidia 4060 TI but then runs on CPU. llm server error #51724

Closed
opened 2026-04-28 20:48:12 -05:00 by GiteaMirror · 4 comments

Originally created by @felixniemeyer on GitHub (Dec 18, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/8164

What is the issue?

I'm trying to use llama3.2 on my Nvidia 4060 Ti 16GB, but Ollama runs it on the CPU.
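
One way to confirm where the model actually landed, independent of the server log, is the PROCESSOR column of ollama ps together with nvidia-smi. A minimal check (model tag assumed to be the default llama3.2):

ollama run llama3.2 "hello"   # loads the model and generates once; it stays resident for the keep-alive period
ollama ps                     # the PROCESSOR column shows "100% CPU" in this situation instead of "100% GPU"
nvidia-smi                    # GPU memory stays near idle while the answer is generated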

Here is the server log with debug-level logging enabled.

2024/12/18 22:54:10 routes.go:1194: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/felix/davdev/ai/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=images.go:753 msg="total blobs: 10"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-18T22:54:10.529+01:00 level=INFO source=routes.go:1245 msg="Listening on 127.0.0.1:11434 (version 0.5.1)"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:79 msg="runners located" dir=/usr/lib/ollama/runners
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:10.536+01:00 level=INFO source=routes.go:1274 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2]"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=routes.go:1275 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-18T22:54:10.536+01:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-18T22:54:10.536+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-18T22:54:10.538+01:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/lib/ollama/libcuda.so* /home/felix/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-18T22:54:10.569+01:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths="[/usr/lib/libcuda.so.550.135 /usr/lib64/libcuda.so.550.135]"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:10.631+01:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/usr/lib/libcuda.so.550.135
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] CUDA totalMem 16073 mb
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] CUDA freeMem 15886 mb
[GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d] Compute Capability 8.9
time=2024-12-18T22:54:10.720+01:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-18T22:54:10.720+01:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4060 Ti" total="15.7 GiB" available="15.5 GiB"
[GIN] 2024/12/18 - 22:54:22 | 200 |      33.213µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/12/18 - 22:54:22 | 200 |   17.212787ms |       127.0.0.1 | POST     "/api/show"
time=2024-12-18T22:54:22.318+01:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="31.3 GiB" before.free="30.2 GiB" before.free_swap="34.5 GiB" now.total="31.3 GiB" now.free="30.1 GiB" now.free_swap="34.5 GiB"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:22.405+01:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.5 GiB" now.total="15.7 GiB" now.free="15.5 GiB" now.used="186.7 MiB"
releasing cuda driver library
time=2024-12-18T22:54:22.405+01:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x562e5f053520 gpu_count=1
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[15.5 GiB]"
time=2024-12-18T22:54:22.436+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d parallel=4 available=16658268160 required="3.7 GiB"
time=2024-12-18T22:54:22.436+01:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="31.3 GiB" before.free="30.1 GiB" before.free_swap="34.5 GiB" now.total="31.3 GiB" now.free="30.1 GiB" now.free_swap="34.5 GiB"
initializing /usr/lib/libcuda.so.550.135
dlsym: cuInit - 0x7f884667cc90
dlsym: cuDriverGetVersion - 0x7f884667ccb0
dlsym: cuDeviceGetCount - 0x7f884667ccf0
dlsym: cuDeviceGet - 0x7f884667ccd0
dlsym: cuDeviceGetAttribute - 0x7f884667cdd0
dlsym: cuDeviceGetUuid - 0x7f884667cd30
dlsym: cuDeviceGetName - 0x7f884667cd10
dlsym: cuCtxCreate_v3 - 0x7f884667cfb0
dlsym: cuMemGetInfo_v2 - 0x7f8846686ef0
dlsym: cuCtxDestroy - 0x7f88466e18d0
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-2cdd1749-8b62-3f3e-4af8-08b5e1741b0d name="NVIDIA GeForce RTX 4060 Ti" overhead="0 B" before.total="15.7 GiB" before.free="15.5 GiB" now.total="15.7 GiB" now.free="15.5 GiB" now.used="186.7 MiB"
releasing cuda driver library
time=2024-12-18T22:54:22.513+01:00 level=INFO source=server.go:104 msg="system memory" total="31.3 GiB" free="30.1 GiB" free_swap="34.5 GiB"
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[15.5 GiB]"
time=2024-12-18T22:54:22.513+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[15.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:22.514+01:00 level=DEBUG source=gpu.go:714 msg="no filter required for library cpu"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --verbose --threads 8 --parallel 4 --port 41657"
time=2024-12-18T22:54:22.515+01:00 level=DEBUG source=server.go:393 msg=subprocess environment="[PATH=/opt/resolve/bin:/home/felix/scripts:/home/felix/.config/yarn/global/node_modules/.bin:/home/felix/.local/bin:/opt/google-cloud-cli/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/opt/cuda/nsight_compute:/opt/cuda/nsight_systems/bin:/usr/lib/jvm/default/bin:/home/felix/.npm/global-packages/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/lib/rustup/bin:/var/lib/snapd/snap/bin CUDA_PATH=/opt/cuda LD_LIBRARY_PATH=/usr/lib/ollama:/usr/lib/ollama:/usr/lib/ollama/runners/cpu_avx2]"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-18T22:54:22.515+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-18T22:54:22.518+01:00 level=INFO source=runner.go:946 msg="starting go runner"
time=2024-12-18T22:54:22.518+01:00 level=INFO source=runner.go:947 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=8
time=2024-12-18T22:54:22.519+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:41657"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.2 3B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Llama-3.2
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   7:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   8:                          llama.block_count u32              = 28
llama_model_loader: - kv   9:                       llama.context_length u32              = 131072
llama_model_loader: - kv  10:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv  11:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv  12:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv  13:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  14:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  15:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  16:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  17:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  18:                          general.file_type u32              = 15
llama_model_loader: - kv  19:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  20:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  24:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  25:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  28:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   58 tensors
llama_model_loader: - type q4_K:  168 tensors
llama_model_loader: - type q6_K:   29 tensors
time=2024-12-18T22:54:22.767+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 3
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.21 B
llm_load_print_meta: model size       = 1.87 GiB (5.01 BPW) 
llm_load_print_meta: general.name     = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.12 MiB
llm_load_tensors:        CPU buffer size =  1918.35 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
time=2024-12-18T22:54:23.018+01:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
llama_kv_cache_init:        CPU KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.00 MiB
llama_new_context_with_model:        CPU compute buffer size =   424.01 MiB
llama_new_context_with_model: graph nodes  = 902
llama_new_context_with_model: graph splits = 1
time=2024-12-18T22:54:23.270+01:00 level=INFO source=server.go:594 msg="llama runner started in 0.75 seconds"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
[GIN] 2024/12/18 - 22:54:23 | 200 |  968.500554ms |       127.0.0.1 | POST     "/api/generate"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff duration=5m0s
time=2024-12-18T22:54:23.270+01:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/home/felix/davdev/ai/ollama/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff refCount=0


OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.5.1

GiteaMirror added the bug label 2026-04-28 20:48:12 -05:00

@felixniemeyer commented on GitHub (Dec 18, 2024):

Oooooopsie,
I'm on Arch Linux, and there are separate ollama-cuda and ollama-rocm packages in pacman.
These have to be installed explicitly.
Now chatting with godspeed, thanks for your work!


@felixniemeyer commented on GitHub (Dec 19, 2024):

To add some details:
I saw in the logs that there are only CPU runners available:

...
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-18T22:54:22.513+01:00 level=DEBUG source=common.go:123 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
...

Of course it won't run on the GPU if all available runners are CPU runners.
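
A quick way to double-check this outside the debug log is to list the runners directory the log points at (path taken from the log above):

ls /usr/lib/ollama/runners/
# only cpu* entries show up here; a cuda_* runner appears once a GPU-enabled build is installed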

That made me wonder why pacman would ship without GPU runners, and I had the idea that maybe there are separate packages for things not everyone needs.

And indeed:

pacman -Ss ollama
extra/describeimage 1.3.2-1
    Describe images using Ollama
extra/fortunecraft 1.8.3-2
    Craft fortunes using Ollama
extra/llm-manager 1.2.1-1
    LLM task->model configuration utility
extra/ollama 0.5.1-2 [installed]
    Create, run and share large language models (LLMs)
extra/ollama-cuda 0.5.1-2 [installed]
    Create, run and share large language models (LLMs) with CUDA
extra/ollama-docs 0.5.1-2
    Documentation for Ollama
extra/ollama-rocm 0.5.1-2
    Create, run and share large language models (LLMs) with ROCm
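
So for anyone else landing here, the fix amounts to installing the CUDA-enabled package and restarting the server. A sketch (the restart line assumes the packaged systemd unit; otherwise just restart ollama serve):

sudo pacman -S ollama-cuda      # pulls in the CUDA runners alongside extra/ollama
sudo systemctl restart ollama   # restart so the server rediscovers the available runners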

hope this helps


@xaionaro commented on GitHub (Dec 25, 2024):

I have the same problem on Ubuntu, but there is no ollama-cuda package there :(
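
On Ubuntu the prebuilt CUDA runners normally come with the official install script rather than a separate distro package, so an alternative to the source build in the next comment (script URL as documented on ollama.com) would be:

curl -fsSL https://ollama.com/install.sh | sh   # official installer, which ships the bundled GPU runners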


@xaionaro commented on GitHub (Dec 25, 2024):

For me the solution was to:

# Install NVIDIA's CUDA keyring and toolkit from the official Ubuntu 24.04 repository
cd /tmp
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit

# Then build ollama from source so the CUDA runners get compiled in
cd ~/go/src
mkdir -p github.com/ollama
cd github.com/ollama
git clone https://github.com/ollama/ollama/
cd ollama
make -j12
Reference: github-starred/ollama#51724