[GH-ISSUE #7622] ollama doesn't seem to use my GPU after update #51375

Closed
opened 2026-04-28 19:43:05 -05:00 by GiteaMirror · 46 comments

Originally created by @miguelmarco on GitHub (Nov 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7622

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I had ollama compiled from source and it worked fine. Recently I rebuilt it at the latest version, and it no longer seems to use my GPU (it uses a lot of CPU processes and runs much slower).

Here is the output of the server:

2024/11/11 21:50:18 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/mmarco/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-11T21:50:18.708+01:00 level=INFO source=images.go:755 msg="total blobs: 39"
time=2024-11-11T21:50:18.709+01:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-11-11T21:50:18.710+01:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-11-11T21:50:18.711+01:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama3024011950/runners
time=2024-11-11T21:50:18.837+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cpu]"
time=2024-11-11T21:50:18.837+01:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-11T21:50:19.018+01:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="20.0 GiB"
[GIN] 2024/11/11 - 21:51:17 | 200 |      68.061µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/11 - 21:51:17 | 200 |   34.186091ms |       127.0.0.1 | POST     "/api/show"
time=2024-11-11T21:51:17.462+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 gpu=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 parallel=4 available=21468807168 required="6.2 GiB"
time=2024-11-11T21:51:17.604+01:00 level=INFO source=server.go:105 msg="system memory" total="62.7 GiB" free="50.0 GiB" free_swap="0 B"
time=2024-11-11T21:51:17.605+01:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[20.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-11-11T21:51:17.606+01:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama3024011950/runners/cpu_avx2/ollama_llama_server --model /home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 12 --parallel 4 --port 38059"
time=2024-11-11T21:51:17.607+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-11T21:51:17.607+01:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-11T21:51:17.607+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-11T21:51:17.613+01:00 level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-11T21:51:17.613+01:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=12
time=2024-11-11T21:51:17.613+01:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:38059"
llama_model_loader: loaded meta data with 29 key-value pairs and 291 tensors from /home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-11-11T21:51:17.858+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.03 B
llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW) 
llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  4437.80 MiB
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     2.02 MiB
llama_new_context_with_model:        CPU compute buffer size =   560.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
time=2024-11-11T21:51:20.115+01:00 level=INFO source=server.go:601 msg="llama runner started in 2.51 seconds"
[GIN] 2024/11/11 - 21:51:20 | 200 |  2.903863177s |       127.0.0.1 | POST     "/api/generate"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

main branch (36a8372b28)

GiteaMirror added the build and bug labels 2026-04-28 19:43:05 -05:00

@miguelmarco commented on GitHub (Nov 11, 2024):

Apparently, it correctly detects the GPU:

time=2024-11-11T21:50:19.018+01:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="20.0 GiB"

And it claims that it loads the model into GPU memory:

time=2024-11-11T21:51:17.462+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 gpu=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 parallel=4 available=21468807168 required="6.2 GiB"
time=2024-11-11T21:51:17.604+01:00 level=INFO source=server.go:105 msg="system memory" total="62.7 GiB" free="50.0 GiB" free_swap="0 B"
time=2024-11-11T21:51:17.605+01:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[20.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"

but the result is that it uses the CPU.
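
(For completeness, a quick way to confirm where a loaded model ended up: ollama ps reports the CPU/GPU split for loaded models, and nvidia-smi should list the ollama_llama_server process when layers are actually offloaded.)

```
ollama ps    # the PROCESSOR column shows e.g. "100% CPU" vs "100% GPU"
nvidia-smi   # ollama_llama_server shows up here when the GPU is actually used
```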


@rick-github commented on GitHub (Nov 11, 2024):

time=2024-11-11T21:50:18.837+01:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cpu]"

Arch Linux user? https://github.com/ollama/ollama/issues/7564
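
That runner list comes from the payloads extracted at startup, so listing the extraction directory from the log above is a quick way to confirm that no GPU runner was built into the binary (the temp directory name changes per run):

```
ls /tmp/ollama*/runners/
# a CPU-only build shows only: cpu  cpu_avx  cpu_avx2
# a GPU-enabled build also contains a cuda_v11/cuda_v12 (and/or rocm) directory
```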


@miguelmarco commented on GitHub (Nov 12, 2024):

Gentoo.

I think I found part of the problem: the build scripts don't find my CUDA install (which is in /opt/cuda).

After hand-editing some of the makefiles in the llama/make directory, I could trigger the build of the corresponding CUDA code, but the compilation eventually failed with this error:

GOARCH=amd64 CGO_LDFLAGS="-L"/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12/" " go build -buildmode=pie  "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.4.1-5-g36a8372-dirty\" \"-X=github.com/ollama/ollama/llama.CpuFeatures=avx\" " -trimpath -tags avx,cuda,cuda_v12 -o /home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12/ollama_llama_server ./runner
# github.com/ollama/ollama/llama
ggml.c: In function ‘ggml_vec_mad_f16’:
ggml.c:2378:45: warning: passing argument 1 of ‘__avx_f32cx8_load’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
 2378 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                                             ^
ggml.c:1458:51: note: in definition of macro ‘GGML_F32Cx8_LOAD’
 1458 | #define GGML_F32Cx8_LOAD(x)     __avx_f32cx8_load(x)
      |                                                   ^
ggml.c:2378:21: note: in expansion of macro ‘GGML_F16_VEC_LOAD’
 2378 |             ax[j] = GGML_F16_VEC_LOAD(x + i + j*GGML_F16_EPR, j);
      |                     ^~~~~~~~~~~~~~~~~
ggml.c:1441:53: note: expected ‘ggml_fp16_t *’ {aka ‘short unsigned int *’} but argument is of type ‘const ggml_fp16_t *’ {aka ‘const short unsigned int *’}
 1441 | static inline __m256 __avx_f32cx8_load(ggml_fp16_t *x) {
      |                                        ~~~~~~~~~~~~~^
# github.com/ollama/ollama/llama/runner
/usr/lib/go/pkg/tool/linux_amd64/link: running x86_64-pc-linux-gnu-g++ failed: exit status 1
/usr/bin/x86_64-pc-linux-gnu-g++ -m64 -s -Wl,-z,relro -pie -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Wl,--export-dynamic-symbol=llamaProgressCallback -Wl,--compress-debug-sections=zlib /tmp/go-link-2973296743/go.o /tmp/go-link-2973296743/000000.o /tmp/go-link-2973296743/000001.o /tmp/go-link-2973296743/000002.o /tmp/go-link-2973296743/000003.o /tmp/go-link-2973296743/000004.o /tmp/go-link-2973296743/000005.o /tmp/go-link-2973296743/000006.o /tmp/go-link-2973296743/000007.o /tmp/go-link-2973296743/000008.o /tmp/go-link-2973296743/000009.o /tmp/go-link-2973296743/000010.o /tmp/go-link-2973296743/000011.o /tmp/go-link-2973296743/000012.o /tmp/go-link-2973296743/000013.o /tmp/go-link-2973296743/000014.o /tmp/go-link-2973296743/000015.o /tmp/go-link-2973296743/000016.o /tmp/go-link-2973296743/000017.o /tmp/go-link-2973296743/000018.o /tmp/go-link-2973296743/000019.o /tmp/go-link-2973296743/000020.o /tmp/go-link-2973296743/000021.o /tmp/go-link-2973296743/000022.o /tmp/go-link-2973296743/000023.o /tmp/go-link-2973296743/000024.o /tmp/go-link-2973296743/000025.o /tmp/go-link-2973296743/000026.o /tmp/go-link-2973296743/000027.o /tmp/go-link-2973296743/000028.o /tmp/go-link-2973296743/000029.o /tmp/go-link-2973296743/000030.o /tmp/go-link-2973296743/000031.o /tmp/go-link-2973296743/000032.o /tmp/go-link-2973296743/000033.o /tmp/go-link-2973296743/000034.o /tmp/go-link-2973296743/000035.o /tmp/go-link-2973296743/000036.o /tmp/go-link-2973296743/000037.o /tmp/go-link-2973296743/000038.o /tmp/go-link-2973296743/000039.o /tmp/go-link-2973296743/000040.o /tmp/go-link-2973296743/000041.o /tmp/go-link-2973296743/000042.o /tmp/go-link-2973296743/000043.o -L/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12/ -lggml_cuda_v12 -L/usr/local/cuda-12/lib64 -L/home/mmarco/ollama/llama/build/Linux/amd64 -L/home/mmarco/ollama/llama/build/Linux/amd64 -lcuda -lcudart -lcublas -lcublasLt -lpthread -ldl -lrt -lresolv -L/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12/ -lresolv -L/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12/ -lpthread
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lcudart: No such file or directory
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lcublas: No such file or directory
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: cannot find -lcublasLt: No such file or directory
collect2: error: ld returned 1 exit status

I guess it still doesn't find some of the libraries it needs to link against.

Any clue about the right way to make the build scripts use the CUDA install in that directory?
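
One thing I notice in the failing link command: it passes -L/usr/local/cuda-12/lib64, which doesn't exist on this machine, and never points at /opt/cuda/lib64. As a rough stopgap (just an assumption on my part, not the build scripts' intended mechanism), GCC also searches LIBRARY_PATH, so exporting it before building might at least let the link step find the CUDA runtime libraries:

```
# hypothetical workaround: add the real CUDA library directory to the linker search path
export LIBRARY_PATH=/opt/cuda/lib64:$LIBRARY_PATH
make
```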


@dhiltgen commented on GitHub (Nov 12, 2024):

This should be resolved by #7499


@miguelmarco commented on GitHub (Nov 12, 2024):

Thank you! I am testing it now.

It didn't automatically select my CUDA install, because it seems to look for a directory ending in -11 or -12 (mine is just /opt/cuda), but I could trick it by creating a symlink.
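
(For reference, the trick amounts to a symlink along these lines; the exact location the detection scans isn't spelled out here, so the target path below is only a guess:)

```
# hypothetical: expose /opt/cuda under a name ending in -12 so the detection picks it up
sudo ln -s /opt/cuda /usr/local/cuda-12
```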


@miguelmarco commented on GitHub (Nov 12, 2024):

And I get the error about not finding the cudart and cublas libraries.

I guess it's also a matter of looking in the lib directory instead of lib64?


@dhiltgen commented on GitHub (Nov 12, 2024):

Yes, it does currently have a hard-coded assumption of lib64 - https://github.com/ollama/ollama/pull/7499/files#diff-9ecc9a0012aabead74d0bed0e05fa907b943a0b4a82ca6244155e31b118f03f4R14

I'll see if I can soften that to prefer lib64 then fall back to lib if there isn't a lib64 present.
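
Roughly, the intent is something like this (a minimal sketch; the actual change lives in the build's make variables rather than a shell script):

```
# prefer <CUDA_PATH>/lib64, fall back to <CUDA_PATH>/lib when lib64 is absent
if [ -d "${CUDA_PATH}/lib64" ]; then
    CUDA_LIB_DIR="${CUDA_PATH}/lib64"
else
    CUDA_LIB_DIR="${CUDA_PATH}/lib"
fi
```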


@dhiltgen commented on GitHub (Nov 12, 2024):

Updated. The PR should handle this scenario now.


@miguelmarco commented on GitHub (Nov 13, 2024):

Actually, libcublas, libcudart, and libcublasLt are in /opt/cuda/lib64, and the build scripts still don't seem to find them.


@dhiltgen commented on GitHub (Nov 13, 2024):

On that branch, make help-runners will report what we autodetected and guide you to set CUDA_PATH to help locate it if we didn't find it by default. If you set CUDA_PATH=/opt/cuda I believe that should work. If that doesn't work, can you share what you see in /opt and /opt/cuda* on your system so I can adjust the paths to pick it up?


@miguelmarco commented on GitHub (Nov 14, 2024):

This is the content of my /opt directory:

Signal  bin  brave  cuda  dropbox  firefox  openjdk-bin-17  openjdk-bin-17.0.13_p11  rust-bin-1.81.0  vscode

and of /opt/cuda/

/opt/cuda/
├── bin
│   └── crt
├── include -> targets/x86_64-linux/include
├── include.backup.0000 -> targets/x86_64-linux/include
├── lib64 -> targets/x86_64-linux/lib
├── lib64.backup.0000 -> targets/x86_64-linux/lib
├── nvml
│   └── example
├── nvvm
│   ├── bin
│   ├── include
│   ├── lib64
│   └── libdevice
└── targets
    └── x86_64-linux
        ├── include
        │   ├── CL
        │   ├── cooperative_groups
        │   ├── crt
        │   ├── cub
        │   ├── cuda
        │   ├── nv
        │   ├── nvtx3
        │   └── thrust
        └── lib
            ├── cmake
            └── stubs

This is what I get:

mmarco@neumann ~/ollama $ export CUDA_PATH=/opt/cuda/
mmarco@neumann ~/ollama $ make help-runners
The following runners will be built based on discovered GPU libraries: 'default'
(On MacOS arm64 'default' is the metal runner.  For all other platforms 'default' is one or more CPU runners)

GPU Runner CPU Flags: 'avx'  (Override with CUSTOM_CPU_FLAGS)

# CUDA_PATH sets the location where CUDA toolkits are present
CUDA_PATH=/opt/cuda/
        CUDA_11=
        CUDA_11_COMPILER=
        CUDA_12=
        CUDA_12_COMPILER=

# HIP_PATH sets the location where the ROCm toolkit is present
HIP_PATH=/opt/rocm
        HIP_COMPILER=

@dhiltgen commented on GitHub (Nov 14, 2024):

Interesting. So there's no symlink to expose the version of the toolkit (e.g. /opt/cuda-11 -> /opt/cuda, or vice versa)?

It doesn't look like there's an obvious way to know the version from the filesystem without running commands and parsing output. You can try make help-runners CUDA_12=/opt/cuda (assuming it's v12) and see if that looks better, and then try building with make cuda_v12 CUDA_12=/opt/cuda (there still might be some broken assumptions in the make variables though, I'll have to play with this pattern a bit)
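
In other words, roughly this (cuda_v12 assumed, since the log above reports a CUDA 12.4 driver):

```
make help-runners CUDA_12=/opt/cuda   # 'cuda_v12' should now show up in the runner list
make cuda_v12 CUDA_12=/opt/cuda       # then build just the CUDA v12 runner
```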


@dhiltgen commented on GitHub (Nov 14, 2024):

One other question - is there a way to install both v11 and v12 toolkits on your distro, and if so, what does that look like?


@miguelmarco commented on GitHub (Nov 14, 2024):

This seems to work:

mmarco@neumann ~/ollama $ make help-runners CUDA_12=/opt/cuda
The following runners will be built based on discovered GPU libraries: 'default cuda_v12'
(On MacOS arm64 'default' is the metal runner.  For all other platforms 'default' is one or more CPU runners)

GPU Runner CPU Flags: 'avx'  (Override with CUSTOM_CPU_FLAGS)

# CUDA_PATH sets the location where CUDA toolkits are present
CUDA_PATH=/usr/local/cuda
        CUDA_11=
        CUDA_11_COMPILER=
        CUDA_12=/opt/cuda
        CUDA_12_COMPILER=/opt/cuda/bin/nvcc

# HIP_PATH sets the location where the ROCm toolkit is present
HIP_PATH=/opt/rocm
        HIP_COMPILER=

As far as I know, the Gentoo package manager doesn't install two separate versions of CUDA.

However, if I try to run make, I get this as part of the output:

shared -L -lcuda -L../dist/linux-amd64/lib/ollama  -lcublas  -lcudart  -lcublasLt ./build/linux-amd64/ggml-cuda.cuda_v12.o ./build/linux-amd64/ggml-cuda/acc.cuda_v12.o ./build/linux-amd64/ggml-cuda/arange.cuda_v12.o ./build/linux-amd64/ggml-cuda/argsort.cuda_v12.o ./build/linux-amd64/ggml-cuda/binbcast.cuda_v12.o ./build/linux-amd64/ggml-cuda/clamp.cuda_v12.o ./build/linux-amd64/ggml-cuda/concat.cuda_v12.o ./build/linux-amd64/ggml-cuda/conv-transpose-1d.cuda_v12.o ./build/linux-amd64/ggml-cuda/convert.cuda_v12.o ./build/linux-amd64/ggml-cuda/cpy.cuda_v12.o ./build/linux-amd64/ggml-cuda/cross-entropy-loss.cuda_v12.o ./build/linux-amd64/ggml-cuda/diagmask.cuda_v12.o ./build/linux-amd64/ggml-cuda/dmmv.cuda_v12.o ./build/linux-amd64/ggml-cuda/getrows.cuda_v12.o ./build/linux-amd64/ggml-cuda/im2col.cuda_v12.o ./build/linux-amd64/ggml-cuda/mmq.cuda_v12.o ./build/linux-amd64/ggml-cuda/mmvq.cuda_v12.o ./build/linux-amd64/ggml-cuda/norm.cuda_v12.o ./build/linux-amd64/ggml-cuda/opt-step-adamw.cuda_v12.o ./build/linux-amd64/ggml-cuda/out-prod.cuda_v12.o ./build/linux-amd64/ggml-cuda/pad.cuda_v12.o ./build/linux-amd64/ggml-cuda/pool2d.cuda_v12.o ./build/linux-amd64/ggml-cuda/quantize.cuda_v12.o ./build/linux-amd64/ggml-cuda/rope.cuda_v12.o ./build/linux-amd64/ggml-cuda/rwkv-wkv.cuda_v12.o ./build/linux-amd64/ggml-cuda/scale.cuda_v12.o ./build/linux-amd64/ggml-cuda/softmax.cuda_v12.o ./build/linux-amd64/ggml-cuda/sum.cuda_v12.o ./build/linux-amd64/ggml-cuda/sumrows.cuda_v12.o ./build/linux-amd64/ggml-cuda/tsembd.cuda_v12.o ./build/linux-amd64/ggml-cuda/unary.cuda_v12.o ./build/linux-amd64/ggml-cuda/upscale.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq1_s.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq2_s.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq2_xs.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq3_s.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq4_nl.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-iq4_xs.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q2_k.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q3_k.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q4_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q4_1.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q4_k.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q5_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q5_1.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q5_k.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q6_k.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/mmq-instance-q8_0.cuda_v12.o ./build/linux-amd64/ggml.cuda_v12.o ./build/linux-amd64/ggml-backend.cuda_v12.o ./build/linux-amd64/ggml-alloc.cuda_v12.o ./build/linux-amd64/ggml-quants.cuda_v12.o ./build/linux-amd64/sgemm.cuda_v12.o ./build/linux-amd64/ggml-aarch64.cuda_v12.o ./build/linux-amd64/ggml-cuda/fattn-tile-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/fattn-tile-f32.cuda_v12.o ./build/linux-amd64/ggml-cuda/fattn.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqfloat-cpb16.cuda_v12.o 
./build/linux-amd64/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqfloat-cpb32.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb32.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-wmma-f16-instance-kqhalf-cpb8.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q4_0-q4_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q4_0-q4_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-q8_0-q8_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-q8_0-q8_0.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f16-instance-hs128-f16-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f16-instance-hs256-f16-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f32-instance-hs128-f16-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f32-instance-hs256-f16-f16.cuda_v12.o ./build/linux-amd64/ggml-cuda/template-instances/fattn-vec-f32-instance-hs64-f16-f16.cuda_v12.o -o build/linux-amd64/libggml_cuda_v12.so
make[2]: shared: No such file or directory

It seems that the build system is trying to run some command named shared, which doesn't exist on my system.

And the compilation ends up dying with this error message:

/usr/lib/go/pkg/tool/linux_amd64/link: running x86_64-pc-linux-gnu-g++ failed: exit status 1
/usr/bin/x86_64-pc-linux-gnu-g++ -m64 -s -Wl,-z,relro -pie -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Wl,--export-dynamic-symbol=llamaProgressCallback -Wl,--compress-debug-sections=zlib /tmp/go-link-877724444/go.o /tmp/go-link-877724444/000000.o /tmp/go-link-877724444/000001.o /tmp/go-link-877724444/000002.o /tmp/go-link-877724444/000003.o /tmp/go-link-877724444/000004.o /tmp/go-link-877724444/000005.o /tmp/go-link-877724444/000006.o /tmp/go-link-877724444/000007.o /tmp/go-link-877724444/000008.o /tmp/go-link-877724444/000009.o /tmp/go-link-877724444/000010.o /tmp/go-link-877724444/000011.o /tmp/go-link-877724444/000012.o /tmp/go-link-877724444/000013.o /tmp/go-link-877724444/000014.o /tmp/go-link-877724444/000015.o /tmp/go-link-877724444/000016.o /tmp/go-link-877724444/000017.o /tmp/go-link-877724444/000018.o /tmp/go-link-877724444/000019.o /tmp/go-link-877724444/000020.o /tmp/go-link-877724444/000021.o /tmp/go-link-877724444/000022.o /tmp/go-link-877724444/000023.o /tmp/go-link-877724444/000024.o /tmp/go-link-877724444/000025.o /tmp/go-link-877724444/000026.o /tmp/go-link-877724444/000027.o /tmp/go-link-877724444/000028.o /tmp/go-link-877724444/000029.o /tmp/go-link-877724444/000030.o /tmp/go-link-877724444/000031.o /tmp/go-link-877724444/000032.o /tmp/go-link-877724444/000033.o /tmp/go-link-877724444/000034.o /tmp/go-link-877724444/000035.o /tmp/go-link-877724444/000036.o /tmp/go-link-877724444/000037.o /tmp/go-link-877724444/000038.o /tmp/go-link-877724444/000039.o /tmp/go-link-877724444/000040.o /tmp/go-link-877724444/000041.o /tmp/go-link-877724444/000042.o /tmp/go-link-877724444/000043.o -L./build/linux-amd64/runners/cuda_v12_avx/ -lggml_cuda_v12 -L/usr/local/cuda-12/lib64 -L/home/mmarco/ollama/llama/build/linux-amd64 -lcuda -lcudart -lcublas -lcublasLt -lpthread -ldl -lrt -lresolv -L./build/linux-amd64/runners/cuda_v12_avx/ -lresolv -L./build/linux-amd64/runners/cuda_v12_avx/ -lpthread
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcudart: No existe el fichero o el directorio
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcublas: No existe el fichero o el directorio
/usr/lib/gcc/x86_64-pc-linux-gnu/13/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcublasLt: No existe el fichero o el directorio
collect2: error: ld devolvió el estado de salida 1

So it seems that the linker can't find the cudart, cublas, and cublasLt libraries.
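A quick way to sanity-check this is to compare the -L paths in the failing link line with where the CUDA libraries actually live. The paths below are only examples (the link line injects /usr/local/cuda-12, while a Gentoo/Arch install usually sits under /opt/cuda):

```
# Is the dynamic loader aware of cudart/cublas at all?
ldconfig -p | grep -E 'libcudart|libcublas'

# Does the path hardcoded in the link line exist?
ls /usr/local/cuda-12/lib64/libcudart* 2>/dev/null

# Typical location on Gentoo/Arch installs instead
ls /opt/cuda/lib64/libcudart* /opt/cuda/lib64/libcublas* 2>/dev/null
```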

@dhiltgen commented on GitHub (Nov 14, 2024):

@miguelmarco thanks - I see the bug. I'll get the branch updated soon to fix that. Until then, adding `GPU_COMPILER=/opt/cuda/bin/nvcc` should override the broken variable expansion.
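For reference, a minimal invocation with that override might look like the line below; the cuda_v12 target and the -j value are taken from later comments in this thread rather than from this comment, so adjust as needed:

```
make -j"$(nproc)" GPU_COMPILER=/opt/cuda/bin/nvcc cuda_v12
```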

@miguelmarco commented on GitHub (Nov 15, 2024):

Thanks, that seems to work.

However, after compiling, when I try to run it and load a model, it fails with the message:

Error: llama runner process has terminated: exit status 127

In the server log, the error seems to show up here:

[GIN] 2024/11/15 - 10:39:37 | 200 |       33.19µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/11/15 - 10:39:37 | 200 |   31.050327ms |       127.0.0.1 | POST     "/api/show"
time=2024-11-15T10:39:38.231+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 gpu=GPU-d421266e-e17b-d0ef-24f5-88a6f702d374 parallel=4 available=24340529152 required="6.2 GiB"
time=2024-11-15T10:39:38.375+01:00 level=INFO source=server.go:105 msg="system memory" total="62.7 GiB" free="57.2 GiB" free_swap="0 B"
time=2024-11-15T10:39:38.376+01:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-11-15T10:39:38.377+01:00 level=INFO source=server.go:383 msg="starting llama server" cmd="/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12_avx/ollama_llama_server --model /home/mmarco/.ollama/models/blobs/sha256-87048bcd55216712ef14c11c2c303728463207b165bf18440b9b84b07ec00f87 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --threads 12 --parallel 4 --port 35243"
time=2024-11-15T10:39:38.377+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-15T10:39:38.378+01:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
/home/mmarco/ollama/llama/build/linux-amd64/runners/cuda_v12_avx/ollama_llama_server: error while loading shared libraries: libggml_cuda_v12.so: cannot open shared object file: No such file or directory
time=2024-11-15T10:39:38.378+01:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-15T10:39:38.628+01:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 127"
[GIN] 2024/11/15 - 10:39:38 | 500 |  649.996652ms |       127.0.0.1 | POST     "/api/generate"

It seems it didn't build the library `libggml_cuda_v12.so`.
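A couple of quick checks for this failure mode (a sketch only; the paths follow the log above and may differ on other setups):

```
# Was the CUDA backend library produced at all?
find llama/build -name 'libggml_cuda_v12.so'

# Which shared objects can the runner binary not resolve?
ldd llama/build/linux-amd64/runners/cuda_v12_avx/ollama_llama_server | grep 'not found'

# Temporary workaround if the library exists but isn't on the search path
export LD_LIBRARY_PATH="$PWD/llama/build/linux-amd64:$LD_LIBRARY_PATH"
```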

@dhiltgen commented on GitHub (Nov 18, 2024):

@miguelmarco the paths weren't quite correct to find the locally built libraries at runtime, but should be fixed now on the branch. Give it another try and let us know how it goes.

@miguelmarco commented on GitHub (Nov 19, 2024):

Now it doesn't work even when giving an explicit CUDA_12 variable:

mmarco@neumann ~/ollama $ make help-runners CUDA_12=/opt/cuda
The following runners will be built based on discovered GPU libraries: 'default'
(On MacOS arm64 'default' is the metal runner.  For all other platforms 'default' is one or more CPU runners)

GPU Runner CPU Flags: 'avx'  (Override with CUSTOM_CPU_FLAGS)

# CUDA_PATH sets the location where CUDA toolkits are present
CUDA_PATH=/usr/local/cuda
        CUDA_11_PATH=
        CUDA_11_COMPILER=
        CUDA_12_PATH=
        CUDA_12_COMPILER=

# HIP_PATH sets the location where the ROCm toolkit is present
HIP_PATH=
        HIP_COMPILER=
@miguelmarco commented on GitHub (Nov 19, 2024):

Setting CUDA_12_PATH, I get the same message about not finding the libraries again (after some time compiling):

make -j 22 cuda_v12 CUDA_12_PATH=/opt/cuda
(...)
/usr/lib/go/pkg/tool/linux_amd64/link: running x86_64-pc-linux-gnu-g++ failed: exit status 1
/usr/bin/x86_64-pc-linux-gnu-g++ -m64 -s -Wl,-z,relro -pie -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Wl,--export-dynamic-symbol=llamaProgressCallback -Wl,--compress-debug-sections=zlib /tmp/go-link-833398932/go.o /tmp/go-link-833398932/000000.o /tmp/go-link-833398932/000001.o /tmp/go-link-833398932/000002.o /tmp/go-link-833398932/000003.o /tmp/go-link-833398932/000004.o /tmp/go-link-833398932/000005.o /tmp/go-link-833398932/000006.o /tmp/go-link-833398932/000007.o /tmp/go-link-833398932/000008.o /tmp/go-link-833398932/000009.o /tmp/go-link-833398932/000010.o /tmp/go-link-833398932/000011.o /tmp/go-link-833398932/000012.o /tmp/go-link-833398932/000013.o /tmp/go-link-833398932/000014.o /tmp/go-link-833398932/000015.o /tmp/go-link-833398932/000016.o /tmp/go-link-833398932/000017.o /tmp/go-link-833398932/000018.o /tmp/go-link-833398932/000019.o /tmp/go-link-833398932/000020.o /tmp/go-link-833398932/000021.o /tmp/go-link-833398932/000022.o /tmp/go-link-833398932/000023.o /tmp/go-link-833398932/000024.o /tmp/go-link-833398932/000025.o /tmp/go-link-833398932/000026.o /tmp/go-link-833398932/000027.o /tmp/go-link-833398932/000028.o /tmp/go-link-833398932/000029.o /tmp/go-link-833398932/000030.o /tmp/go-link-833398932/000031.o /tmp/go-link-833398932/000032.o /tmp/go-link-833398932/000033.o /tmp/go-link-833398932/000034.o /tmp/go-link-833398932/000035.o /tmp/go-link-833398932/000036.o /tmp/go-link-833398932/000037.o /tmp/go-link-833398932/000038.o /tmp/go-link-833398932/000039.o /tmp/go-link-833398932/000040.o /tmp/go-link-833398932/000041.o /tmp/go-link-833398932/000042.o /tmp/go-link-833398932/000043.o -L./build/linux-amd64/runners/cuda_v12_avx/ -lggml_cuda_v12 -L/usr/local/cuda-12/lib64 -L/home/mmarco/ollama/llama/build/linux-amd64 -lcuda -lcudart -lcublas -lcublasLt -lpthread -ldl -lrt -lresolv -L./build/linux-amd64/runners/cuda_v12_avx/ -lresolv -L./build/linux-amd64/runners/cuda_v12_avx/ -lpthread
/usr/lib/gcc/x86_64-pc-linux-gnu/12/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcudart: No existe el fichero o el directorio
/usr/lib/gcc/x86_64-pc-linux-gnu/12/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcublas: No existe el fichero o el directorio
/usr/lib/gcc/x86_64-pc-linux-gnu/12/../../../../x86_64-pc-linux-gnu/bin/ld: no se puede encontrar -lcublasLt: No existe el fichero o el directorio
collect2: error: ld devolvió el estado de salida 1

make[2]: *** [make/gpu.make:69: build/linux-amd64/runners/cuda_v12_avx/ollama_llama_server] Error 1
make[1]: *** [Makefile:51: cuda_v12] Error 2
make: *** [Makefile:4: cuda_v12] Error 2
@navr32 commented on GitHub (Nov 19, 2024):

Hi all!
I'm working on Manjaro (an Arch-based distribution). I just upgraded my ollama source checkout from v0.3.14-11-g3085c47b to main, and I have the same problem as @miguelmarco: the GPU runner cuda_v12 is not built, so when starting ollama serve only the "cpu cpu_avx cpu_avx2" runners are present.

I have tried many tags and CPU definitions, but nothing works. I have also tried @dhiltgen's branches "make_target" and "info_ux", but I get the same problems.

As suggested in other posts, I also tried make help-runners, but it gives no targets.

@maxer456 commented on GitHub (Nov 20, 2024):

Hello, I also have the same linking issue, also on Gentoo.
It looks like the linking path doesn't come from the makefiles, but from llama/llama.go:
https://github.com/dhiltgen/ollama/blob/2a0716598ab7128f91012310e7ce44bda5046142/llama/llama.go#L29

As a workaround, changing that to my cuda directory (/opt/cuda/lib64) works. Then the cuda_v12 target (using CUDA_12_PATH) compiles, go build . works as well and running OLLAMA_DEBUG=1 ./ollama serve confirms the runner starts successfully. I also tested running the llama3.2:1b model and that works just fine, executing the model on the GPU.
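For anyone on a similar layout, the change described above amounts to pointing the hardcoded cgo link path at the distro's CUDA directory. A hedged way to locate and patch it (the exact string in llama/llama.go may differ from this guess, so inspect it before editing):

```
# See which CUDA library path is hardcoded in the cgo directives
grep -n 'cuda' llama/llama.go

# Example edit for an /opt/cuda install (verify the matched string first)
sed -i 's|/usr/local/cuda-12/lib64|/opt/cuda/lib64|' llama/llama.go
```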

@maxer456 commented on GitHub (Nov 21, 2024):

With the latest changes on @dhiltgen's make_targets branch, compilation works for me with just supplying CUDA_12_PATH, including make cuda_v12, make all and make dist.
Thank you!

@navr32 commented on GitHub (Dec 6, 2024):

Success with the latest branch!

I wanted to thank @dhiltgen and all other contributors for their amazing work! After testing the latest branch, everything compiles and runs fine now, even without an AVX processor, with
full GPU acceleration on my Nvidia GPUs. Thanks again!

Environment

I'm currently running Manjaro, an Arch-based distribution.

Steps to test

To test, I followed these steps:

  1. Clone the repository: git clone https://github.com/dhiltgen/ollama
  2. Switch to the make_targets branch: cd ollama/ then git checkout make_targets

Verify CUDA detection

To verify that the CUDA path is correct and detected, I ran:

make CUDA_12_PATH=/opt/cuda help-runners

This responded with:

The following runners will be built based on discovered GPU libraries: 'cpu cuda_v12 rocm'

This confirms that CUDA is properly detected!

Build and run

Finally, I built with the following options:

make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/opt/cuda cuda_v12
go build .

And everything works fine! All GPUs are detected and models run on the GPU without AVX. Congratulations!
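For anyone wanting to verify the same thing, one way to confirm GPU offload after a local build is sketched below (the model name is just an example, and ollama ps reports placement in its PROCESSOR column):

```
./ollama serve &              # start the locally built server
./ollama run llama3.2:1b "hi" # load a small model
./ollama ps                   # PROCESSOR should report GPU, not CPU
nvidia-smi                    # the runner process should appear with VRAM allocated
```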

@kelvin-bbn commented on GitHub (Dec 29, 2024):

Hello @dhiltgen. Thank you for your work on CPUs without AVX. I cloned the git repository, but I cannot get "git checkout make_targets" to work and there is no Makefile. I am new to compiling without a Makefile, so I may be doing things wrong. I am running Ubuntu 24.04 on an old Intel machine with an RTX 3050. I need some examples.

Thank you!

@rick-github commented on GitHub (Dec 29, 2024):

https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-vector-settings

@Gbrothers1 commented on GitHub (Dec 29, 2024):

`git checkout make_targets` is also not working:
root@AI-SEVER ~/ollama (main)# git checkout make_targets (cuda_env)
error: pathspec 'make_targets' did not match any file(s) known to git

@rick-github commented on GitHub (Dec 29, 2024):

https://github.com/ollama/ollama/blob/main/docs/development.md#advanced-cpu-vector-settings

@kelvin-bbn commented on GitHub (Dec 29, 2024):

@rick-github How does make work without a Makefile to guide it? I still get the following error:

make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/usr/local/cuda-12.6/ cuda_v12
make: *** No rule to make target 'cuda_v12'. Stop.

I have golang and cuda installed and in the path.
Thank you!

@rick-github commented on GitHub (Dec 30, 2024):

make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/usr/local/cuda-12.6/ RUNNER_TARGETS=cuda_v12 runners
@kelvin-bbn commented on GitHub (Dec 30, 2024):

@rick-github From the ~/ollama directory, I got the following error:

make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/usr/local/cuda-12.6/ RUNNER_TARGETS=cuda_v12 runners
make: *** No rule to make target 'runners'. Stop.

@rick-github commented on GitHub (Dec 30, 2024):

Is your repo up to date? `git log -n 1` to see where your repo is at, `git pull` to sync with the remote head.

@kelvin-bbn commented on GitHub (Dec 30, 2024):

It is up to date:
git pull
Already up to date.

git log -n 1
commit 608e87bf8707e377f1c195ae22330e26f67de91e (HEAD -> main, origin/main, origin/HEAD)
Author: Patrick Devine <patrick@infrahq.com>
Date: Thu Sep 5 17:02:28 2024 -0700

Fix gemma2 2b conversion (#6645)

I can clone the repo again. What do you recommend? I did git clone https://github.com/dhiltgen/ollama

@rick-github commented on GitHub (Dec 30, 2024):

https://github.com/ollama/ollama.git

@kelvin-bbn commented on GitHub (Dec 30, 2024):

It's compiling. Thank you @rick-github

@aaronchantrill commented on GitHub (Dec 31, 2024):

I'm on https://github.com/ollama/ollama and cannot find the "make_targets" branch. Did something change in the last couple of days?

@rick-github commented on GitHub (Jan 1, 2025):

There is no make_targets branch on the ollama repo; that's on dhiltgen's repo.

@iamangus commented on GitHub (Jan 1, 2025):

Are there plans to provide a container image built with these flags (or the lack thereof)?

@rick-github commented on GitHub (Jan 2, 2025):

Set flags and build a docker image:

docker build --build-arg VERSION=noavx --build-arg CUSTOM_CPU_FLAGS= --build-arg OLLAMA_SKIP_ROCM_GENERATE=1 --build-arg OLLAMA_FAST_BUILD=1 --platform=linux/amd64 -t ollama-noavx .
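Once the image builds, running it is the usual ollama Docker invocation (a sketch; it assumes the NVIDIA container toolkit is set up and reuses the tag from the build command above):

```
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama-noavx ollama-noavx
```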
@aaronchantrill commented on GitHub (Jan 12, 2025):

> There is no make_targets branch on the ollama repo, that's on dhiltgen's repo.

I can't find that branch on dhiltgen's repo either.

@rick-github commented on GitHub (Jan 12, 2025):

git clone https://github.com/ollama/ollama.git
cd ollama
make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/usr/local/cuda-12.6/ RUNNER_TARGETS=cuda_v12 runners
@ppereirasky commented on GitHub (Jan 27, 2025):

I think in a Windows environment it would be like this (in PowerShell with admin rights):

make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH="'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8'" CUDA_12_COMPILER="'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe'" RUNNER_TARGETS=cuda_v12 runners

Right?

@rick-github commented on GitHub (Jan 27, 2025):

I'm not a Windows guy. Try it and see what happens.

@ppereirasky commented on GitHub (Jan 28, 2025):

OK - for me - with an RTX GPU, a Celeron G3930 (no AVX support and only 2 cores / 2 threads), and Windows 10, I think the best approach is to install the CUDA Toolkit in a folder on C: with no spaces, like C:\CudaToolkit, and then compile and build like this:

make -j2 CUSTOM_CPU_FLAGS="" CUDA_12_PATH="C:\\CudaToolkit\\CUDA\\v12.8" CUDA_12_COMPILER="C:\\CudaToolkit\\CUDA\\v12.8\\bin\\nvcc.exe" CUDA_ARCHITECTURES="89;90;90a"

It automatically detects 2 runners to build, cpu and cuda_v12, and it finishes without errors - however, it takes "ages" to complete. I'm also only building for CUDA_ARCHITECTURES 89;90;90a because my RTX is the 4060 Ti 16GB, so there's no need for the older ones - without this it would compile for every CUDA generation, which would take much longer.

@Thot-Htp commented on GitHub (Jan 28, 2025):

@ppereirasky, how did you get the parameters for CUDA_ARCHITECTURES? I have an RTX 3060 Ti, but in general where can they be found?

Thanks

@rick-github commented on GitHub (Jan 28, 2025):

https://developer.nvidia.com/cuda-gpus

A 3060 starts at 86 (8.6).
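On reasonably recent drivers you can also query the value directly rather than looking it up (the output lines are only the expected shape, not captured from a real run):

```
nvidia-smi --query-gpu=name,compute_cap --format=csv
# name, compute_cap
# NVIDIA GeForce RTX 3060 Ti, 8.6
```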

@Thot-Htp commented on GitHub (Jan 28, 2025):

@rick-github Ah, so "architecture" = "compute capability" x 10

Thanks
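So for a 3060 Ti, the earlier build command would become something like the following (a sketch composed from rick-github's invocation above; extend the architecture list if you need to support more GPUs):

```
make -j16 CUSTOM_CPU_FLAGS="" CUDA_12_PATH=/usr/local/cuda-12.6/ \
     CUDA_ARCHITECTURES="86" RUNNER_TARGETS=cuda_v12 runners
```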

Reference: github-starred/ollama#51375