[GH-ISSUE #3538] binary install on a cluster produces extra information in responses in both cpu and gpu mode #27942

Closed
opened 2026-04-22 05:36:26 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @bozo32 on GitHub (Apr 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3538

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I installed Ollama on the university cluster following the instructions from the download page. The release page has a list of assets; one of them is a Linux binary named ollama-linux-amd64.

Just download it to your Linux cluster, then run the following:

start the server in the background
./ollama-linux-amd64 serve&

run a local model afterwards
./ollama-linux-amd64 run llama2

I had to run chmod +x ollama-linux-amd64 first, but then it worked. When running ./ollama-linux-amd64 run llama2, everything worked fine (if slowly), but there is extra information in the responses.

When I then used sinteractive to grab a GPU (A100 80GB), I had to re-install everything, which was fine, and it again produced lots of extra information in the responses.
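
For reference, the steps boil down to roughly the following; this is only a sketch of the commands used, and the log redirection is an optional extra (not part of the quoted instructions) that keeps the server output out of the interactive session:

# download the v0.1.30 Linux binary and make it executable
wget https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64
chmod +x ollama-linux-amd64

# start the server in the background, optionally redirecting its log to a file
./ollama-linux-amd64 serve &> server.log &

# run a local model against the running server
./ollama-linux-amd64 run llama2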

session starts in CPU mode

(base) tamas002@login0:~/ai$ wget https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64
--2024-04-08 13:01:04--  https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/658928958/bdcdb212-95c5-426d-9879-9e5b50876d89?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240408%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240408T110105Z&X-Amz-Expires=300&X-Amz-Signature=08db18a78d027fd8a9cdbf030599bb52ee8b576f3cc397c3d5553c9ef4ce68ce&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=658928958&response-content-disposition=attachment%3B%20filename%3Dollama-linux-amd64&response-content-type=application%2Foctet-stream [following]
--2024-04-08 13:01:05--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/658928958/bdcdb212-95c5-426d-9879-9e5b50876d89?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240408%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240408T110105Z&X-Amz-Expires=300&X-Amz-Signature=08db18a78d027fd8a9cdbf030599bb52ee8b576f3cc397c3d5553c9ef4ce68ce&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=658928958&response-content-disposition=attachment%3B%20filename%3Dollama-linux-amd64&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 297108760 (283M) [application/octet-stream]
Saving to: ‘ollama-linux-amd64’
ollama-linux-amd64             100%[===================================================>] 283.34M   265MB/s    in 1.1s

2024-04-08 13:01:06 (265 MB/s) - ‘ollama-linux-amd64’ saved [297108760/297108760]
(base) tamas002@login0:~/ai$ chmod +x ollama-*
(base) tamas002@login0:~/ai$ ./ollama-linux-amd64 serve&
[1] 1387761
(base) tamas002@login0:~/ai$ time=2024-04-08T13:01:31.191+02:00 level=INFO source=images.go:804 msg="total blobs: 114"
time=2024-04-08T13:01:33.095+02:00 level=INFO source=images.go:811 msg="total unused blobs removed: 95"
time=2024-04-08T13:01:33.098+02:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-08T13:01:33.112+02:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama1248928676/runners ..."
time=2024-04-08T13:01:36.039+02:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [rocm_v60000 cuda_v11 cpu_avx cpu_avx2 cpu]"
time=2024-04-08T13:01:36.039+02:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-08T13:01:36.039+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-04-08T13:01:36.041+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama1248928676/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-08T13:01:36.042+02:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama1248928676/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
time=2024-04-08T13:01:36.042+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-08T13:01:36.044+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-04-08T13:01:36.044+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:01:36.044+02:00 level=INFO source=routes.go:1141 msg="no GPU detected"
^C
(base) tamas002@login0:~/ai$ ps -x
    PID TTY      STAT   TIME COMMAND
1316477 ?        S      0:00 /shared/webapps/jupyterhub/central/3.10.9-3.1.1/bin/python3 /shared/webapps/jupyterhub/central
1316478 ?        Rl     0:23 /shared/webapps/jupyterhub/central/3.10.9-3.1.1/bin/python /shared/webapps/jupyterhub/central/
1316584 pts/0    Ss     0:00 /bin/bash -l
1387761 pts/0    Sl     0:05 ./ollama-linux-amd64 serve
1388237 pts/0    R+     0:00 ps -x
(base) tamas002@login0:~/ai$ ./ollama-linux-amd64 run llama2
[GIN] 2024/04/08 - 13:03:00 | 200 |     103.664µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/08 - 13:03:00 | 200 |     4.81742ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/04/08 - 13:03:00 | 200 |    2.043975ms |       127.0.0.1 | POST     "/api/show"
⠸ time=2024-04-08T13:03:00.764+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:03:00.764+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:03:00.764+02:00 level=INFO source=llm.go:85 msg="GPU not available, falling back to CPU"
loading library /tmp/ollama1248928676/runners/cpu_avx2/libext_server.so
time=2024-04-08T13:03:00.766+02:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama1248928676/runners/cpu_avx2/libext_server.so"
time=2024-04-08T13:03:00.766+02:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/WUR/tamas002/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000,0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6,6, 6, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
⠼ llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
⠴ llm_load_tensors:        CPU buffer size =  3647.87 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
⠙ llama_kv_cache_init:        CPU KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:        CPU  output buffer size =    70.50 MiB
llama_new_context_with_model:        CPU compute buffer size =   164.00 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 1
⠹ {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140593019721472","timestamp":1712574181}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140593019721472","timestamp":1712574181}
time=2024-04-08T13:03:01.744+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140587255912192","timestamp":1712574181}
[GIN] 2024/04/08 - 13:03:01 | 200 |  1.282697676s |       127.0.0.1 | POST     "/api/chat"
>>> hello?
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1803,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":22,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
{"function":"update_slots","level":"INFO","line":1830,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574184}
Hello! It's nice to meet you. How are you today? Is there something I can help you with or would you like to chat?{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time     =    4261.18 ms /    22 tokens (  193.69 ms per token,     5.16 tokens per second)","n_prompt_tokens_processed":22,"n_tokens_second":5.162895210827999,"slot_id":0,"t_prompt_processing":4261.175,"t_token":193.68977272727273,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time =   16137.99 ms /    31 runs   (  520.58ms per token,     1.92 tokens per second)","n_decoded":31,"n_tokens_second":1.9209331521459612,"slot_id":0,"t_token":520.5803225806452,"t_token_generation":16137.99,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"print_timings","level":"INFO","line":289,"msg":"          total time =   20399.17 ms","slot_id":0,"t_prompt_processing":4261.175,"t_token_generation":16137.99,"t_total":20399.165,"task_id":0,"tid":"140587255912192","timestamp":1712574205}
{"function":"update_slots","level":"INFO","line":1634,"msg":"slot released","n_cache_tokens":53,"n_ctx":2048,"n_past":52,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140587255912192","timestamp":1712574205,"truncated":false}
[GIN] 2024/04/08 - 13:03:25 | 200 | 20.402214217s |       127.0.0.1 | POST     "/api/chat"


>>> /bye
(base) tamas002@login0:~/ai$ sinteractive -p gpu --gres=gpu:1 --accel-bind=g --cpus-per-gpu=1 --mem-per-cpu=96G
srun: job 51621252 queued and waiting for resources
srun: job 51621252 has been allocated resources
(base) tamas002@gpun203:~/ai$ ps -x
    PID TTY      STAT   TIME COMMAND
   3815 pts/0    SNs    0:00 /usr/bin/bash -i
   3844 pts/0    RN+    0:00 ps -x
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 run llama2
Error: could not connect to ollama app, is it running?
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 serve&
[1] 3856
(base) tamas002@gpun203:~/ai$ time=2024-04-08T13:04:36.181+02:00 level=INFO source=images.go:804 msg="total blobs: 19"
time=2024-04-08T13:04:36.185+02:00 level=INFO source=images.go:811 msg="total unused blobs removed: 0"
time=2024-04-08T13:04:36.186+02:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-08T13:04:36.221+02:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama123086200/runners ..."
time=2024-04-08T13:04:41.078+02:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60000]"
time=2024-04-08T13:04:41.079+02:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-08T13:04:41.079+02:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-04-08T13:04:41.080+02:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama123086200/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-08T13:04:43.008+02:00 level=INFO source=gpu.go:120 msg="Nvidia GPU detected via cudart"
time=2024-04-08T13:04:43.008+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:04:43.123+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
^C
(base) tamas002@gpun203:~/ai$ ps -x
    PID TTY      STAT   TIME COMMAND
   3815 pts/0    SNs    0:00 /usr/bin/bash -i
   3856 pts/0    SNl    0:06 ./ollama-linux-amd64 serve
   3883 pts/0    RN+    0:00 ps -x
(base) tamas002@gpun203:~/ai$ ./ollama-linux-amd64 run llama2
[GIN] 2024/04/08 - 13:05:17 | 200 |      52.452µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/04/08 - 13:05:17 | 200 |    1.755494ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/04/08 - 13:05:17 | 200 |    1.076623ms |       127.0.0.1 | POST     "/api/show"
⠹ time=2024-04-08T13:05:17.589+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:05:17.589+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=gpu.go:188 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
time=2024-04-08T13:05:17.590+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama123086200/runners/cuda_v11/libext_server.so
time=2024-04-08T13:05:17.595+02:00 level=INFO source=dyn_ext_server.go:87 msg="Loading Dynamic llm server: /tmp/ollama123086200/runners/cuda_v11/libext_server.so"
time=2024-04-08T13:05:17.595+02:00 level=INFO source=dyn_ext_server.go:147 msg="Initializing llama server"
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/WUR/tamas002/.ollama/models/blobs/sha256-8934d96d3f08982e95922b2b7a2c626a1fe873d7c3b06e8e56d7bc0a1fef9246 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000,0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6,6, 6, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
⠸ llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:        CPU buffer size =    70.31 MiB
llm_load_tensors:      CUDA0 buffer size =  3577.56 MiB
⠧ ........
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =  1024.00 MiB
llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =    70.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.00 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 2
⠸ {"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140309511649024","timestamp":1712574318}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140309511649024","timestamp":1712574318}
time=2024-04-08T13:05:18.756+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[GIN] 2024/04/08 - 13:05:18 | 200 |  1.385049293s |       127.0.0.1 | POST     "/api/chat"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140307546400512","timestamp":1712574318}
>>> hello?
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1803,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":22,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
{"function":"update_slots","level":"INFO","line":1830,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574321}
Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time     =      91.98 ms /    22 tokens (    4.18 ms per token,   239.18 tokens per second)","n_prompt_tokens_processed":22,"n_tokens_second":239.1772303276728,"slot_id":0,"t_prompt_processing":91.982,"t_token":4.181,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time =     188.19 ms /    26 runs   (    7.24ms per token,   138.16 tokens per second)","n_decoded":26,"n_tokens_second":138.1589784737684,"slot_id":0,"t_token":7.238038461538461,"t_token_generation":188.189,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
{"function":"print_timings","level":"INFO","line":289,"msg":"          total time =     280.17 ms","slot_id":0,"t_prompt_processing":91.982,"t_token_generation":188.189,"t_total":280.171,"task_id":0,"tid":"140307546400512","timestamp":1712574322}
[GIN] 2024/04/08 - 13:05:22 | 200 |  283.225416ms |       127.0.0.1 | POST     "/api/chat"


>>> {"function":"update_slots","level":"INFO","line":1634,"msg":"slot released","n_cache_tokens":48,"n_ctx":2048,"n_past":47,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140307546400512","timestamp":1712574322,"truncated":false}

>>> Send a message (/? for help)

What did you expect to see?

normal behaviour

Steps to reproduce

session pasted above

Are there any recent changes that introduced the issue?

no

OS

Linux

Architecture

amd64

Platform

No response

Ollama version

0.1.30

GPU

Nvidia

GPU info

Mon Apr 8 13:07:26 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100 80G...  Off  | 00000000:CA:00.0 Off |                    0 |
| N/A   40C    P0    64W / 300W |   5511MiB / 81920MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     3856      C   ./ollama-linux-amd64               5508MiB |
+-----------------------------------------------------------------------------+

CPU

Intel

Other software

absolutely nothing.
processor info
processor : 0-31
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
stepping : 4
microcode : 0x2007006
cpu MHz : 3066.403
cache size : 22528 KB
physical id : 1
siblings : 16
core id : 12
cpu cores : 16
apicid : 56
initial apicid : 56
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips : 4201.39
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

GiteaMirror added the needs more info label 2026-04-22 05:36:26 -05:00

@riedlc commented on GitHub (May 13, 2024):

I found your code helpful and managed to get ollama working on our university cluster using those instructions.
By default I was also only getting CPU mode.
It seems that in a Slurm environment, you actively have to request the GPU as a resource, not just a node on the GPU queue. So after I changed my srun command to:

`srun --partition=netsi_gpu --nodes=1 --pty --gres=gpu:2 --ntasks=1 --mem=50GB --time=04:00:00 /bin/bash`

(note the `--gres=gpu:2`)

I see the following in the ollama serve startup message:

```
time=2024-05-13T13:43:14.021-04:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-4590c250-e3c9-a088-e3eb-b1835350a3f3 library=cuda compute=7.0 driver=12.3 name="Tesla V100-SXM2-16GB" total="15.8 GiB" available="15.5 GiB"
time=2024-05-13T13:43:14.022-04:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-2167b8a1-e76e-f7bf-24c2-1d091b3c40ec library=cuda compute=7.0 driver=12.3 name="Tesla V100-SXM2-16GB" total="15.8 GiB" available="15.5 GiB"
```

However, when I then submit queries to the API, the inference module of the server seems to use only one of the GPUs. Any ideas?

```
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
```
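
A quick way to check what the Slurm allocation actually exposes, before digging into ollama itself (a minimal sketch, assuming standard Slurm and NVIDIA tooling on the compute node):

```bash
# run inside the srun/sbatch allocation on the compute node
echo "$CUDA_VISIBLE_DEVICES"   # GPU indices Slurm typically grants to the job, e.g. "0,1"
nvidia-smi -L                  # GPUs the driver actually exposes to processes in this job
```

If both V100s show up here but `ggml_cuda_init` still reports only one device, the limitation is on the serving side rather than in the Slurm allocation.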

@dhiltgen commented on GitHub (Jun 1, 2024):

@bozo32 it's a little hard to understand what's going on when you run the server and client in the same terminal and their output is jumbled together.

Is there a reason you chose not to use the install script so it runs as a system service? If not, please install via https://github.com/ollama/ollama/blob/main/docs/linux.md#install, which will set up the server to run as a service so you don't have to worry about it. If there's a reason you're running it manually, can you open two different terminals/login sessions, or do something like `ollama serve &> server.log &` so the output is isolated?
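
For example, with the binary name from the original report, that could look like:

```bash
# start the server with its output redirected to a log file
./ollama-linux-amd64 serve &> server.log &

# the interactive client now prints only the model's responses
./ollama-linux-amd64 run llama2

# inspect the server-side logs separately when needed
tail -f server.log
```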

I don't know what this output refers to: `sinteractive -p gpu --gres=gpu:1 --accel-bind=g --cpus-per-gpu=1 --mem-per-cpu=96G`. It seems that after that we did detect the GPU correctly, but it doesn't look like you sent any prompts at that point.


@bozo32 commented on GitHub (Jun 2, 2024):

The HPC folks, wisely, don't give us root.
Since then they have created scripts to set ollama up and create a tunnel in a SLURM environment.

`start_ollama.sh`, which runs on the compute node:

```bash
#!/bin/bash
#A script to start ollama on a free port, and show info on that to the user
#Author - Jan van Haarst
#20240426

set -o errexit
set -o pipefail
set -o nounset

# Get script dir
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

# Function to get free port
free_port () {
  START="${1:-2000}"
  END="${2:-65535}"
  comm --nocheck-order -13 <(ss --tcp --udp --listening --numeric | awk '{print $5}' | sed 's/.*://' | sort -nu | awk '(NR>1) && ($1 >= '"$START"' )' ) <(seq "$START" "$END" | sort -n) | head -1
}

PORT=$(free_port 11435 11450)
export OLLAMA_HOST=$(hostname -f):${PORT}

# Load latest ollama
ml ollama

# Print info on how to set up a tunnel
echo ------------------------------------------------------------------------------
echo Use this on your host to setup a tunnel to the running instance:
echo ssh -L 11434:"${OLLAMA_HOST}" "${USER}"@login.anunna.wur.nl

# Print info on what URL we have
echo "Then connect to http://localhost:11434"
echo "(Stop instance by pressing CTRL-C twice)"
echo ------------------------------------------------------------------------------

# start ollama
ollama-linux-amd64 serve
```

And the wrapper script that asks the user for resources and then sruns it:

```bash
#!/bin/bash
#A script to gather resource settings from the user for ollama
#Author - Jan van Haarst
#20240426

set -o errexit
set -o pipefail
set -o nounset

# Get script dir
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

# Colors
export NEWT_COLORS='
window=,red
border=white,red
textbox=white,red
button=black,white
'

# Time
HOURS=$(whiptail --inputbox "How long do you want to run the backend (in hours) ?" 0 0 1 3>&1 1>&2 2>&3):0:0
WAIT=$(whiptail --inputbox "How long do you want to wait for an available slot (in seconds) ?" 0 0 60 3>&1 1>&2 2>&3)

# How many GPUs ?
GPU=$(whiptail --notags --title "How many GPUs do you need ?" --radiolist "Choose an option" 0 0 0 \
  "0" "0" "off" \
  "1" "1" "on" \
  "2" "2" "off" \
  "3" "3" "off" \
  "4" "4" "off" 3>&1 1>&2 2>&3)

# How many cores ?
if [ $GPU == 0 ]
then
  CPU=$(whiptail --inputbox "How many CPU cores do you need ?" 0 0 8 3>&1 1>&2 2>&3)
else
  CPU=$(whiptail --inputbox "How many CPU cores do you need per GPU ?" 0 0 8 3>&1 1>&2 2>&3)
fi

# How much RAM ?
MEM_PER_CPU=$(whiptail --inputbox "How much RAM do you need per CPU core (in Gbyte) ?" 0 0 8 3>&1 1>&2 2>&3)G

if [ $GPU == 0 ]
then
  PARTITION=""
  GRES=""
  CPU="--cpus-per-task="${CPU}
else
  PARTITION="--partition=gpu"
  GRES="--gres=gpu:"${GPU}
  CPU="--cpus-per-gpu="${CPU}
fi

echo Now running:
echo srun --immediate="${WAIT}" --nodes=1 --ntasks=1 "$PARTITION" "$GRES" --time="${HOURS}" --mem-per-cpu="${MEM_PER_CPU}" "$CPU" "${SCRIPT_DIR}"/start_ollama.sh
echo
srun --immediate="${WAIT}" --nodes=1 --ntasks=1 "$PARTITION" "$GRES" --time="${HOURS}" --mem-per-cpu="${MEM_PER_CPU}" "$CPU" "${SCRIPT_DIR}"/start_ollama.sh
```

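For anyone adapting these scripts, the intended flow appears to be roughly the following (a sketch only; the wrapper filename, node name, and port are made up for illustration):

```bash
# on the login node: run the interactive wrapper, which asks for GPUs/CPUs/RAM
# and then sruns start_ollama.sh on a compute node
./request_ollama.sh

# on your own machine: open the tunnel that start_ollama.sh prints, e.g.
ssh -L 11434:node042.anunna.wur.nl:11436 tamas002@login.anunna.wur.nl

# then talk to the tunnelled instance as if it were local
curl http://localhost:11434/api/tags
```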

@dhiltgen commented on GitHub (Jun 2, 2024):

So is everything working OK now, or do you still have a problem? If you still have a problem, can you clarify what the problem is?


@jjxyhb commented on GitHub (Feb 28, 2025):

> I found your code helpful and managed to get ollama working on our university cluster using those instructions. By default I was also only getting CPU mode. It seems that in a Slurm environment, you actively have to request the GPU as a resource, not just a node on the GPU queue. So I changed my srun command to `srun --partition=netsi_gpu --nodes=1 --pty --gres=gpu:2 --ntasks=1 --mem=50GB --time=04:00:00 /bin/bash` (note the `--gres=gpu:2`).
>
> I see the following in the ollama serve startup message:
>
> ```
> time=2024-05-13T13:43:14.021-04:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-4590c250-e3c9-a088-e3eb-b1835350a3f3 library=cuda compute=7.0 driver=12.3 name="Tesla V100-SXM2-16GB" total="15.8 GiB" available="15.5 GiB"
> time=2024-05-13T13:43:14.022-04:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-2167b8a1-e76e-f7bf-24c2-1d091b3c40ec library=cuda compute=7.0 driver=12.3 name="Tesla V100-SXM2-16GB" total="15.8 GiB" available="15.5 GiB"
> ```
>
> However, when I then submit queries to the API, the inference module of the server seems to use only one of the GPUs. Any ideas?
>
> ```
> ggml_cuda_init: found 1 CUDA devices:
>   Device 0: Tesla V100-SXM2-16GB, compute capability 7.0, VMM: yes
> ```

Hello, could you briefly explain how to set up deepseek with ollama on a specific node of the cluster? Thank you very much.

Reference: github-starred/ollama#27942