[GH-ISSUE #5494] H100s (via Vast.ai) generate GPU warning + fetching/loading models appears very slow #65474

Closed
opened 2026-05-03 21:25:43 -05:00 by GiteaMirror · 11 comments
Owner

Originally created by @wkoszek on GitHub (Jul 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5494

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

I tried a 1xH100 box and got a GPU warning during installation. I got the same output from another, bigger 2xH100 box too:

root@C.11391672:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%##O#-#                        ######################################################################## 100.0%
>>> Installing ollama to /usr/local/bin...
>>> Creating ollama user...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
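
The warning itself names the missing tools. Assuming the Vast.ai image is Debian/Ubuntu based, installing them before re-running the installer should let it detect the GPUs, roughly:

apt-get update && apt-get install -y pciutils lshw    # pciutils provides lspci
curl -fsSL https://ollama.com/install.sh | sh         # re-run the installer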

Cards:

root@C.11391672:~$ nvidia-smi
Fri Jul  5 03:11:36 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 PCIe               On  |   00000000:81:00.0 Off |                    0 |
| N/A   35C    P0             47W /  350W |       0MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 PCIe               On  |   00000000:C1:00.0 Off |                    0 |
| N/A   36C    P0             48W /  350W |       0MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Afterwards it seems like Ollama is OKish:

2024/07/05 03:18:08 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-05T03:18:08.637Z level=INFO source=images.go:730 msg="total blobs: 0"
time=2024-07-05T03:18:08.637Z level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-05T03:18:08.637Z level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-05T03:18:08.637Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama905009637/runners
time=2024-07-05T03:18:11.747Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-05T03:18:13.034Z level=INFO source=types.go:98 msg="inference compute" id=GPU-fdd47323-bd9b-906a-1ccf-cb58ff4b3a69 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"
time=2024-07-05T03:18:13.034Z level=INFO source=types.go:98 msg="inference compute" id=GPU-2bdee957-ed23-55e0-868a-2d6df4880e54 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"

Downloading the bigger 70b model is unpredictable. On two boxes I had to restart the download. The ollama pull worked in the end, however, and since Vast.ai appears to have boxes scattered around the world, I assume it could be transient Internet problems.
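
The "stalled; retrying" messages in the log below just suggest re-running ollama pull, so a crude retry loop like this (simply automating what the log message asks for) should get past the flaky connections:

until ollama pull llama3:70b; do    # keep retrying the pull until it succeeds
    echo "pull failed, retrying in 10s"
    sleep 10
done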

But then when I try ollama run llama3:70b, this output sits there for a very long time:

2024/07/05 03:18:08 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-05T03:18:08.637Z level=INFO source=images.go:730 msg="total blobs: 0"
time=2024-07-05T03:18:08.637Z level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-05T03:18:08.637Z level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-05T03:18:08.637Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama905009637/runners
time=2024-07-05T03:18:11.747Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-05T03:18:13.034Z level=INFO source=types.go:98 msg="inference compute" id=GPU-fdd47323-bd9b-906a-1ccf-cb58ff4b3a69 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"
time=2024-07-05T03:18:13.034Z level=INFO source=types.go:98 msg="inference compute" id=GPU-2bdee957-ed23-55e0-868a-2d6df4880e54 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"
[GIN] 2024/07/05 - 03:18:52 | 200 |      42.692µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/05 - 03:18:52 | 404 |     169.287µs |       127.0.0.1 | POST     "/api/show"
time=2024-07-05T03:18:53.416Z level=INFO source=download.go:136 msg="downloading 0bd51f8f0c97 in 64 624 MB part(s)"
time=2024-07-05T03:19:16.417Z level=INFO source=download.go:251 msg="0bd51f8f0c97 part 0 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-07-05T03:19:22.416Z level=INFO source=download.go:251 msg="0bd51f8f0c97 part 29 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-07-05T03:20:51.417Z level=INFO source=download.go:251 msg="0bd51f8f0c97 part 33 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
[GIN] 2024/07/05 - 03:21:15 | 200 |         2m23s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/07/05 - 03:21:25 | 200 |      37.251µs |       127.0.0.1 | HEAD     "/"
time=2024-07-05T03:21:25.762Z level=INFO source=download.go:136 msg="downloading 0bd51f8f0c97 in 64 624 MB part(s)"
time=2024-07-05T03:22:26.763Z level=INFO source=download.go:251 msg="0bd51f8f0c97 part 34 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-07-05T03:24:56.633Z level=INFO source=download.go:136 msg="downloading 4fa551d4f938 in 1 12 KB part(s)"
time=2024-07-05T03:24:58.641Z level=INFO source=download.go:136 msg="downloading 8ab4849b038c in 1 254 B part(s)"
time=2024-07-05T03:25:00.477Z level=INFO source=download.go:136 msg="downloading 577073ffcc6c in 1 110 B part(s)"
time=2024-07-05T03:25:02.359Z level=INFO source=download.go:136 msg="downloading ea8e06d28e47 in 1 486 B part(s)"
[GIN] 2024/07/05 - 03:25:27 | 200 |          4m2s |       127.0.0.1 | POST     "/api/pull"
[GIN] 2024/07/05 - 03:27:11 | 200 |      25.411µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/05 - 03:27:11 | 200 |   31.092966ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-05T03:27:12.294Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[78.7 GiB]" memory.required.full="38.5 GiB" memory.required.partial="38.5 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[38.5 GiB]" memory.weights.total="36.5 GiB" memory.weights.repeating="35.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-07-05T03:27:12.294Z level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama905009637/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 1 --port 42741"
time=2024-07-05T03:27:12.294Z level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-07-05T03:27:12.294Z level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-07-05T03:27:12.295Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="7c26775" tid="139743846481920" timestamp=1720150032
INFO [main] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139743846481920" timestamp=1720150032 total_threads=48
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="42741" tid="139743846481920" timestamp=1720150032
llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /root/.ollama/models/blobs/sha256-0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 80
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type q4_0:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-05T03:27:12.546Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA H100 PCIe, compute capability 9.0, VMM: yes
llm_load_tensors: ggml ctx size =    0.74 MiB
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors:        CPU buffer size =   563.62 MiB
llm_load_tensors:      CUDA0 buffer size = 37546.98 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   640.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.52 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   324.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    20.01 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 2

^^ This sat there for a long, long time. Then I got:

llama_new_context_with_model: graph splits = 2
[GIN] 2024/07/05 - 03:34:19 | 200 |      29.761µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/05 - 03:34:19 | 200 |     103.763µs |       127.0.0.1 | GET      "/api/ps"
time=2024-07-05T03:34:25.550Z level=ERROR source=sched.go:388 msg="error loading llama server" error="timed out waiting for llama runner to start - progress 1.00 - "
[GIN] 2024/07/05 - 03:34:25 | 500 |         7m13s |       127.0.0.1 | POST     "/api/chat"

When I restarted it a second time, it worked:


llama_new_context_with_model: graph splits = 2
time=2024-07-05T03:35:17.850Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="139733853585408" timestamp=1720150554
time=2024-07-05T03:35:54.246Z level=INFO source=server.go:599 msg="llama runner started in 41.84 seconds"
[GIN] 2024/07/05 - 03:35:54 | 200 | 42.582047329s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/05 - 03:36:17 | 200 |  3.434329762s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

ollama version is 0.1.48

GiteaMirror added the performance, bug, nvidia labels 2026-05-03 21:25:44 -05:00
Author
Owner

@wkoszek commented on GitHub (Jul 5, 2024):

Benchmarking

A benchmark that I borrowed and adapted from

https://github.com/aidatatools/ollama-benchmark/blob/main/llm_benchmark/query_llm.py

shows unexpected results:

  • On the 4x4090 box the model seemed to get split evenly across the 4 GPUs
  • The 1xH100 was, as expected, faster
  • The 2xH100 result is very slow, and it looks like the model is loaded onto only one H100.

On the 2xH100 box it also looks like the model is loaded onto only one card:

root@C.11391672:~$ nvidia-smi
Fri Jul  5 03:51:01 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 PCIe               On  |   00000000:81:00.0 Off |                    0 |
| N/A   37C    P0             47W /  350W |   39125MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 PCIe               On  |   00000000:C1:00.0 Off |                    0 |
| N/A   36C    P0             48W /  350W |       3MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

Benchmark code

#!/usr/bin/env python3

import requests
import json

headers = {
    'Content-Type': 'application/json',
}

d = {
    "model": "llama3:70b",
    # "model": "llama3:8b",
    "prompt": "Why is the sky blue?",
    "stream": False,
}

# The request above is equivalent to:
# curl http://localhost:11434/api/generate -d '{
#   "model": "llama3:8b",
#   "prompt": "Why is the sky blue?",
#   "stream": false
# }'

data = json.dumps(d)
print(data)

response = requests.post('http://localhost:11434/api/generate', headers=headers, data=data)
print("resp", response.text)

jsonResponse = response.json()

print("js", jsonResponse)

# Durations in the response are reported in nanoseconds; convert them to milliseconds.
model = jsonResponse.get("model", "")
total_duration = float(jsonResponse.get("total_duration", 0)) / (10**6)
load_duration = float(jsonResponse.get("load_duration", 0)) / (10**6)
prompt_eval_count = int(jsonResponse.get("prompt_eval_count", 0))
prompt_eval_duration = float(jsonResponse.get("prompt_eval_duration", 0)) / (10**6)
eval_count = int(jsonResponse.get("eval_count", 0))
eval_duration = float(jsonResponse.get("eval_duration", 0)) / (10**6)

print(f"model = {model}")

print(f"{'total_duration time': >20} = {total_duration:10.2f} ms")
print(f"{'load_duration time': >20} = {load_duration:10.2f} ms")

print(f"{'prompt eval time ': >20} = {prompt_eval_duration:10.2f} ms / {prompt_eval_count:>6} tokens")
print(f"{'eval time ': >20} = {eval_duration:10.2f} ms / {eval_count:>6} tokens ")
# tokens/s = generated tokens / generation time (eval_duration is in ms here)
print(f"Performance: {eval_count/eval_duration*1000:10.2f}(tokens/s)")

Benchmark results

Output from a 4x4090 box:

model = llama3:70b
 total_duration time =   19983.39 ms
  load_duration time =       1.52 ms
   prompt eval time  =      53.19 ms /      0 tokens
          eval time  =   19927.12 ms /    389 tokens
Performance:      19.52(tokens/s)

LLM speed 1xH100 SXM

model = llama3:70b
 total_duration time =   11072.88 ms
  load_duration time =       1.10 ms
   prompt eval time  =      27.58 ms /      0 tokens
          eval time  =   11000.60 ms /    388 tokens
Performance:      35.27(tokens/s)

LLM speed 2xH100 SXM:

model = llama3:70b
 total_duration time =   44201.61 ms
  load_duration time =       1.62 ms
   prompt eval time  =     119.56 ms /      0 tokens
          eval time  =   44036.56 ms /    351 tokens
Performance:       7.97(tokens/s)
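
While the benchmark runs, watching per-GPU memory and utilisation makes the imbalance easy to see, for example:

watch -n 1 nvidia-smi    # refresh every second; only GPU 0 shows the ~39 GiB of model weights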
Author
Owner

@jmorganca commented on GitHub (Jul 5, 2024):

Hi there! Wow, that's an awesome machine. Sorry about the performance issues, we'll take a look! cc @dhiltgen

Author
Owner

@wkoszek commented on GitHub (Jul 5, 2024):

I forced model spreading, and it appeared to work. However, the benchmark result is the same.

root@C.11391672:~$ nvidia-smi
Fri Jul  5 03:55:29 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100 PCIe               On  |   00000000:81:00.0 Off |                    0 |
| N/A   36C    P0             47W /  350W |   20163MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100 PCIe               On  |   00000000:C1:00.0 Off |                    0 |
| N/A   37C    P0             48W /  350W |   20051MiB /  81559MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

Loading:

root@C.11391672:~$ export OLLAMA_SCHED_SPREAD=yes
root@C.11391672:~$ ollama serve
2024/07/05 03:55:07 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-05T03:55:07.956Z level=INFO source=images.go:730 msg="total blobs: 5"
time=2024-07-05T03:55:07.956Z level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-05T03:55:07.956Z level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-05T03:55:07.957Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama929624685/runners
time=2024-07-05T03:55:10.962Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-05T03:55:12.270Z level=INFO source=types.go:98 msg="inference compute" id=GPU-fdd47323-bd9b-906a-1ccf-cb58ff4b3a69 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"
time=2024-07-05T03:55:12.270Z level=INFO source=types.go:98 msg="inference compute" id=GPU-2bdee957-ed23-55e0-868a-2d6df4880e54 library=cuda compute=9.0 driver=12.4 name="NVIDIA H100 PCIe" total="79.1 GiB" available="78.7 GiB"
time=2024-07-05T03:55:15.920Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=41,40 memory.available="[78.7 GiB 78.7 GiB]" memory.required.full="41.3 GiB" memory.required.partial="41.3 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[21.0 GiB 20.2 GiB]" memory.weights.total="36.5 GiB" memory.weights.repeating="35.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2024-07-05T03:55:15.920Z level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama929624685/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 1 --tensor-split 41,40 --tensor-split 41,40 --port 45485"
time=2024-07-05T03:55:15.921Z level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-07-05T03:55:15.921Z level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-07-05T03:55:15.921Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="7c26775" tid="139707075055616" timestamp=1720151715
INFO [main] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139707075055616" timestamp=1720151715 total_threads=48
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="45485" tid="139707075055616" timestamp=1720151715
llama_model_loader: loaded meta data with 22 key-value pairs and 723 tensors from /root/.ollama/models/blobs/sha256-0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Meta-Llama-3-70B-Instruct
llama_model_loader: - kv   2:                          llama.block_count u32              = 80
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  12:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type q4_0:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-05T03:55:16.172Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name     = Meta-Llama-3-70B-Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA H100 PCIe, compute capability 9.0, VMM: yes
  Device 1: NVIDIA H100 PCIe, compute capability 9.0, VMM: yes
llm_load_tensors: ggml ctx size =    1.10 MiB
time=2024-07-05T03:55:17.627Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-05T03:55:17.987Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors:        CPU buffer size =   563.62 MiB
llm_load_tensors:      CUDA0 buffer size = 18821.56 MiB
llm_load_tensors:      CUDA1 buffer size = 18725.42 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   328.00 MiB
llama_kv_cache_init:      CUDA1 KV buffer size =   312.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.52 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model:      CUDA0 compute buffer size =   400.01 MiB
llama_new_context_with_model:      CUDA1 compute buffer size =   400.02 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    32.02 MiB
llama_new_context_with_model: graph nodes  = 2566
llama_new_context_with_model: graph splits = 3
INFO [main] model loaded | tid="139707075055616" timestamp=1720151723
time=2024-07-05T03:55:23.600Z level=INFO source=server.go:599 msg="llama runner started in 7.68 seconds"
[GIN] 2024/07/05 - 03:56:13 | 200 | 58.013869455s |       127.0.0.1 | POST     "/api/generate"

Benchmark result

model = llama3:70b
 total_duration time =   57970.90 ms
  load_duration time =    8394.79 ms
   prompt eval time  =     328.10 ms /     16 tokens
          eval time  =   49204.53 ms /    390 tokens
Performance:       7.93(tokens/s)
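For reference, the tokens/s figure is just the eval token count divided by the eval time: 390 tokens / 49.2 s ≈ 7.93 tokens/s, so roughly 49 of the 58 seconds of total request time are spent in generation rather than prompt evaluation or model load.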

@wkoszek commented on GitHub (Jul 5, 2024):

One thing I wondered about: is it OK that even on a working 1xH100 setup, the tokens/s figure is an order of magnitude slower?

Reading NVIDIA's published numbers:

https://github.com/NVIDIA/TensorRT-LLM/blob/71d8d4d3dc655671f32535d6d2b60cab87f36e87/docs/source/performance.md

Should I just assume that their TensorRT runtime is so much better?


@dhiltgen commented on GitHub (Jul 5, 2024):

There are a few different topics in here...

Install warning

Our intent is to try to get the right dependencies installed so it "just works" but we need to discover what GPUs are present, hence using lspci or lshw. Without those tools we can't discover the GPUs in our current install script. In your case, the driver was already installed, so this was moot.
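For anyone hitting the same warning on a fresh cloud image, installing either tool before running the installer lets the script detect the GPUs. A minimal sketch, assuming a Debian/Ubuntu-based image with a root shell (as on the Vast.ai instance above):

# install the PCI/hardware enumeration tools the install script looks for
apt-get update
apt-get install -y pciutils lshw   # pciutils provides lspci
# then re-run the Ollama installer; it should now detect the NVIDIA GPUs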

Recoverable Pull Errors

We're actively working on both the cloud servers and the client to improve resiliency on pulls.

Favoring single GPU

Our current algorithm for loading models in a multi-GPU setup is to try to fit the model onto a single GPU if we can. What we've found in testing is that the cross-GPU memory bandwidth generally imposes a larger performance penalty than the benefit gained from using the compute of multiple GPUs. We support forcing a spread algorithm with the new OLLAMA_SCHED_SPREAD setting.
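As the reporter's session above shows, the simplest way to exercise this is to export the variable before starting the server; a minimal sketch (the reporter used yes as the value; for a systemd-managed install the equivalent would be an Environment= override added via systemctl edit ollama.service):

# force Ollama to spread the model across all visible GPUs instead of packing it onto one
export OLLAMA_SCHED_SPREAD=yes
ollama serve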

Slower Performance on newer GPUs

We currently build with CUDA v11 to maximize our support matrix for older GPUs, but this does mean features of newer GPUs aren't being taken advantage of yet, which is likely related to the poorer performance you're seeing. I'm working on a change to ship both a v11 and a v12 variation so that for modern GPUs we can switch to the newer CUDA library, which should provide a performance boost. #5049


@dhiltgen commented on GitHub (Jul 5, 2024):

Slow model loading on some Cloud instances

We've been adjusting our algorithm on when to use mmap vs. regular file read, and it may still need some adjusting for large cloud instances. As you noticed, the first load with mmap can be slow, but the file winds up getting warmed up in the cache, so subsequent loads are very fast. For low memory systems, we switch to file read as that seems to perform better and avoid thrashing. For large memory cloud instances though it seems this may still not be the optimal algorithm. I'm curious how much system memory this instance has, and how the performance changes if you set use_mmap: false in the request.
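A minimal sketch of such a request against the local API, reusing the llama3:70b model from the logs above (use_mmap is passed through the options map to the runner):

curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3:70b",
  "prompt": "Why is the sky blue?",
  "options": { "use_mmap": false }
}'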


@wkoszek commented on GitHub (Jul 6, 2024):

@dhiltgen @jmorganca Thanks. That's super helpful.


@wkoszek commented on GitHub (Jul 8, 2024):

@dhiltgen @jmorganca Let me know if you need any help with this 2xH100. I was trying Ollama in the hope that I could get decent-ish performance on the 2-card box first, because the end goal was to try an 8xH100 box.


@jslin commented on GitHub (Jul 22, 2024):

Build Ollama from source under Linux to support the NVIDIA H100:

  1. First, install golang (https://go.dev/doc/install), cmake, and the NVIDIA CUDA Toolkit (https://developer.nvidia.com/cuda-downloads), which requires a system administrator account. Because we need to support the NVIDIA H100, install CUDA 12 or later (toolkit and runtime).
  2. You can customize the set of target CUDA architectures by setting the CMAKE_CUDA_ARCHITECTURES environment variable to "50;60;70;80;90", where 90 refers to the H100. Enter export CMAKE_CUDA_ARCHITECTURES="50;60;70;80;90" in the shell.
  3. Download the ollama source code from GitHub: git clone https://github.com/ollama/ollama.git
  4. Switch to the ollama subdirectory.
  5. Enter go generate ./... to link the related packages and create the configuration files for compilation. This step takes a while.
  6. Switch to the version subdirectory and edit the version number in the version.go file, for example: "0.2.7 - H100"
  7. Switch back to the previous directory: cd ..
  8. To build the code, enter go build .
  9. After a successful compilation, you will see an executable file named ollama in this directory.
  10. Move it to /usr/local/bin, or make a symbolic link pointing to it.
  11. Restart ollama and enter ollama --version to check that it reports the version number you set. If so, congratulations. (A consolidated shell sketch of these steps follows below.)
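
For convenience, a consolidated shell sketch of the steps above (the version string and install path are just the examples from the list; this mirrors the go generate build flow of Ollama releases of that era):

# build Ollama from source with Hopper (compute capability 9.0) support
export CMAKE_CUDA_ARCHITECTURES="50;60;70;80;90"   # 90 covers the H100
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...            # generates and compiles the GPU runners; takes a while
# optionally edit version/version.go to tag the build, e.g. "0.2.7 - H100"
go build .
install -m 0755 ollama /usr/local/bin/ollama   # or symlink it onto your PATH
ollama --version             # should report the version string you set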

Benchmark Result

Python 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
OS: Linux, PyTorch version: 2.3.1.post300
Current Device: NVIDIA H100 NVL, CUDA: cuda 0, GPUs: 1
model = llama3:70b
 total_duration time =   16672.96 ms
  load_duration time =    7700.90 ms
   prompt eval time  =      72.30 ms /     16 tokens
          eval time  =    8896.52 ms /    342 tokens 
Performance:      38.44(tokens/s)

I wrote a blog about this here: https://hackmd.io/@jslin09/SJ4hAu9dA


@dhiltgen commented on GitHub (Jul 22, 2024):

Thanks @jslin!

Once #5049 merges, the CC 9.0 architecture will get built into our official builds with a new v12 runner.


@dhiltgen commented on GitHub (Sep 12, 2024):

@wkoszek can you give the latest version a try and let us know if you're still seeing performance problems, or if we can close this one out.

Reference: github-starred/ollama#65474