[GH-ISSUE #6382] cuda error out of memory #4008

Open
opened 2026-04-12 14:52:45 -05:00 by GiteaMirror · 14 comments

Originally created by @qazimurtazafair on GitHub (Aug 15, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6382

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hello Team,

Below is the attached server log. I am trying to run llama3.1 70B on a 5700X, 23GB RAM, and a P100 16GB.

The model loads successfully, but as soon as the prompt is sent, within seconds I receive the error:

"Error: error reading llm response: read tcp 127.0.0.1:49245->127.0.0.1:49210: wsarecv: An existing connection was forcibly closed by the remote host."

I have set OLLAMA_MAX_VRAM in the environment variables, but it does not appear in the server logs below.

The normal-size llama3.1 is working fine; anything larger results in the same error.

2024/08/16 07:56:25 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Dummy\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\Dummy\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-16T07:56:25.534+10:00 level=INFO source=images.go:782 msg="total blobs: 35"
time=2024-08-16T07:56:25.537+10:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-16T07:56:25.539+10:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-16T07:56:25.540+10:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v6.1 cpu cpu_avx cpu_avx2 cuda_v11.3]"
time=2024-08-16T07:56:25.540+10:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-16T07:56:25.692+10:00 level=INFO source=gpu.go:288 msg="detected OS VRAM overhead" id=GPU-1c56ec58-85cd-2097-8b24-bca0994cb6a5 library=cuda compute=6.0 driver=12.4 name="Tesla P100-PCIE-16GB" overhead="254.6 MiB"
time=2024-08-16T07:56:25.693+10:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-1c56ec58-85cd-2097-8b24-bca0994cb6a5 library=cuda compute=6.0 driver=12.4 name="Tesla P100-PCIE-16GB" total="15.9 GiB" available="15.6 GiB"
[GIN] 2024/08/16 - 07:56:25 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2024/08/16 - 07:56:25 | 200 |     18.1911ms |       127.0.0.1 | POST     "/api/show"
time=2024-08-16T07:56:26.010+10:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=29 layers.split="" memory.available="[15.6 GiB]" memory.required.full="39.3 GiB" memory.required.partial="15.2 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[15.2 GiB]" memory.weights.total="36.5 GiB" memory.weights.repeating="35.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-08-16T07:56:26.022+10:00 level=INFO source=server.go:393 msg="starting llama server" cmd="C:\\Users\\Dummy\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Dummy\\.ollama\\models\\blobs\\sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --no-mmap --parallel 1 --port 49305"
time=2024-08-16T07:56:26.026+10:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-16T07:56:26.026+10:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-16T07:56:26.026+10:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="20688" timestamp=1723758986
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="20688" timestamp=1723758986 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="49305" tid="20688" timestamp=1723758986
llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from C:\Users\Dummy\.ollama\models\blobs\sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 70B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 80
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  162 tensors
llama_model_loader: - type q4_0:  561 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-08-16T07:56:26.287+10:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 70.55 B
llm_load_print_meta: model size       = 37.22 GiB (4.53 BPW) 
llm_load_print_meta: general.name     = Meta Llama 3.1 70B Instruct
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: Tesla P100-PCIE-16GB, compute capability 6.0, VMM: no
llm_load_tensors: ggml ctx size =    0.68 MiB
llm_load_tensors: offloading 29 repeating layers to GPU
llm_load_tensors: offloaded 29/81 layers to GPU
llm_load_tensors:  CUDA_Host buffer size = 24797.81 MiB
llm_load_tensors:      CUDA0 buffer size = 13312.82 MiB

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.3.6

GiteaMirror added the memory, nvidia, bug labels 2026-04-12 14:52:45 -05:00

@eust-w commented on GitHub (Aug 15, 2024):

Ensure your system has enough resources. Llama 70B is very resource-intensive. Your P100 GPU has 16GB of VRAM, which might not be sufficient for such a large model. The server log indicates that the required memory is much higher than what is available:

Memory required: 39.3 GiB (full), 15.2 GiB (partial), 640.0 MiB (kv)
Memory available: 15.6 GiB


@qazimurtazafair commented on GitHub (Aug 15, 2024):

> Ensure your system has enough resources. Llama 70B is very resource-intensive. Your P100 GPU has 16GB of VRAM, which might not be sufficient for such a large model. The server log indicates that the required memory is much higher than what is available:
>
> Memory required: 39.3 GiB (full), 15.2 GiB (partial), 640.0 MiB (kv)
> Memory available: 15.6 GiB

Will it not load it partially? I get a similar error for gemma2:27b

I saw some closed bugs which were resolved by using OLLAMA_MAX_VRAM


@eust-w commented on GitHub (Aug 15, 2024):

The setting of OLLAMA_MAX_VRAM should not exceed the size of the physical video memory. It is recommended to be slightly lower than the physical video memory to ensure system stability and normal operation of the model. Forcibly setting a higher OLLAMA_MAX_VRAM may cause program errors. If you need more video memory, the only solution is to upgrade to a GPU with more video memory.


@eust-w commented on GitHub (Aug 15, 2024):

Use nvidia-smi to get information about your available physical video memory.
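For example, one way to query total and free VRAM per GPU (standard nvidia-smi query flags; the specific fields chosen here are just a reasonable guess at what is useful):

nvidia-smi --query-gpu=name,memory.total,memory.free,memory.used --format=csv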


@rick-github commented on GitHub (Aug 15, 2024):

I don't think OLLAMA_MAX_VRAM is a supported variable in the current code base. It may have been used in the past, but now it just sets the value of MaxVRAM, which is not referenced anywhere else in the code base as far as I can tell. However, the amount of VRAM used by a model can be controlled by setting the number of layers to be offloaded to the GPU with num_gpu, either in the CLI with /set parameter num_gpu xx, via the API with curl localhost:11434/api/generate -d '{"model":"model-name","options":{"num_gpu":xx}}', or by creating a new model by setting PARAMETER num_gpu xx in a Modelfile.
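As a concrete sketch of the first two options (the model name and layer count below are placeholders, not values taken from this issue), interactively at the >>> prompt:

/set parameter num_gpu 20

or per request via the API (options is a JSON object; with no prompt this call just loads the model):

curl localhost:11434/api/generate -d '{"model":"llama3.1:70b","options":{"num_gpu":20}}'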


@qazimurtazafair commented on GitHub (Aug 15, 2024):

> I don't think OLLAMA_MAX_VRAM is a supported variable in the current code base. It may have been used in the past, but now it just sets the value of MaxVRAM, which is not referenced anywhere else in the code base as far as I can tell. However, the amount of VRAM used by a model can be controlled by setting the number of layers to be offloaded to the GPU with num_gpu, either in the CLI with /set parameter num_gpu xx, via the API with curl localhost:11434/api/generate -d '{"model":"model-name","options":{"num_gpu":xx}}', or by creating a new model by setting PARAMETER num_gpu xx in a Modelfile.

(screenshot: https://github.com/user-attachments/assets/dc00a9dc-76b5-4914-8d26-069d7b14ad54)

The model loads perfectly, but when I send a prompt, it crashes; I have seen people use heavier models with less VRAM.

Also, could I set num_gpu via environment variables? I cannot seem to find any proper documentation on the usage of this parameter.


@rick-github commented on GitHub (Aug 15, 2024):

How are you loading the model? How do you send a prompt?


@qazimurtazafair commented on GitHub (Aug 15, 2024):

> How are you loading the model? How do you send a prompt?

From the cmd CLI:

ollama run gemma2:27b


@rick-github commented on GitHub (Aug 15, 2024):

OK, after the model is loaded, check the logs for a line that says llm_load_tensors: offloaded 29/81 layers to GPU. The first number is the number of layers offloaded for this model. Each model will be different, so if you load llama3.1:70b, you need to find this number again.

Now, at the >>> prompt of the cmd cli, type

/set parameter num_gpu 20

where 20 is some number less than the one you found in the log. ollama will reload the model with a smaller VRAM footprint, and now you can send a normal prompt to see if it works any better. If it does, it means that ollama is over-optimistic in its memory calculations and is offloading too much to the GPU. The ollama team have mentioned in some recent tickets that they are looking into the issue. You can also try enabling flash attention (OLLAMA_FLASH_ATTENTION=1), as that makes more efficient use of VRAM, but it is not supported for all models (although I think llama3.1 and gemma2 are supported).
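On Windows, one way to try flash attention (an assumption about the setup; the variable has to be visible to the Ollama server process, so quit and restart Ollama after setting it) is:

setx OLLAMA_FLASH_ATTENTION 1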


@qazimurtazafair commented on GitHub (Aug 15, 2024):

llm_load_tensors: ggml ctx size =    0.45 MiB
llm_load_tensors: offloading 42 repeating layers to GPU
llm_load_tensors: offloaded 42/47 layers to GPU

It worked, I set it to 32.

(screenshot: https://github.com/user-attachments/assets/6c351e8d-a259-48f7-b58a-239f7ab1f959)

Awesome, thank you so much


@rick-github commented on GitHub (Aug 15, 2024):

See https://github.com/ollama/ollama/issues/5913#issuecomment-2248262520 for a way to change the default num_gpu value for a model, so that you don't need to /set parameter num_gpu xx every time you load a model.
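The linked approach amounts to creating a derivative model with the parameter baked in. A minimal sketch, assuming a Modelfile in the current directory (the model and tag names here are illustrative, not from this issue):

FROM gemma2:27b
PARAMETER num_gpu 32

then:

ollama create gemma2-32layers -f Modelfile
ollama run gemma2-32layers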


@dhiltgen commented on GitHub (Sep 4, 2024):

@qazimurtazafair you mention a 5700 as well, but I don't see it discovered in the logs. Keep in mind we cannot split a single model across different vendors' GPUs, so a mixed AMD + NVIDIA setup like this is only useful for loading multiple models concurrently. Do you have the AMD drivers installed on your system? If so, we should dig into that to understand why it's not being discovered. OLLAMA_DEBUG=1 logs may help with that.

Looking at the initial logs you shared, it looks like you're attempting to load a model that is ~39.3G on a system that has 16G VRAM + 23G RAM (~39G in total), so you're likely pushing into swap and thrashing, which may be contributing to the problem. We have some checks in place to detect and prevent loading a model that is impossible on the current system, but we include swap in the calculation since some users are OK with slow performance to load larger models.

As to gemma2:27b, that's going to be a tad big to fully fit on your GPU given other overheads, but our goal is that it should load the correct number of layers automatically. I don't have an identical test system, but on a 16G GPU on Linux with no overhead, we're able to load 44/47 layers without crashing. Could you share a log of ollama attempting to load it with the default gpu setting (and crashing) and then one with your manually reduced num_gpu setting, along with the nvidia-smi output after it loads?
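One way to capture that on Windows, assuming Ollama is run manually from a console rather than the tray app (a sketch, not an exact recipe):

set OLLAMA_DEBUG=1
ollama serve

and in a second window, reproduce the crash and record the GPU state:

ollama run gemma2:27b
nvidia-smi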


@oussemah commented on GitHub (Jan 5, 2025):

Is it normal to see this error when I still have a couple of GB of VRAM free after loading the model? RTX 3090 + RTX 4060 Ti (40G VRAM total); the model is qwen 32b q4, size is 36, 100% on GPU, and I still see the issue.

Maybe the following log line, showing when the error happens, can help with understanding:
Jan 05 19:21:28 node1 ollama[2611]: time=2025-01-05T19:21:28.341+01:00 level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="46.8 GiB" before.free="38.4 GiB" before.free_swap="62.5 GiB" now.total="46.8 GiB" now.free="37.8 GiB" now.free_swap="62.5 GiB"

When loading the model I see this:

Jan 05 19:16:25 node1 ollama[2611]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 05 19:16:25 node1 ollama[2611]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 05 19:16:25 node1 ollama[2611]: ggml_cuda_init: found 2 CUDA devices:
Jan 05 19:16:25 node1 ollama[2611]: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Jan 05 19:16:25 node1 ollama[2611]: Device 1: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: ggml ctx size = 1.01 MiB
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: offloading 64 repeating layers to GPU
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: offloading non-repeating layers to GPU
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: offloaded 65/65 layers to GPU
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: CPU buffer size = 417.66 MiB
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: CUDA0 buffer size = 10032.23 MiB
Jan 05 19:16:25 node1 ollama[2611]: llm_load_tensors: CUDA1 buffer size = 8476.12 MiB
Jan 05 19:16:26 node1 ollama[2611]: time=2025-01-05T19:16:26.080+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.12"
Jan 05 19:16:26 node1 ollama[2611]: time=2025-01-05T19:16:26.331+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.31"
Jan 05 19:16:26 node1 ollama[2611]: time=2025-01-05T19:16:26.582+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.50"
Jan 05 19:16:26 node1 ollama[2611]: time=2025-01-05T19:16:26.833+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.61"
Jan 05 19:16:27 node1 ollama[2611]: [GIN] 2025/01/05 - 19:16:27 | 200 | 1.505264ms | 127.0.0.1 | GET "/api/tags"
Jan 05 19:16:27 node1 ollama[2611]: time=2025-01-05T19:16:27.083+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.70"
Jan 05 19:16:27 node1 ollama[2611]: time=2025-01-05T19:16:27.334+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.78"
Jan 05 19:16:27 node1 ollama[2611]: time=2025-01-05T19:16:27.585+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.86"
Jan 05 19:16:27 node1 ollama[2611]: time=2025-01-05T19:16:27.836+01:00 level=DEBUG source=server.go:621 msg="model load progress 0.94"
Jan 05 19:16:28 node1 ollama[2611]: time=2025-01-05T19:16:28.087+01:00 level=DEBUG source=server.go:621 msg="model load progress 1.00"
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: n_ctx = 32000
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: n_batch = 512
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: n_ubatch = 512
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: flash_attn = 0
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: freq_base = 1000000.0
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: freq_scale = 1
Jan 05 19:16:28 node1 ollama[2611]: llama_kv_cache_init: CUDA0 KV buffer size = 4500.00 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_kv_cache_init: CUDA1 KV buffer size = 3500.00 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: KV self size = 8000.00 MiB, K (f16): 4000.00 MiB, V (f16): 4000.00 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: CUDA_Host output buffer size = 0.60 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: CUDA0 compute buffer size = 2830.01 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: CUDA1 compute buffer size = 2830.02 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: CUDA_Host compute buffer size = 260.02 MiB
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: graph nodes = 2246
Jan 05 19:16:28 node1 ollama[2611]: llama_new_context_with_model: graph splits = 3
Jan 05 19:16:28 node1 ollama[2611]: time=2025-01-05T19:16:28.339+01:00 level=INFO source=server.go:615 msg="llama runner started in 3.26 seconds"


@rick-github commented on GitHub (Jan 5, 2025):

It's easier to debug if the full log is available.
