[GH-ISSUE #5331] version 1.47 downloaded, gemma2 error #3337

Closed
opened 2026-04-12 13:55:52 -05:00 by GiteaMirror · 12 comments
Owner

Originally created by @tinycrops on GitHub (Jun 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5331

### What is the issue?

```
Jun 27 12:06:15 ollama[11759]: INFO [main] build info | build=1 commit="7c26775" tid="124734763667456" timestamp=1719504375
Jun 27 12:06:15 ollama[11759]: INFO [main] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VN>
Jun 27 12:06:15 ollama[11759]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="42375" tid="124734763667456" timestamp=1719504375
Jun 27 12:06:15 ollama[10798]: llama_model_loader: loaded meta data with 32 key-value pairs and 464 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-e84ed7399c82fbf7db>
Jun 27 12:06:15 ollama[10798]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   0:                gemma2.attention.head_count u32              = 16
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   1:             gemma2.attention.head_count_kv u32              = 8
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   2:                gemma2.attention.key_length u32              = 256
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   3:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   4:              gemma2.attention.value_length u32              = 256
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   5:                         gemma2.block_count u32              = 42
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   6:                      gemma2.context_length u32              = 8192
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   7:                    gemma2.embedding_length u32              = 3584
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   8:                 gemma2.feed_forward_length u32              = 14336
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv   9:                       general.architecture str              = gemma2
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  10:                          general.file_type u32              = 2
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  11:                               general.name str              = gemma2
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  12:               general.quantization_version u32              = 2
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  13:                    tokenizer.chat_template str              = {{ bos_token }}{% if messages[0]['rol...
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  14:               tokenizer.ggml.add_bos_token bool             = true
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  15:               tokenizer.ggml.add_eos_token bool             = false
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  16:           tokenizer.ggml.add_padding_token bool             = false
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  17:           tokenizer.ggml.add_unknown_token bool             = false
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 2
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 1
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  20:                tokenizer.ggml.eot_token_id u32              = 107
Jun 27 12:06:15 ollama[10798]: time=2024-06-27T12:06:15.876-04:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,580604]  = ["\n \n", "\n \n\n", "\n\n \n", "\n \n\n\n", "\n\n ...
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  22:             tokenizer.ggml.middle_token_id u32              = 68
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = llama
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  24:            tokenizer.ggml.padding_token_id u32              = 0
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  25:                         tokenizer.ggml.pre str              = default
Jun 27 12:06:15 ollama[10798]: llama_model_loader: - kv  26:             tokenizer.ggml.prefix_token_id u32              = 67
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - kv  27:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - kv  28:             tokenizer.ggml.suffix_token_id u32              = 69
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - kv  30:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - kv  31:            tokenizer.ggml.unknown_token_id u32              = 3
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - type  f32:  169 tensors
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - type q4_0:  294 tensors
Jun 27 12:06:16 ollama[10798]: llama_model_loader: - type q6_K:    1 tensors
Jun 27 12:06:16 ollama[10798]: llm_load_vocab: special tokens cache size = 260
Jun 27 12:06:16 ollama[10798]: llm_load_vocab: token to piece cache size = 1.6014 MB
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: format           = GGUF V3 (latest)
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: arch             = gemma2
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: vocab type       = SPM
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_vocab          = 256000
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_merges         = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_ctx_train      = 8192
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_embd           = 3584
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_head           = 16
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_head_kv        = 8
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_layer          = 42
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_rot            = 224
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_embd_head_k    = 256
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_embd_head_v    = 256
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_gqa            = 2
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_embd_k_gqa     = 2048
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_embd_v_gqa     = 2048
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_ff             = 14336
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_expert         = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_expert_used    = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: causal attn      = 1
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: pooling type     = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: rope type        = 2
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: rope scaling     = linear
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: freq_base_train  = 10000.0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: freq_scale_train = 1
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: n_ctx_orig_yarn  = 8192
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: rope_finetuned   = unknown
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: ssm_d_conv       = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: ssm_d_inner      = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: ssm_d_state      = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: ssm_dt_rank      = 0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: model type       = ?B
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: model ftype      = Q4_0
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: model params     = 9.24 B
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: model size       = 5.06 GiB (4.71 BPW)
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: general.name     = gemma2
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: BOS token        = 2 '<bos>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: EOS token        = 1 '<eos>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: UNK token        = 3 '<unk>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: PAD token        = 0 '<pad>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: LF token         = 227 '<0x0A>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: PRE token        = 67 '<unused60>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: SUF token        = 69 '<unused62>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: MID token        = 68 '<unused61>'
Jun 27 12:06:16 ollama[10798]: llm_load_print_meta: EOT token        = 107 '<end_of_turn>'
Jun 27 12:06:16 ollama[10798]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
Jun 27 12:06:16 ollama[10798]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Jun 27 12:06:16 ollama[10798]: ggml_cuda_init: found 2 CUDA devices:
Jun 27 12:06:16 ollama[10798]:   Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes
Jun 27 12:06:16 ollama[10798]:   Device 1: NVIDIA GeForce GTX 1060 3GB, compute capability 6.1, VMM: yes
Jun 27 12:06:16 ollama[10798]: llm_load_tensors: ggml ctx size =    0.68 MiB
Jun 27 12:06:16 ollama[10798]: llm_load_tensors: offloading 42 repeating layers to GPU
Jun 27 12:06:16 ollama[10798]: llm_load_tensors: offloading non-repeating layers to GPU
Jun 27 12:06:16 ollama[10798]: llm_load_tensors: offloaded 43/43 layers to GPU
Jun 27 12:06:16 ollama[10798]: llm_load_tensors:        CPU buffer size =   717.77 MiB
Jun 27 12:06:16 ollama[10798]: llm_load_tensors:      CUDA0 buffer size =  2765.55 MiB
Jun 27 12:06:16 ollama[10798]: llm_load_tensors:      CUDA1 buffer size =  2419.66 MiB
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: n_ctx      = 2048
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: n_batch    = 512
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: n_ubatch   = 512
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: flash_attn = 0
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: freq_base  = 10000.0
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: freq_scale = 1
Jun 27 12:06:18 ollama[10798]: llama_kv_cache_init:      CUDA0 KV buffer size =   416.00 MiB
Jun 27 12:06:18 ollama[10798]: llama_kv_cache_init:      CUDA1 KV buffer size =   256.00 MiB
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: KV self size  =  672.00 MiB, K (f16):  336.00 MiB, V (f16):  336.00 MiB
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model:  CUDA_Host  output buffer size =     0.99 MiB
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Jun 27 12:06:18 ollama[10798]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 551.02 MiB on device 1: cudaMalloc failed: out of memory
Jun 27 12:06:18 ollama[10798]: ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 577781760
Jun 27 12:06:18 ollama[10798]: llama_new_context_with_model: failed to allocate compute buffers
Jun 27 12:06:18 ollama[10798]: llama_init_from_gpt_params: error: failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-e84ed7399c82fbf7dbd6cdef3f12>
Jun 27 12:06:18 ollama[11759]: ERROR [load_model] unable to load model | model="/usr/share/ollama/.ollama/models/blobs/sha256-e84ed7399c82fbf7dbd6cdef3f12d356c3cdb5512e5d8b2a989>
Jun 27 12:06:18 ollama[10798]: terminate called without an active exception
Jun 27 12:06:18 ollama[10798]: time=2024-06-27T12:06:18.506-04:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
Jun 27 12:06:18 ollama[10798]: time=2024-06-27T12:06:18.757-04:00 level=ERROR source=sched.go:388  msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model 
Jun 27 12:06:18 ollama[10798]: [GIN] 2024/06/27 - 12:06:18 | 500 |  3.394458507s |       127.0.0.1 | POST     "/api/chat"
Jun 27 12:06:23 ollama[10798]: time=2024-06-27T12:06:23.929-04:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.171683469 model=/us>
Jun 27 12:06:24 ollama[10798]: time=2024-06-27T12:06:24.179-04:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.421977546 model=/us>
Jun 27 12:06:24 ollama[10798]: time=2024-06-27T12:06:24.429-04:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.67175286 model=/usr>
```

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

1.47
GiteaMirror added the bug label 2026-04-12 13:55:52 -05:00
Author
Owner

@Qualzz commented on GitHub (Jun 27, 2024):

The model does load for me, but the quantized versions are really bad.
There is still this llama.cpp pull request that hasn't been merged yet, so I don't know whether it's related:
https://github.com/ggerganov/llama.cpp/pull/8156

Author
Owner

@mchiang0610 commented on GitHub (Jun 27, 2024):

@Qualzz sorry about that. What are you seeing, in terms of "really bad", to help us troubleshoot?

We did not use the llama.cpp PR, since we were collaborating directly with Google, and other llama.cpp maintainers were in the same thread. We're looking into this to help troubleshoot, though, since it will also help make the llama.cpp implementation good.

Author
Owner

@bartowski1182 commented on GitHub (Jun 27, 2024):

Isn't the issue here that you ran out of memory?

Author
Owner

@alessandromalacarne commented on GitHub (Jun 27, 2024):

I have the same issue with every Gemma 2 9B quantization

Author
Owner

@rick-github commented on GitHub (Jun 27, 2024):

Also having problems loading the model. Reducing the `num_gpu` value allows the model to load. Seems very similar to the deepseek2 problems earlier (#4799, #5113).
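
For reference, a minimal sketch of lowering `num_gpu` (the number of layers offloaded to the GPU); the value 30 below is only an illustration and has to be tuned to the available VRAM:

```
# Interactively, inside `ollama run gemma2`:
/set parameter num_gpu 30

# Or per request through the REST API:
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 30 }
}'
```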

Author
Owner

@jmagder commented on GitHub (Jun 27, 2024):

I am running the ROCm version through Docker, and when I run `ollama --version` it still says 1.46, despite downloading the 1.47 tag. Perhaps the 1.47 build didn't actually succeed?
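
One way to rule out a stale local image is to pull an explicitly pinned tag, recreate the container, and check the version from inside it. A sketch, assuming the registry publishes a pinned `0.1.47-rocm` tag and that the container is named `ollama`:

```
docker pull ollama/ollama:0.1.47-rocm
docker rm -f ollama
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.47-rocm
docker exec -it ollama ollama --version   # should now report 0.1.47
```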

Author
Owner

@s-kostyaev commented on GitHub (Jun 28, 2024):

> What are you seeing in terms of really bad to help us troubleshoot.

```
ollama run gemma2:27b-instruct-q4_K_M
>>> Расскажи мне про срыв контакта в Гештальт терапии
В гештальт-терапии **"срыв контакта"** (контакт-прерывание) - это распространённый термин, который описывает способ, которым люди избегают или не могут столкнуться с реальностью. 

Это происходит, когда мы не можем или не хотим принимать что-то из нашего опыта, будь то чувство, мысль, потребность или ситуацию.

**Примеры срыва контакта:**

* **Избегание:** Срыв контакта может проявляться в том, что человек избегает определенных людей, мест, ситуаций или чувств, которые вызывают у него тревогу или дискомфорт.
* **Слияние:** Срыв контакта может также быть результатом слишком сильного стремления к слиянию с другими, боясь потерять связь и не желая быть собой.
* **Подавление:** Человек может подавлять свои эмоции, потребности или желания, чтобы избежать конфликта.
* **Интеллектуализация:** Слишком много внимания уделяется интеллектуальному анализу ситуации, и мало -  эмоциональному опыту.
* **Негативизм:**

**Причины срыва контакта:**

* **Неудовлетворенные потребности:** Необходимость в одобрении, любви, безопасности и признании.
* **Непонимание:** Недостаток ясности в своих потребностях, желаниях и чувствах.
* **Недостаток навыков:** Неумение идентифицировать и удовлетворять свои потребности в здоровом, осознанном и конструктивном способе.

* **Страх быть отвергнутым:**

* **Негативные установки:**

**Как терапевт помогает при срыве контакта:**

В гештальт-терапии, терапевт помогает клиенту

* **Осознать невыраженные потребности и чувства:**

* **Понять, как страх и проекты мешают быть настоящим:**

* **Развить навыки, необходимые для установления и поддержания здорового контакта:**

* **Принять ответственность за свой опыт:**

* **Работать с этими проблемами в терапии:**

**Важно:**

* Терапевт не дает советов, а

* **Работать с этими про��ктами и работать над его разрешением:**

* **Проанализировать причины:**

* **Проектирование:**

* **Проработки срыва контакта:**

* **Поддержка в терапии:**

В гештальт-терапии, терапевт

* **Помогает клиенту понять**

* **Терапевт не "лечит"**

* **Вместо того, чтобы давать советы, терапевт помогает**

* **Поиск истребителей:**

* **Признание и принятие проекций:**

* **Поиск помощи:**

В гештальт-терапии терапевт не осуждает клиента за его

* **Терапевт не даёт советы, а помогает клиенту:**

* **Поиск ресурсов для удовлетворения:**

* **Проработка проекций:**

В гештальт-терапии,

* **"Проекция"**

* **Терапевт не навязывает клиенту никаких:**

* **Понимание и принятие того, что клиент не может:**

* **Вместо этого, терапевт помогает клиенту**

* **Понять, как он сам**

* **С помощью терапевта, клиент может:**

* **Поиск и признание проекций**

* **В гештальт-терапии,

* **Важно отметить, что**

* **Терапевт как "зеркало":**

* **"Срыв контакта"**

* **Срыв контакта в тераштапит-терапии -

* **Это может быть**

* **"Контакт" в этом контексте**

* **означает**

* **значит, что**

* **включает в себя**

* **Потребность в "контакте"**

* **встречается в**

* **Срыв контакта в терапии**

* **может быть**

* **Проработка "срыва"**

* **означает:**

* **"Срыв" в терапии**

* **может быть результатом**

* **"контакта"**

* **позволяет клиенту**

* **встретить**

* **и "контакта" с собой:**

* **Признания и**

* **понять**

* **его роль в**

* **уходит с собой, и**

* **и**

* **влияет на**

* **и**

* **в**

* **и**

* **"Срыв контакта"**

* **в терапии**

* **означает, что**

* **"контакта"**

* **"контакта"**

* **в терапии**

* **и**

* **"Проблемы":**

* **

[... the answer degenerates into several hundred more lines of empty bullet points and stray bold markers, with only occasional word fragments, until generation is interrupted ...]

^C
```

Sorry for the long message; I'm not sure how to make it readable, but it shows the problem.

Author
Owner

@helloworld00 commented on GitHub (Jun 28, 2024):

Same error with gemma2:27b.
It works in CPU-only mode with `CUDA_VISIBLE_DEVICES=-1`.
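
A minimal sketch of applying that CPU-only workaround (assuming the default systemd install on Linux; the variable has to be set for the `ollama serve` process, not for the client):

```
# One-off, running the server in the foreground:
CUDA_VISIBLE_DEVICES=-1 ollama serve

# Or persistently for the systemd service:
sudo systemctl edit ollama.service
#   add under [Service]:
#   Environment="CUDA_VISIBLE_DEVICES=-1"
sudo systemctl restart ollama
```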

### Platform

A PVE guest

### OS

Ubuntu 20.04

### GPU

Nvidia 4060 Ti 16GB (GPU passthrough)

### CPU

Intel (host)

### Ollama version

ollama version is 0.1.47

```
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: loaded meta data with 32 key-value pairs and 508 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72 (version GGUF V3 (latest))
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 0: gemma2.attention.head_count u32 = 32
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 1: gemma2.attention.head_count_kv u32 = 16
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 2: gemma2.attention.key_length u32 = 128
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 3: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 4: gemma2.attention.value_length u32 = 128
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 5: gemma2.block_count u32 = 46
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 6: gemma2.context_length u32 = 8192
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 7: gemma2.embedding_length u32 = 4608
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 8: gemma2.feed_forward_length u32 = 36864
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 9: general.architecture str = gemma2
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 10: general.file_type u32 = 2
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 11: general.name str = gemma2
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 12: general.quantization_version u32 = 2
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 13: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol...
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 14: tokenizer.ggml.add_bos_token bool = true
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 15: tokenizer.ggml.add_eos_token bool = false
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 16: tokenizer.ggml.add_padding_token bool = false
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 17: tokenizer.ggml.add_unknown_token bool = false
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 2
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 1
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 20: tokenizer.ggml.eot_token_id u32 = 107
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,580604] = ["\n \n", "\n \n\n", "\n\n \n", "\n \n\n\n", "\n\n ...
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 22: tokenizer.ggml.middle_token_id u32 = 68
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 23: tokenizer.ggml.model str = llama
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 24: tokenizer.ggml.padding_token_id u32 = 0
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 25: tokenizer.ggml.pre str = default
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 26: tokenizer.ggml.prefix_token_id u32 = 67
Jun 28 15:06:27 zw ollama[57669]: time=2024-06-28T15:06:27.668+08:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 27: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 28: tokenizer.ggml.suffix_token_id u32 = 69
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 30: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - kv 31: tokenizer.ggml.unknown_token_id u32 = 3
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - type f32: 185 tensors
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - type q4_0: 322 tensors
Jun 28 15:06:27 zw ollama[57669]: llama_model_loader: - type q6_K: 1 tensors
Jun 28 15:06:27 zw ollama[57669]: llm_load_vocab: special tokens cache size = 260
Jun 28 15:06:28 zw ollama[57669]: llm_load_vocab: token to piece cache size = 1.6014 MB
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: format = GGUF V3 (latest)
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: arch = gemma2
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: vocab type = SPM
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_vocab = 256000
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_merges = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_ctx_train = 8192
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_embd = 4608
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_head = 32
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_head_kv = 16
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_layer = 46
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_rot = 144
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_embd_head_k = 128
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_embd_head_v = 128
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_gqa = 2
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_embd_k_gqa = 2048
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_embd_v_gqa = 2048
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: f_logit_scale = 0.0e+00
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_ff = 36864
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_expert = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_expert_used = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: causal attn = 1
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: pooling type = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: rope type = 2
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: rope scaling = linear
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: freq_base_train = 10000.0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: freq_scale_train = 1
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: n_ctx_orig_yarn = 8192
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: rope_finetuned = unknown
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: ssm_d_conv = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: ssm_d_inner = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: ssm_d_state = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: ssm_dt_rank = 0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: model type = ?B
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: model ftype = Q4_0
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: model params = 27.23 B
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: model size = 14.55 GiB (4.59 BPW)
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: general.name = gemma2
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: BOS token = 2 '<bos>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: EOS token = 1 '<eos>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: UNK token = 3 '<unk>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: PAD token = 0 '<pad>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: LF token = 227 '<0x0A>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: PRE token = 67 '<unused60>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: SUF token = 69 '<unused62>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: MID token = 68 '<unused61>'
Jun 28 15:06:28 zw ollama[57669]: llm_load_print_meta: EOT token = 107 '<end_of_turn>'
Jun 28 15:06:28 zw ollama[57669]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
Jun 28 15:06:28 zw ollama[57669]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Jun 28 15:06:28 zw ollama[57669]: ggml_cuda_init: found 1 CUDA devices:
Jun 28 15:06:28 zw ollama[57669]: Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
Jun 28 15:06:28 zw ollama[57669]: llm_load_tensors: ggml ctx size = 0.49 MiB
Jun 28 15:06:28 zw ollama[57669]: llm_load_tensors: offloading 46 repeating layers to GPU
Jun 28 15:06:28 zw ollama[57669]: llm_load_tensors: offloaded 46/47 layers to GPU
Jun 28 15:06:28 zw ollama[57669]: llm_load_tensors: CPU buffer size = 922.87 MiB
Jun 28 15:06:28 zw ollama[57669]: llm_load_tensors: CUDA0 buffer size = 13975.73 MiB
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: n_ctx = 2048
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: n_batch = 512
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: n_ubatch = 512
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: flash_attn = 0
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: freq_base = 10000.0
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: freq_scale = 1
Jun 28 15:06:31 zw ollama[57669]: llama_kv_cache_init: CUDA0 KV buffer size = 736.00 MiB
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: KV self size = 736.00 MiB, K (f16): 368.00 MiB, V (f16): 368.00 MiB
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: CUDA_Host output buffer size = 0.99 MiB
Jun 28 15:06:31 zw ollama[57669]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1431.85 MiB on device 0: cudaMalloc failed: out of memory
Jun 28 15:06:31 zw ollama[57669]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1501405184
Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: failed to allocate compute buffers
Jun 28 15:06:31 zw ollama[57669]: llama_init_from_gpt_params: error: failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'
Jun 28 15:06:31 zw ollama[57722]: ERROR [load_model] unable to load model | model="/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72" tid="140134877859840" timestamp=1719558391
Jun 28 15:06:31 zw ollama[57669]: terminate called without an active exception
Jun 28 15:06:31 zw ollama[57669]: time=2024-06-28T15:06:31.335+08:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
Jun 28 15:06:31 zw ollama[57669]: time=2024-06-28T15:06:31.585+08:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'"
Jun 28 15:06:31 zw ollama[57669]: [GIN] 2024/06/28 - 15:06:31 | 500 | 4.349121077s | 127.0.0.1 | POST "/api/chat"
Jun 28 15:06:36 zw ollama[57669]: time=2024-06-28T15:06:36.723+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.137967677 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72
Jun 28 15:06:36 zw ollama[57669]: time=2024-06-28T15:06:36.973+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.387437444 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72
Jun 28 15:06:37 zw ollama[57669]: time=2024-06-28T15:06:37.222+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.637020539 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72
```

1431.85 MiB on device 0: cudaMalloc failed: out of memory Jun 28 15:06:31 zw ollama[57669]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1501405184 Jun 28 15:06:31 zw ollama[57669]: llama_new_context_with_model: failed to allocate compute buffers Jun 28 15:06:31 zw ollama[57669]: llama_init_from_gpt_params: error: failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72' Jun 28 15:06:31 zw ollama[57722]: ERROR [load_model] unable to load model | model="/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72" tid="140134877859840" timestamp=1719558391 Jun 28 15:06:31 zw ollama[57669]: terminate called without an active exception Jun 28 15:06:31 zw ollama[57669]: time=2024-06-28T15:06:31.335+08:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error" Jun 28 15:06:31 zw ollama[57669]: time=2024-06-28T15:06:31.585+08:00 level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'" Jun 28 15:06:31 zw ollama[57669]: [GIN] 2024/06/28 - 15:06:31 | 500 | 4.349121077s | 127.0.0.1 | POST "/api/chat" Jun 28 15:06:36 zw ollama[57669]: time=2024-06-28T15:06:36.723+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.137967677 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72 Jun 28 15:06:36 zw ollama[57669]: time=2024-06-28T15:06:36.973+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.387437444 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72 Jun 28 15:06:37 zw ollama[57669]: time=2024-06-28T15:06:37.222+08:00 level=WARN source=sched.go:575 msg="gpu VRAM usage didn't recover within timeout" seconds=5.637020539 model=/usr/share/ollama/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72
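
If anyone else needs the CUDA_VISIBLE_DEVICES=-1 workaround mentioned above while a fix lands, here is a minimal sketch for a systemd-managed Linux install. It assumes the default `ollama.service` unit created by the install script; CPU-only inference of a 27B model will be very slow, so this mainly confirms the failure is GPU-memory related:

```bash
# Force CPU-only inference by hiding all CUDA devices from the ollama service.
# Assumes the default systemd unit name "ollama.service" from the Linux installer.
sudo systemctl edit ollama.service
# In the override file that opens, add:
#   [Service]
#   Environment="CUDA_VISIBLE_DEVICES=-1"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Once a fixed build is installed, the override can be removed again with `sudo systemctl revert ollama.service`.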

<!-- gh-comment-id:2196603105 --> @welkson commented on GitHub (Jun 28, 2024):

Same problem here on gemma2:27b (9b works):

llama_kv_cache_init: CUDA_Host KV buffer size = 48.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 688.00 MiB
llama_new_context_with_model: KV self size = 736.00 MiB, K (f16): 368.00 MiB, V (f16): 368.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.99 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1431.85 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1501405184
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/root/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'
ERROR [load_model] unable to load model | model="/root/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72" tid="139853533863936" timestamp=1719570519
terminate called without an active exception
time=2024-06-28T10:28:40.115Z level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
time=2024-06-28T10:28:40.616Z level=ERROR source=sched.go:388 msg="error loading llama server" error="llama runner process has terminated: signal: aborted (core dumped) error:failed to create context with model '/root/.ollama/models/blobs/sha256-b6ee2328408ebc031359e9745973b09963df9269468d37e1ea7912862aadec72'"

Ollama version:

```bash
kubectl -n ifrn-openwebui exec -it open-webui-ollama-6fc89c5fd4-jmqr6 -- ollama --version

ollama version is 0.1.47
```

GPU info:

```bash
kubectl -n ifrn-openwebui exec -it open-webui-ollama-6fc89c5fd4-jmqr6 -- nvidia-smi
Fri Jun 28 10:36:09 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:13:00.0 Off |                    0 |
| N/A   40C    P8    15W /  70W |      3MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

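Until the underlying bug is fixed, the usual knob for this kind of compute-buffer OOM is to offload fewer layers yourself. This is only a generic sketch, not something confirmed in this thread; the value 40 is illustrative and should be lowered until cudaMalloc stops failing:

```bash
# Request fewer GPU-offloaded layers so the CUDA compute buffer fits in VRAM.
# num_gpu is a standard Ollama option; tune it down until the model loads.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:27b",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 40 }
}'
```

The same setting can be baked into a custom Modelfile with `PARAMETER num_gpu 40` if you would rather not pass it on every request.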

<!-- gh-comment-id:2196850196 --> @rick-github commented on GitHub (Jun 28, 2024):

https://github.com/ollama/ollama/commit/1ed4f521c403025050c509394fb4ac3ca2466865 resolves (for me) the problem of OOM during model load. Still having problems with some prompts generating rubbish.
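
For anyone wanting to try that commit before a tagged release, a rough sketch of building from source follows (toolchain requirements per the repo's development docs at the time: Go plus cmake and a C/C++ compiler; the exact steps may differ on newer versions):

```bash
# Build ollama at the specific commit that reportedly fixes the load-time OOM.
git clone https://github.com/ollama/ollama.git
cd ollama
git checkout 1ed4f521c403025050c509394fb4ac3ca2466865
go generate ./...   # generates and builds the bundled llama.cpp runners
go build .
./ollama serve      # run the locally built server
```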


<!-- gh-comment-id:2198368767 --> @jmorganca commented on GitHub (Jun 29, 2024):

Hi folks, this should be fixed as of 0.1.48. @rick-github try re-pulling the model as well: `ollama pull gemma`. Let me know if you're still seeing errors!
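
For a standard Linux install, an upgrade-and-repull sequence along these lines should pick up 0.1.48; the `gemma2:27b` tag is just the model discussed in this thread, and Kubernetes deployments like the one above would bump the container image tag instead:

```bash
# Upgrade ollama with the official install script, then re-pull the model.
curl -fsSL https://ollama.com/install.sh | sh
ollama --version        # expect 0.1.48 or newer
ollama pull gemma2:27b  # re-pull so the updated upload is used
ollama run gemma2:27b "Hello"
```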


<!-- gh-comment-id:2198369048 --> @rick-github commented on GitHub (Jun 29, 2024):

Yep, all good, thanks.

Reference: github-starred/ollama#3337