[GH-ISSUE #3683] mixtral:22b OLLAMA 0.1.32 llama runner process no longer running: -1 cudaMalloc failed: out of memory #64305

Closed
opened 2026-05-03 17:01:11 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @subhashdasyam on GitHub (Apr 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3683

What is the issue?

Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.106+04:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.107+04:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3415574743/runners/cuda_v11/libcudart.so.11.0]"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.108+04:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.108+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.201+04:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.240+04:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.240+04:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3415574743/runners/cuda_v11/libcudart.so.11.0]"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.241+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.318+04:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=server.go:120 msg="offload to gpu" reallayers=34 layers=34 required="76868.7 MiB" used="46864.5 MiB" available="47268.4 MiB" kv="448.0 MiB" fulloffload="244.0 MiB">
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.358+04:00 level=INFO source=server.go:257 msg="starting llama server" cmd="/tmp/ollama3415574743/runners/cuda_v11/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-373>
Apr 17 01:25:09 ai-pc ollama[56169]: time=2024-04-17T01:25:09.359+04:00 level=INFO source=server.go:382 msg="waiting for llama runner to start responding"
Apr 17 01:25:09 ai-pc ollama[58558]: {"function":"server_params_parse","level":"INFO","line":2599,"msg":"logging to file is disabled.","tid":"136407237222400","timestamp":1713302709}
Apr 17 01:25:09 ai-pc ollama[58558]: {"build":1,"commit":"7593639","function":"main","level":"INFO","line":2795,"msg":"build info","tid":"136407237222400","timestamp":1713302709}
Apr 17 01:25:09 ai-pc ollama[58558]: {"function":"main","level":"INFO","line":2798,"msg":"system info","n_threads":14,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON>
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: loaded meta data with 25 key-value pairs and 563 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-373c4038c2d0dad733d6d29d5f635b7fda61ffa972ab3c4d89e516a7c0bdd80c (version GGUF V3 (lates>
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   0:                       general.architecture str              = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   1:                               general.name str              = v2ray
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   2:                           llama.vocab_size u32              = 32000
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   3:                       llama.context_length u32              = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   4:                     llama.embedding_length u32              = 6144
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   5:                          llama.block_count u32              = 56
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 16384
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 48
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  10:                         llama.expert_count u32              = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  11:                    llama.expert_used_count u32              = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  13:                       llama.rope.freq_base f32              = 1000000.000000
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  14:                          general.file_type u32              = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - kv  24:               general.quantization_version u32              = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type  f32:  113 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type  f16:   56 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q4_0:  281 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q8_0:  112 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llama_model_loader: - type q6_K:    1 tensors
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_vocab: special tokens definition check successful ( 259/32000 ).
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: format           = GGUF V3 (latest)
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: arch             = llama
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: vocab type       = SPM
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_vocab          = 32000
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_merges         = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_ctx_train      = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd           = 6144
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_head           = 48
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_head_kv        = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_layer          = 56
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_rot            = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_head_k    = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_head_v    = 128
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_gqa            = 6
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_k_gqa     = 1024
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_embd_v_gqa     = 1024
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_ff             = 16384
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_expert         = 8
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_expert_used    = 2
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: causal attn      = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: pooling type     = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope type        = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope scaling     = linear
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: freq_base_train  = 1000000.0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: freq_scale_train = 1
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: n_yarn_orig_ctx  = 65536
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: rope_finetuned   = unknown
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_conv       = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_inner      = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_d_state      = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: ssm_dt_rank      = 0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model type       = 8x22B
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model ftype      = Q4_0
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model params     = 140.62 B
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: model size       = 74.05 GiB (4.52 BPW)
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: general.name     = v2ray
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: BOS token        = 1 '<s>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: EOS token        = 2 '</s>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: UNK token        = 0 '<unk>'
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_print_meta: LF token         = 13 '<0x0A>'
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
Apr 17 01:25:09 ai-pc ollama[56169]: ggml_cuda_init: found 2 CUDA devices:
Apr 17 01:25:09 ai-pc ollama[56169]:   Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
Apr 17 01:25:09 ai-pc ollama[56169]:   Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Apr 17 01:25:09 ai-pc ollama[56169]: llm_load_tensors: ggml ctx size =    1.16 MiB
Apr 17 01:25:10 ai-pc ollama[56169]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 24289.03 MiB on device 0: cudaMalloc failed: out of memory
Apr 17 01:25:11 ai-pc ollama[56169]: llama_model_load: error loading model: unable to allocate backend buffer
Apr 17 01:25:11 ai-pc ollama[56169]: llama_load_model_from_file: exception loading model
Apr 17 01:25:11 ai-pc ollama[56169]: terminate called after throwing an instance of 'std::runtime_error'
Apr 17 01:25:11 ai-pc ollama[56169]:   what():  unable to allocate backend buffer
Apr 17 01:25:11 ai-pc ollama[56169]: time=2024-04-17T01:25:11.819+04:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: -1 cudaMalloc failed: out of memory"
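For context, the sizes in the log above line up with each other. A rough standalone check (illustrative arithmetic only, not Ollama code; VRAM totals taken from the nvidia-smi output further down) shows that the Q4_0 weights alone exceed the combined VRAM of the two cards, so only partial offload is possible, and the 24289 MiB buffer requested on device 0 leaves almost no headroom on a 24564 MiB card that is also running a display server:

```python
# Back-of-envelope check of the figures reported in the log
# (illustrative only, not part of Ollama).

params = 140.62e9    # "model params = 140.62 B"
bpw = 4.52           # "model size = 74.05 GiB (4.52 BPW)"

# bits per param -> total bytes -> GiB
model_gib = params * bpw / 8 / 2**30
print(f"weights: {model_gib:.2f} GiB")   # ~74 GiB, matching the log

# Combined VRAM of the 4090 + 3090 (MiB from nvidia-smi -> GiB)
vram_gib = (24564 + 24576) / 1024
print(f"total VRAM: {vram_gib:.2f} GiB")  # ~48 GiB

# Weights alone don't fit, hence the partial "offload to gpu ...
# layers=34" and the tight 24289 MiB allocation that failed.
assert model_gib > vram_gib
```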

What did you expect to see?

No response

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Linux

Architecture

amd64

Platform

No response

Ollama version

0.1.32

GPU

Nvidia

GPU info

Wed Apr 17 01:30:45 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67 Driver Version: 550.67 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 Off | Off |
| 0% 49C P8 27W / 450W | 15MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 Off | 00000000:0D:00.0 On | N/A |
| 0% 53C P0 109W / 370W | 527MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2581 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 2581 G /usr/lib/xorg/Xorg 290MiB |
| 1 N/A N/A 2698 G /usr/bin/gnome-shell 51MiB |
| 1 N/A N/A 4417 G firefox 168MiB |
+-----------------------------------------------------------------------------------------+

CPU

Intel

Other software

No response

GiteaMirror added the gpu, bug, nvidia labels 2026-05-03 17:01:13 -05:00
Author
Owner

@jmorganca commented on GitHub (Apr 17, 2024):

Hi there, this should be fixed as of the final release version of 0.1.32 (you may need to install it again). If you see more OOM errors please create an issue - thanks so much for all the details 😄

Author
Owner

@hbqdev commented on GitHub (Apr 19, 2024):

Hi @jmorganca
I used the latest install as of today and still see this issue. Can you please confirm if it has been fixed?

Thank you so much

Author
Owner

@subhashdasyam commented on GitHub (Apr 19, 2024):

> Hi @jmorganca I used the latest install as of today and still see this issue. Can you please confirm if it has been fixed?
>
> Thank you so much

Nope, it's not fixed for me. I am waiting for the final release; hopefully it gets fixed by then.
My specs are:

4090
3090
128 GB DDR5 RAM
i7 14th Gen

Author
Owner

@vasanthsarathy commented on GitHub (Apr 19, 2024):

Is this a similar error to:

Error: llama runner process no longer running: 1 error:failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-7ec0c94a95cafef2780d00679e83f172ac343bc828aebbe2a5475fbe2daf76ff'

when I ollama run mixtral:8x22b

My GPU: A6000

Author
Owner

@bozo32 commented on GitHub (Apr 20, 2024):

Getting a similar error on an A100 (80 GB) with 120 GB RAM:
```
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.77 MiB
llm_load_tensors: offloading 47 repeating layers to GPU
llm_load_tensors: offloaded 47/57 layers to GPU
llm_load_tensors: CPU buffer size = 18753.40 MiB
llm_load_tensors: CUDA0 buffer size = 79522.36 MiB
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 72.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 376.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.15 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1766.75 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1852573696
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model
'/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'
{"function":"load_model","level":"ERR","line":410,"model":"/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52","msg":"unable to load model","tid":"139830607912960","timestamp":1713646941}
time=2024-04-20T23:02:21.493+02:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runnerprocess no longer running: 1 error:failed to create context with model '/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'"
[GIN] 2024/04/20 - 23:02:21 | 500 | 4m43s | 127.0.0.1 | POST "/api/chat"
Error: llama runner process no longer running: 1 error:failed to create context with model '/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'
```

Running the 0.1.32 binary.
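
The log above is telling: the model weights load fine (79 GiB already on CUDA0), and the failure only happens afterwards, when llama.cpp tries to reserve a ~1.8 GiB compute buffer on a nearly full device. Not confirmed as a fix in this thread, but a common mitigation for this pattern is to leave VRAM headroom by shrinking the context window or offloading fewer layers via an Ollama Modelfile; the parameter values below are illustrative assumptions, not tested on this hardware:

```
# Hypothetical Modelfile: a lower-memory variant of mixtral:8x22b.
FROM mixtral:8x22b

# Smaller context window -> smaller KV cache and compute buffers
PARAMETER num_ctx 1024

# Offload fewer layers to the GPU, leaving VRAM free for compute buffers
PARAMETER num_gpu 40
```

Then build and run it with `ollama create mixtral-lowmem -f Modelfile` followed by `ollama run mixtral-lowmem`.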


@subhashdasyam commented on GitHub (Apr 23, 2024):

@jmorganca, any update on this issue?


@Hakim3i commented on GitHub (Apr 27, 2024):

I have the same issue with a 7900 XTX.


@abluejay-piyo commented on GitHub (May 1, 2024):

Error: llama runner process no longer running: 1 error:failed to create context with model '/root/.ollama/models/blobs/sha256-7ec0c94a95cafef2780d00679e83f172ac343bc828aebbe2a5475fbe2daf76ff'


@amamrnaf commented on GitHub (May 7, 2024):

![image](https://github.com/ollama/ollama/assets/119628666/3f999e3d-8837-47b5-818f-88d74af8c9da)
I'm having the same error.


Reference: github-starred/ollama#64305