[GH-ISSUE #3787] OOM with mixtral 8x22b #2339

Closed
opened 2026-04-12 12:39:39 -05:00 by GiteaMirror · 7 comments
Owner

Originally created by @bozo32 on GitHub (Apr 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3787

What is the issue?

OOM with Mixtral on an A100 80GB: Ollama offloads 47/57 layers onto the GPU and then fails with out-of-memory.
Running from the official binary.
I just re-downloaded the model and re-ran it, and still got the same issue.
No problems with models that fit entirely into VRAM.

(base) tamas002@gpun201:~/ai$ ./ollama run mixtral:8x22b-instruct-v0.1-q5_K_M
[GIN] 2024/04/20 - 23:17:40 | 200 | 13.807µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/20 - 23:17:40 | 200 | 1.937606ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/20 - 23:17:40 | 200 | 2.0645ms | 127.0.0.1 | POST "/api/show"
⠙ time=2024-04-20T23:17:40.214+02:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-20T23:17:40.214+02:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-20T23:17:40.215+02:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3504175971/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-20T23:17:40.219+02:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-20T23:17:40.219+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠹ time=2024-04-20T23:17:40.334+02:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
time=2024-04-20T23:17:40.397+02:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-20T23:17:40.397+02:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-20T23:17:40.398+02:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama3504175971/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-20T23:17:40.400+02:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-20T23:17:40.400+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠸ time=2024-04-20T23:17:40.510+02:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.0"
⠼ time=2024-04-20T23:17:40.573+02:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=47 layers=47 required="96360.4 MiB" used="80509.6 MiB" available="80627.6 MiB" kv="448.0 MiB" fulloffload="244.0 MiB" partialoffload="256.3 MiB"
time=2024-04-20T23:17:40.573+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-20T23:17:40.573+02:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama3504175971/runners/cuda_v11/ollama_llama_server --model /home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 47 --port 34211"
time=2024-04-20T23:17:40.573+02:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
⠇ {"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"140660910362624","timestamp":1713647860}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"140660910362624","timestamp":1713647860}
{"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":16,"n_threads_batch":-1,"system_info":"AVX =1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0| FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"140660910362624","timestamp":1713647860,"total_threads":32}
llama_model_loader: loaded meta data with 26 key-value pairs and 563 tensors from /home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.vocab_size u32 = 32768
llama_model_loader: - kv 3: llama.context_length u32 = 65536
llama_model_loader: - kv 4: llama.embedding_length u32 = 6144
llama_model_loader: - kv 5: llama.block_count u32 = 56
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 16384
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 48
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.expert_count u32 = 8
llama_model_loader: - kv 11: llama.expert_used_count u32 = 2
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 14: general.file_type u32 = 17
llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[...
⠏ llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,32768] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,32768] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: tokenizer.chat_template str = {{bos_token}}{% for message in messag...
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 113 tensors
llama_model_loader: - type f16: 56 tensors
llama_model_loader: - type q8_0: 112 tensors
llama_model_loader: - type q5_K: 253 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: mismatch in special tokens definition ( 1027/32768 vs 259/32768 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32768
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 65536
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_head = 48
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 56
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 16384
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 65536
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8x22B
llm_load_print_meta: model ftype = Q5_K - Medium
llm_load_print_meta: model params = 140.63 B
llm_load_print_meta: model size = 93.11 GiB (5.69 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
⠋ llm_load_tensors: ggml ctx size = 0.77 MiB
⠸ llm_load_tensors: offloading 47 repeating layers to GPU
llm_load_tensors: offloaded 47/57 layers to GPU
llm_load_tensors: CPU buffer size = 18753.40 MiB
llm_load_tensors: CUDA0 buffer size = 79522.36 MiB
⠹ .
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 72.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 376.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.15 MiB
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1766.75 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 1852573696
llama_new_context_with_model: failed to allocate compute buffers
llama_init_from_gpt_params: error: failed to create context with model '/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'
⠴ {"function":"load_model","level":"ERR","line":410,"model":"/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52","msg":"unable to load model","tid":"140660910362624","timestamp":1713647871}
⠧ time=2024-04-20T23:17:51.849+02:00 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 1 error: failed to create context with model '/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'"
[GIN] 2024/04/20 - 23:17:51 | 500 | 11.732787815s | 127.0.0.1 | POST "/api/chat"
Error: llama runner process no longer running: 1 error: failed to create context with model '/home/WUR/tamas002/.ollama/models/blobs/sha256-630983e98a0c92b38850c213cb1d4a8a724635ccabf84fdf70f3fad6a862ce52'
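Reading the numbers in the log, the failure looks like simple arithmetic (my reading of the output, not an official diagnosis): the weights and KV cache already placed on the card leave no headroom for the compute buffer:

79522.36 MiB (CUDA0 weights) + 376.00 MiB (CUDA0 KV cache) + 1766.75 MiB (compute buffer) ≈ 81665 MiB, which exceeds the 80627.6 MiB reported available.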

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.32 @ 20 April

GiteaMirror added the bug, memory, nvidia labels 2026-04-12 12:39:39 -05:00
Author
Owner

@jmorganca commented on GitHub (Apr 20, 2024):

Hi, I'm sorry this happened. More improvements to memory allocation are coming soon. In the meantime you can override the GPU allocation using num_gpu in the API or via a custom model:

# Modelfile
FROM mixtral:8x22b
PARAMETER NUM_GPU 46 # load fewer layers

Then create and run the model:

ollama create mixtral-fix -f Modelfile
ollama run mixtral-fix
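
For the API route, the same override can be passed per request through the options field. A minimal sketch, assuming a local server on the default port (model name taken from this report, 46 layers as above):

```
curl http://localhost:11434/api/generate -d '{
  "model": "mixtral:8x22b-instruct-v0.1-q5_K_M",
  "prompt": "hi there",
  "options": { "num_gpu": 46 }
}'
```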
Author
Owner

@wwoodsTM commented on GitHub (Apr 23, 2024):

Just a quick note: for me, I had to change the parameter to lowercase (num_gpu) for it to work in the Modelfile.
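
Putting the two comments together, a corrected Modelfile would presumably look like:

```
# Modelfile
FROM mixtral:8x22b
# load fewer layers onto the GPU
PARAMETER num_gpu 46
```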

Author
Owner

@bozo32 commented on GitHub (Apr 23, 2024):

Thank you

I’ll try again.

Author
Owner

@Readon commented on GitHub (Apr 29, 2024):

Is it possible to have ollama determine the VRAM it will actually use? Maybe one way would be to retry with a smaller allocation rather than terminating the program directly.
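
A rough sketch of that retry idea (illustrative only: /api/generate and options.num_gpu are the real API, but the loop, model name, and layer counts are assumptions):

```
# Step num_gpu down until the model loads; assumes `ollama serve` on the default port.
for n in 47 46 45 44; do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:11434/api/generate \
    -d "{\"model\": \"mixtral:8x22b-instruct-v0.1-q5_K_M\", \"prompt\": \"hi\", \"stream\": false, \"options\": {\"num_gpu\": $n}}")
  [ "$code" = "200" ] && { echo "loaded with num_gpu=$n"; break; }
done
```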

Author
Owner

@dhiltgen commented on GitHub (May 2, 2024):

We've recently fixed VRAM prediction calculations for mixtral. Please give the latest RC of 0.1.33 a try and it should be fixed.

https://github.com/ollama/ollama/releases

Author
Owner

@Readon commented on GitHub (May 3, 2024):

> We've recently fixed VRAM prediction calculations for mixtral. Please give the latest RC of 0.1.33 a try and it should be fixed.
>
> https://github.com/ollama/ollama/releases

It works really well. One question: since Mixtral is an MoE architecture, would it be possible to load only the activated experts into the GPU instead of the whole large model?

Author
Owner

@pdevine commented on GitHub (May 18, 2024):

OK, I just tested this on an A100 80GB and it's working (albeit slowly!).

$ ollama run mixtral:8x22b-instruct-v0.1-q5_K_M
pulling manifest
pulling a83a0ad30b31... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  99 GB
pulling 43070e2d4e53... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB
pulling c43332387573... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏   67 B
pulling ed11eda7790d... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏   30 B
pulling b023bd629227... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████▏  487 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> hi there
 Hello! How can I help you today? Is there something specific on your mind or a question you'd like to ask me? I'm here to provide information and answer any
questions you have.

The ollama ps output:

$ ollama ps
NAME                              	ID          	SIZE  	PROCESSOR      	UNTIL
mixtral:8x22b-instruct-v0.1-q5_K_M	7db9ceb11c70	102 GB	18%/82% CPU/GPU	4 minutes from now

@Readon I think one of the issues with doing that would be slow load times for the MoE layers during every prompt, so I'm not sure how performant that would be. It's an interesting idea, but I think maybe out of scope for this issue.
