[GH-ISSUE #2706] CUDA error: out of memory with llava:7b-v1.6 when providing an image #1618

Closed
opened 2026-04-12 11:33:01 -05:00 by GiteaMirror · 2 comments
Owner

Originally created by @lucaboulard on GitHub (Feb 23, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2706

Originally assigned to: @mxyng on GitHub.

Hi,
I'm using ollama 0.1.26 to run llava:7b-v1.6 on WSL on Windows (Ubuntu 22.04.3 LTS).
It works fine as long as I use text-only prompts, but as soon as I go multimodal and pass an image as well, ollama crashes with this message:

time=2024-02-23T09:49:45.496+01:00 level=INFO source=dyn_ext_server.go:171 msg="loaded 1 images"
encode_image_with_clip: image embedding created: 576 tokens
encode_image_with_clip: image encoded in  1236.17 ms by CLIP (    2.15 ms per image patch)
CUDA error: out of memory
   current device: 0, in function ggml_cuda_pool_malloc_vmm at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:7991
   cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:244: !"CUDA error"
Aborted  

My laptop has a NVIDIA GeForce MX150.
Can anybody help me understand what is going wrong and how I can fix it?
Is it necessary to install NVIDIA-related libraries, or should just installing ollama suffice?
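For reference, a request along these lines is the kind of multimodal call that triggers it (a sketch only: `photo.jpg` and the prompt are placeholders):

```bash
# Base64-encode a small test image and send it to the local ollama server.
# Any image goes through the same CLIP encode + CUDA allocation path.
IMG=$(base64 -w0 photo.jpg)
curl http://127.0.0.1:11434/api/generate -d "{
  \"model\": \"llava:7b-v1.6\",
  \"prompt\": \"Describe this image.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"
```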

When I start ollama and run llava, these are the logs:

time=2024-02-23T10:01:02.806+01:00 level=INFO source=images.go:710 msg="total blobs: 6"
time=2024-02-23T10:01:02.806+01:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-02-23T10:01:02.807+01:00 level=INFO source=routes.go:1019 msg="Listening on 127.0.0.1:11434 (version 0.1.26)"
time=2024-02-23T10:01:02.807+01:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-02-23T10:01:05.827+01:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cuda_v11 rocm_v6 cpu cpu_avx2 rocm_v5 cpu_avx]"
time=2024-02-23T10:01:05.827+01:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-02-23T10:01:05.827+01:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-02-23T10:01:07.787+01:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/wsl/lib/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvam.inf_amd64_73ddbc5a9852db46/libnvidia-ml.so.1]"
time=2024-02-23T10:01:08.554+01:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-02-23T10:01:08.554+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:08.570+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
[GIN] 2024/02/23 - 10:01:23 | 200 |       112.1µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/02/23 - 10:01:23 | 200 |       657.3µs |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/02/23 - 10:01:23 | 200 |       332.1µs |       127.0.0.1 | POST     "/api/show"
time=2024-02-23T10:01:24.065+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:24.557+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-23T10:01:24.558+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-02-23T10:01:24.559+01:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 6.1"
time=2024-02-23T10:01:24.559+01:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
loading library /tmp/ollama3815001611/cuda_v11/libext_server.so
time=2024-02-23T10:01:24.583+01:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /tmp/ollama3815001611/cuda_v11/libext_server.so"
time=2024-02-23T10:01:24.583+01:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   yes
ggml_init_cublas: CUDA_USE_TENSOR_CORES: no
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce MX150, compute capability 6.1, VMM: yes
clip_model_load: model name:   openai/clip-vit-large-patch14-336
clip_model_load: description:  image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment:    32
clip_model_load: n_tensors:    377
clip_model_load: n_kv:         19
clip_model_load: ftype:        f16
clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from /home/luca/.ollama/models/blobs/sha256:72d6f08a42f656d36b356dbe0920675899a99ce21192fd66266fb7d82ed07539
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv   0:                       general.architecture str              = clip
clip_model_load: - kv   1:                      clip.has_text_encoder bool             = false
clip_model_load: - kv   2:                    clip.has_vision_encoder bool             = true
clip_model_load: - kv   3:                   clip.has_llava_projector bool             = true
clip_model_load: - kv   4:                          general.file_type u32              = 1
clip_model_load: - kv   5:                               general.name str              = openai/clip-vit-large-patch14-336
clip_model_load: - kv   6:                        general.description str              = image encoder for LLaVA
clip_model_load: - kv   7:                        clip.projector_type str              = mlp
clip_model_load: - kv   8:                     clip.vision.image_size u32              = 336
clip_model_load: - kv   9:                     clip.vision.patch_size u32              = 14
clip_model_load: - kv  10:               clip.vision.embedding_length u32              = 1024
clip_model_load: - kv  11:            clip.vision.feed_forward_length u32              = 4096
clip_model_load: - kv  12:                 clip.vision.projection_dim u32              = 768
clip_model_load: - kv  13:           clip.vision.attention.head_count u32              = 16
clip_model_load: - kv  14:   clip.vision.attention.layer_norm_epsilon f32              = 0.000010
clip_model_load: - kv  15:                    clip.vision.block_count u32              = 23
clip_model_load: - kv  16:                     clip.vision.image_mean arr[f32,3]       = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv  17:                      clip.vision.image_std arr[f32,3]       = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv  18:                              clip.use_gelu bool             = false
clip_model_load: - type  f32:  235 tensors
clip_model_load: - type  f16:  142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder:   0
clip_model_load: vision_encoder: 1
clip_model_load: llava_projector:  1
clip_model_load: model size:     595.49 MB
clip_model_load: metadata size:  0.14 MB
clip_model_load: params backend buffer size =  595.49 MB (377 tensors)
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
clip_model_load: compute allocated memory: 32.89 MB
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /home/luca/.ollama/models/blobs/sha256:170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = liuhaotian
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 2
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  20:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  21:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  22:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  23:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name     = liuhaotian
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 6 repeating layers to GPU
llm_load_tensors: offloaded 6/33 layers to GPU
llm_load_tensors:        CPU buffer size =  3917.87 MiB
llm_load_tensors:      CUDA0 buffer size =   702.19 MiB

llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =   208.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =    48.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =    13.02 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.01 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =   168.00 MiB
llama_new_context_with_model: graph splits (measure): 5
time=2024-02-23T10:01:27.991+01:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
[GIN] 2024/02/23 - 10:01:27 | 200 |    4.0520283s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/02/23 - 10:06:14 | 200 |        58.9µs |       127.0.0.1 | GET      "/api/version"
GiteaMirror added the bug, nvidia labels 2026-04-12 11:33:01 -05:00
Author
Owner

@dhiltgen commented on GitHub (Mar 11, 2024):

Unfortunately it looks like our memory prediction algorithm didn't work correctly for this setup, so we attempted to load too many layers onto the GPU and it ran out of VRAM. We're continuing to improve our calculations to avoid this.

In the next release (0.1.29) we'll be adding a new setting, `OLLAMA_MAX_VRAM=<bytes>`, that lets you cap the VRAM ollama will use to work around this type of crash until we get the prediction logic fixed. For example, I believe your GPU is a 2G card, so you could start with 1.5G and experiment until you find a setting that loads as many layers as possible without hitting the OOM crash: `OLLAMA_MAX_VRAM=1610612736`.

Be aware though that with such a small amount of VRAM, most layers will have to run on the CPU and you won't see much performance benefit from the GPU.
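As a sketch of what that would look like (assuming you start the server yourself rather than via systemd; 1610612736 bytes = 1.5 × 1024³, i.e. 1.5 GiB):

```bash
# Cap ollama's VRAM usage at 1.5 GiB (1.5 * 1024^3 = 1610612736 bytes)
# when starting the server manually (ollama 0.1.29 or later).
OLLAMA_MAX_VRAM=1610612736 ollama serve

# If ollama runs as a systemd service instead, the variable can be set via
# `sudo systemctl edit ollama` by adding:
#   [Service]
#   Environment="OLLAMA_MAX_VRAM=1610612736"
# followed by `sudo systemctl restart ollama`.
```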

Author
Owner

@dhiltgen commented on GitHub (Jun 1, 2024):

The latest version will correctly flag that this model is too large to fit on a 2G card and fall back to CPU mode instead of trying to do a partial load and hitting OOM.
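On builds prior to this fix, a possible manual workaround (a sketch assuming the documented `num_gpu` request option; model, prompt, and image below are placeholders) is to force CPU-only inference for the request:

```bash
# Ask ollama to offload zero layers to the GPU for this request, keeping
# the whole model on the CPU and avoiding the VRAM allocation entirely.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llava:7b-v1.6",
  "prompt": "Describe this image.",
  "images": ["<base64-encoded image>"],
  "options": { "num_gpu": 0 },
  "stream": false
}'
```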
