[GH-ISSUE #3232] CUDA error: out of memory when use gemma model #64028

Closed
opened 2026-05-03 15:54:27 -05:00 by GiteaMirror · 4 comments
Owner

Originally created by @ycyy on GitHub (Mar 19, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/3232

Originally assigned to: @mxyng on GitHub.

What is the issue?

import ollama  # official Ollama Python client

model_name = 'gemma:7b'
messages = []
system_message = {
    'role': 'system',
    'content': 'XXXX'
}
user_message = {
    'role': 'user',
    'content': 'XXXX'
}
messages.append(system_message)
messages.append(user_message)
stream = ollama.chat(
    model=model_name,
    messages=messages,
    stream=True
)

When I run this with Gemma, it fails with CUDA error: out of memory. The log is here:

ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from D:\Ollama\models\blobs\sha256-456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7aad9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma
llama_model_loader: - kv   1:                               general.name str              = gemma-7b-it
llama_model_loader: - kv   2:                       gemma.context_length u32              = 8192
llama_model_loader: - kv   3:                     gemma.embedding_length u32              = 3072
llama_model_loader: - kv   4:                          gemma.block_count u32              = 28
llama_model_loader: - kv   5:                  gemma.feed_forward_length u32              = 24576
llama_model_loader: - kv   6:                 gemma.attention.head_count u32              = 16
llama_model_loader: - kv   7:              gemma.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     gemma.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                 gemma.attention.key_length u32              = 256
llama_model_loader: - kv  10:               gemma.attention.value_length u32              = 256
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,256000]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  18:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  19:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  20:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - kv  23:                          general.file_type u32              = 2
llama_model_loader: - type  f32:   57 tensors
llama_model_loader: - type q4_0:  196 tensors
llama_model_loader: - type q8_0:    1 tensors
llm_load_vocab: mismatch in special tokens definition ( 416/256000 vs 260/256000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 28
llm_load_print_meta: n_rot            = 192
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 24576
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 8.54 B
llm_load_print_meta: model size       = 4.84 GiB (4.87 BPW) 
llm_load_print_meta: general.name     = gemma-7b-it
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 227 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.19 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =   796.88 MiB
llm_load_tensors:      CUDA0 buffer size =  4955.54 MiB
...........................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   896.00 MiB
llama_new_context_with_model: KV self size  =  896.00 MiB, K (f16):  448.00 MiB, V (f16):  448.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =    11.02 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   506.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     6.00 MiB
llama_new_context_with_model: graph splits (measure): 2
{"function":"initialize","level":"INFO","line":440,"msg":"initializing slots","n_slots":1,"tid":"20160","timestamp":1710811450}
{"function":"initialize","level":"INFO","line":452,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"20160","timestamp":1710811450}
time=2024-03-19T09:24:10.303+08:00 level=INFO source=dyn_ext_server.go:162 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1590,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"29364","timestamp":1710811450}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1828,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":63,"slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
{"function":"update_slots","level":"INFO","line":1852,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"29364","timestamp":1710811450}
CUDA error: out of memory
  current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:8658
  cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"

Windows 11, RTX 3070 Laptop GPU. Does anyone know whether this is a model error or something else?
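As a sketch of a possible workaround while this is unresolved (the `num_ctx` and `num_gpu` values below are illustrative assumptions, not tuned for this machine, and `build_chat_kwargs` is a hypothetical helper, not part of the original code), memory-related llama.cpp settings can be passed to `ollama.chat` through `options`:

```python
def build_chat_kwargs(model_name, messages, num_ctx=1024, num_gpu=20):
    """Assemble keyword arguments for ollama.chat with memory-saving options:
    a smaller context window (smaller KV cache) and fewer layers offloaded
    to VRAM."""
    return {
        'model': model_name,
        'messages': messages,
        'stream': True,
        'options': {'num_ctx': num_ctx, 'num_gpu': num_gpu},
    }

kwargs = build_chat_kwargs('gemma:7b', [{'role': 'user', 'content': 'hello'}])
# stream = ollama.chat(**kwargs)  # requires `import ollama` and a running server
```

Shrinking `num_ctx` reduces the KV cache (896 MiB at n_ctx = 2048 in the log above), and lowering `num_gpu` keeps more layers on the CPU at the cost of speed.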

What did you expect to see?

user_message = {
    'role': 'user',
    'content': 'XXXX'
}
messages.append(user_message)
stream = ollama.chat(
    model=model_name,
    messages=messages,
    stream=True
)

If I call it like this instead, it works. Also, other models (dolphin-mistral:latest, qwen:7b) don't have this problem.
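The difference between the failing call and the working call above is whether a system message is included in the payload. A minimal sketch of the two variants (contents are placeholders, and `make_messages` is a hypothetical helper, not part of the original code):

```python
def make_messages(user_content, system_content=None):
    # Build the chat payload; including a system message is what reportedly
    # triggers the OOM with gemma, while a user-only payload works.
    messages = []
    if system_content is not None:
        messages.append({'role': 'system', 'content': system_content})
    messages.append({'role': 'user', 'content': user_content})
    return messages

failing = make_messages('XXXX', system_content='XXXX')  # OOMs with gemma
working = make_messages('XXXX')                         # works
```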

Steps to reproduce

No response

Are there any recent changes that introduced the issue?

No response

OS

Windows

Architecture

x86

Platform

No response

Ollama version

0.1.29

GPU

Nvidia

GPU info

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.99       Driver Version: 527.99       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0  On |                  N/A |
| N/A   45C    P0    32W / 115W |    701MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5916    C+G   ...y\ShellExperienceHost.exe    N/A      |
|    0   N/A  N/A      7564    C+G   C:\Windows\explorer.exe         N/A      |
|    0   N/A  N/A      8684    C+G   ...n1h2txyewy\SearchHost.exe    N/A      |
|    0   N/A  N/A      8708    C+G   ...artMenuExperienceHost.exe    N/A      |
|    0   N/A  N/A      9864    C+G   ...8bbwe\Notepad\Notepad.exe    N/A      |
|    0   N/A  N/A     10100    C+G   ...2txyewy\TextInputHost.exe    N/A      |
|    0   N/A  N/A     11228    C+G   ...cw5n1h2txyewy\LockApp.exe    N/A      |
|    0   N/A  N/A     16052    C+G   ...cal\Obsidian\Obsidian.exe    N/A      |
|    0   N/A  N/A     19716    C+G   ...2gh52qy24etm\Nahimic3.exe    N/A      |
|    0   N/A  N/A     21696    C+G   ...l-0.15.0\WeaselServer.exe    N/A      |
|    0   N/A  N/A     22360    C+G   ...8bbwe\WindowsTerminal.exe    N/A      |
|    0   N/A  N/A     22876    C+G   ...lPanel\SystemSettings.exe    N/A      |
|    0   N/A  N/A     23712    C+G   ...d\runtime\WeChatAppEx.exe    N/A      |
|    0   N/A  N/A     26532    C+G   E:\VSCode\Code.exe              N/A      |
|    0   N/A  N/A     28660    C+G   ...me\Application\chrome.exe    N/A      |
|    0   N/A  N/A     30704    C+G   ...ge\Application\msedge.exe    N/A      |
+-----------------------------------------------------------------------------+

CPU

Intel

Other software

No response

GiteaMirror added the bug and nvidia labels 2026-05-03 15:54:27 -05:00

@OPDEV001 commented on GitHub (Apr 6, 2024):

Hello,

I tested gemma:7b on my machine (not a laptop) with CPU only, and it works. I can tell you gemma:7b does not use much of the computer's resources.

Thanks,


@ycyy commented on GitHub (Apr 6, 2024):

> Hello,
>
> I tested gemma:7b on my machine (not a laptop) with CPU only, and it works. I can tell you gemma:7b does not use much of the computer's resources.
>
> Thanks,

I have also tested it, and it runs normally in most cases. The problem only occurs when I run it according to the prompt format I mentioned above. I'm not sure if it's a format issue. Thank you!


@jmorganca commented on GitHub (Apr 17, 2024):

Hi there, this should be improved as of 0.1.32 - please let me know if that's not the case!


@ycyy commented on GitHub (Apr 18, 2024):

> Hi there, this should be improved as of 0.1.32 - please let me know if that's not the case!

Thank you, I have tested it on version 0.1.32 and everything works fine.


Reference: github-starred/ollama#64028