[GH-ISSUE #2889] Windows CUDA OOM GTX 1650 switching models between mistral and gemma #48277

Closed
opened 2026-04-28 07:34:07 -05:00 by GiteaMirror · 6 comments

Originally created by @qianjun1985 on GitHub (Mar 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2889

Originally assigned to: @mxyng on GitHub.

When I use an AI translator program that can use Ollama to load local LLMs, at first it worked well with one model (mistral), but after I downloaded another, gemma 7b, both models failed to work. The UI of the translator program shows the following error:

Failed to call API, error sending request for url (http://127.0.0.1:11434/v1/chat/completions), error trying to connect, TCP connect error or Error 10061
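Error 10061 is the Windows "connection refused" code: nothing was listening on 127.0.0.1:11434 at that moment, which points at the Ollama server process itself rather than at the translator. A minimal sketch that reproduces the translator's request (assuming the OpenAI-compatible `/v1/chat/completions` endpoint from the error message and a locally pulled `mistral` model) can help tell a crashed server apart from a model-side error:

```go
// Minimal sketch: reproduce the translator's request against the local
// Ollama server. The endpoint comes from the error message above; the
// model name "mistral" is an assumption based on this issue.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := []byte(`{"model": "mistral", "messages": [{"role": "user", "content": "Say hello."}]}`)

	resp, err := http.Post("http://127.0.0.1:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		// A "connection refused" (WSAECONNREFUSED / 10061) here means the
		// server process is not listening, matching the translator's error.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```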

GiteaMirror added the bug label 2026-04-28 07:34:07 -05:00

@dhiltgen commented on GitHub (Mar 6, 2024):

Can you share the server log file so we can see why it crashed?


@qianjun1985 commented on GitHub (Mar 7, 2024):

> Can you share the server log file so we can see why it crashed?

I tried again and found that if I set the default model to Mistral after restarting the translator program, everything returns to normal; but if I then switch to Gemma 7b (which actually has about 9B parameters), the error occurs again and both models stop working. I now suspect this happens because my RAM, only 8 GB, is not enough to run Gemma 7b/9b.

The app.log file of ollama repeatedly shows two messages:
Level=WARN source=server.go:113 msg="server crash 1 -exit code 3221226505 -respawning"
Level=ERROR source=server.go:116 msg="failed to restart server exec: already started"
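For reference, exit code 3221226505 is the decimal form of the Windows status 0xC0000409 (STATUS_STACK_BUFFER_OVERRUN, also produced by fail-fast aborts), meaning the runner subprocess was terminated abnormally; this plausibly corresponds to the GGML_ASSERT abort shown at the end of the server log below. A one-liner to confirm the conversion:

```go
// Convert the exit code reported in app.log to its Windows NTSTATUS form.
package main

import "fmt"

func main() {
	fmt.Printf("%#x\n", uint32(3221226505)) // 0xc0000409 = STATUS_STACK_BUFFER_OVERRUN
}
```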

As a beginner on GitHub, I don't know how to upload a log file from my smartphone.

I would appreciate it if you could take a look at the following content, copied from the server.log file, at your convenience:

time=2024-03-07T23:53:29.097+08:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-07T23:53:29.097+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library nvml.dll"
time=2024-03-07T23:53:29.245+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [c:\Windows\System32\nvml.dll C:\Windows\system32\nvml.dll C:\WINDOWS\system32\nvml.dll]"
time=2024-03-07T23:53:29.264+08:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
time=2024-03-07T23:53:29.264+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:53:29.297+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-07T23:53:29.307+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:53:29.315+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-07T23:53:29.315+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:53:29.315+08:00 level=INFO source=dyn_ext_server.go:385 msg="Updating PATH to C:\temp\ollama708038235\cuda_v11.3;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\dotnet\;C:\Program Files\Git\cmd;C:\Program Files\Cloudflare\Cloudflare WARP\;D:\Program Files\Python\;C:\Users\Qian Jun\AppData\Local\Microsoft\WindowsApps;C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.2\bin;;C:\Users\Qian Jun\AppData\Local\Pandoc\;C:\Program Files\Git\bin;D:\Program Files\Python\Scripts;C:\ProgramData\Anaconda3;;C:\Users\Qian Jun\AppData\Local\Programs\Ollama"
time=2024-03-07T23:53:29.390+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\temp\ollama708038235\cuda_v11.3\ext_server.dll"
time=2024-03-07T23:53:29.390+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1650, compute capability 7.5, VMM: yes
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from C:\Users\Qian Jun\.ollama\models\blobs\sha256-e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 20 repeating layers to GPU
llm_load_tensors: offloaded 20/33 layers to GPU
llm_load_tensors: CPU buffer size = 3917.87 MiB
llm_load_tensors: CUDA0 buffer size = 2340.62 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 96.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 160.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 13.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 168.00 MiB
llama_new_context_with_model: graph splits (measure): 3
{"function":"initialize","level":"INFO","line":433,"msg":"initializing slots","n_slots":1,"tid":"11500","timestamp":1709826819}
{"function":"initialize","level":"INFO","line":445,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"11500","timestamp":1709826819}
time=2024-03-07T23:53:39.456+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1565,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"11260","timestamp":1709826819}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"11260","timestamp":1709826819}
{"function":"update_slots","level":"INFO","line":1801,"msg":"slot progression","n_past":0,"n_prompt_tokens_processed":202,"slot_id":0,"task_id":0,"tid":"11260","timestamp":1709826819}
{"function":"update_slots","level":"INFO","line":1825,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"11260","timestamp":1709826819}
{"function":"print_timings","level":"INFO","line":264,"msg":"prompt eval time = 9948.85 ms / 202 tokens ( 49.25 ms per token, 20.30 tokens per second)","n_prompt_tokens_processed":202,"n_tokens_second":20.30385217348214,"slot_id":0,"t_prompt_processing":9948.851,"t_token":49.25173762376238,"task_id":0,"tid":"11260","timestamp":1709826834}
{"function":"print_timings","level":"INFO","line":278,"msg":"generation eval time = 4628.68 ms / 27 runs ( 171.43 ms per token, 5.83 tokens per second)","n_decoded":27,"n_tokens_second":5.83320154618729,"slot_id":0,"t_token":171.43244444444446,"t_token_generation":4628.676,"task_id":0,"tid":"11260","timestamp":1709826834}
{"function":"print_timings","level":"INFO","line":287,"msg":" total time = 14577.53 ms","slot_id":0,"t_prompt_processing":9948.851,"t_token_generation":4628.676,"t_total":14577.527000000002,"task_id":0,"tid":"11260","timestamp":1709826834}
{"function":"update_slots","level":"INFO","line":1635,"msg":"slot released","n_cache_tokens":229,"n_ctx":2048,"n_past":228,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"11260","timestamp":1709826834,"truncated":false}
[GIN] 2024/03/07 - 23:53:54 | 200 | 25.7154015s | 127.0.0.1 | POST "/v1/chat/completions"
time=2024-03-07T23:56:09.422+08:00 level=INFO source=routes.go:78 msg="changing loaded model"
time=2024-03-07T23:56:16.650+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:56:17.861+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-07T23:56:18.164+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:56:18.172+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 7.5"
time=2024-03-07T23:56:18.265+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-07T23:56:18.458+08:00 level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: C:\temp\ollama708038235\cuda_v11.3\ext_server.dll"
time=2024-03-07T23:56:18.458+08:00 level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server"
llama_model_loader: loaded meta data with 24 key-value pairs and 254 tensors from C:\Users\Qian Jun\.ollama\models\blobs\sha256-456402914e838a953e0cf80caa6adbe75383d9e63584a964f504a7bbb8f7aad9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-7b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.embedding_length u32 = 3072
llama_model_loader: - kv 4: gemma.block_count u32 = 28
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 24576
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 16
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 16
llama_model_loader: - kv 8: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 21: tokenizer.chat_template str = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - type f32: 57 tensors
llama_model_loader: - type q4_0: 196 tensors
llama_model_loader: - type q8_0: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 416/256000 vs 260/256000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = gemma
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 256000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_rot = 192
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 24576
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.54 B
llm_load_print_meta: model size = 4.84 GiB (4.87 BPW)
llm_load_print_meta: general.name = gemma-7b-it
llm_load_print_meta: BOS token = 2 '<bos>'
llm_load_print_meta: EOS token = 1 '<eos>'
llm_load_print_meta: UNK token = 3 '<unk>'
llm_load_print_meta: PAD token = 0 '<pad>'
llm_load_print_meta: LF token = 227 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.19 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloaded 12/29 layers to GPU
llm_load_tensors: CPU buffer size = 4955.54 MiB
llm_load_tensors: CUDA0 buffer size = 1782.28 MiB
...........................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1650, compute capability 7.5, VMM: yes
llama_kv_cache_init: CUDA_Host KV buffer size = 512.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 384.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 11.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 112.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 518.00 MiB
llama_new_context_with_model: graph splits (measure): 3
{"function":"initialize","level":"INFO","line":433,"msg":"initializing slots","n_slots":1,"tid":"12600","timestamp":1709827111}
{"function":"initialize","level":"INFO","line":445,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"12600","timestamp":1709827111}
time=2024-03-07T23:58:31.125+08:00 level=INFO source=dyn_ext_server.go:161 msg="Starting llama main loop"
{"function":"update_slots","level":"INFO","line":1565,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"10216","timestamp":1709827111}
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"10216","timestamp":1709827111}
{"function":"update_slots","level":"INFO","line":1801,"msg":"slot progression","n_past":0,"n_prompt_tokens_processed":207,"slot_id":0,"task_id":0,"tid":"10216","timestamp":1709827111}
{"function":"update_slots","level":"INFO","line":1825,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"10216","timestamp":1709827111}
CUDA error: out of memory
current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:8601
cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jmorg\git\ollama\llm\llama.cpp\ggml-cuda.cu:256: !"CUDA error"
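Taken together, the two loads above make the failure mode plausible: a GTX 1650 has 4 GiB of VRAM, the Gemma load reserves roughly 2.3 GiB on CUDA0 before the runtime pool grows, and the earlier Mistral runner was holding roughly 2.7 GiB on the same card. If the old runner has not fully released its memory by the time the model switch happens, the pool allocation in ggml_cuda_pool_malloc_vmm has nowhere to go. A back-of-the-envelope check using the figures from this log (the 4 GiB card size and the "not yet released" assumption are not from the log itself):

```go
// Rough VRAM accounting using the CUDA0 buffer sizes printed in this log.
// The 4096 MiB card size and the assumption that the previous Mistral
// runner had not yet freed its buffers are not taken from the log.
package main

import "fmt"

func main() {
	const gtx1650VRAM = 4096.0 // MiB on a standard GTX 1650

	mistral := 2340.62 + 160.00 + 164.00 // weights + KV cache + compute buffer
	gemma := 1782.28 + 384.00 + 112.00   // weights + KV cache + compute buffer

	fmt.Printf("Mistral resident on CUDA0: %.0f MiB\n", mistral)
	fmt.Printf("Gemma resident on CUDA0:   %.0f MiB\n", gemma)
	fmt.Printf("Both at once:              %.0f MiB vs %.0f MiB of VRAM\n",
		mistral+gemma, gtx1650VRAM)
	// ~4943 MiB > 4096 MiB: if the old runner still holds its buffers during
	// the switch, any further CUDA pool allocation fails with out of memory.
}
```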


@dhiltgen commented on GitHub (Mar 7, 2024):

Unfortunately it looks like our memory prediction algorithm didn't work correctly for this setup, so we attempted to load too many layers onto the GPU and it ran out of VRAM. We're continuing to improve our calculations to avoid this.

In the next release (0.1.29) we'll be adding a new setting, `OLLAMA_MAX_VRAM=<bytes>`, that lets you set a lower VRAM limit to work around this type of crash until we get the prediction logic fixed. You could start with 7G (`OLLAMA_MAX_VRAM=7516192768`) and experiment until you find a setting that loads as many layers as possible without hitting the OOM crash.
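A sketch of one way to apply this on Windows, under the assumption that the variable is simply read from the server process's environment at startup (how 0.1.29 actually interprets `OLLAMA_MAX_VRAM` should be checked against that release): launch `ollama serve` with the variable set, then lower the value until model switching stops crashing.

```go
// Illustrative launcher (not part of Ollama): start `ollama serve` with
// OLLAMA_MAX_VRAM set in its environment. The value is in bytes; 7516192768
// is the 7G starting point suggested above, to be lowered experimentally.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("ollama", "serve")
	cmd.Env = append(os.Environ(), "OLLAMA_MAX_VRAM=7516192768")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

On a 4 GiB card the useful range will be well below 7 GiB; the point is to pick a cap that leaves headroom for the runtime pool allocations that triggered the crash above.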


@qianjun1985 commented on GitHub (Mar 8, 2024):

> Unfortunately it looks like our memory prediction algorithm didn't work correctly for this setup, so we attempted to load too many layers onto the GPU and it ran out of VRAM. We're continuing to improve our calculations to avoid this.
>
> In the next release (0.1.29) we'll be adding a new setting, `OLLAMA_MAX_VRAM=<bytes>`, that lets you set a lower VRAM limit to work around this type of crash until we get the prediction logic fixed. You could start with 7G (`OLLAMA_MAX_VRAM=7516192768`) and experiment until you find a setting that loads as many layers as possible without hitting the OOM crash.

Thanks for the information.


@dhiltgen commented on GitHub (May 5, 2024):

In addition to fixing bugs in our prediction logic, one of the fixes that went into 0.1.33 was waiting for prior subprocesses to exit before reevaluating the available VRAM, which might have a bearing on this issue of switching between models. Can you try your scenario again with 0.1.33 and see if it has been resolved?


@pdevine commented on GitHub (May 18, 2024):

Going to go ahead and close this. @qianjun1985 feel free to respond if it's not working and we can reopen the issue.

Reference: github-starred/ollama#48277