[GH-ISSUE #6715] Windows BSOD with ollama and deepseek #29990

Closed
opened 2026-04-22 09:23:36 -05:00 by GiteaMirror · 18 comments
Owner

Originally created by @pitziro on GitHub (Sep 9, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6715

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Ollama kept crashing the GPU and causing screen flickers on my PC while working with VS Code and the Continue extension.
It works fine for about an hour, then throws a message about not connecting properly to the port, then resets the GPU config, then BSODs.

When I disable Ollama and Continue, VS Code works fine. Is there any other extension to try?

PC:
Ryzen 5900X
Radeon 6800XT - 24.08.01 drivers
Ollama - latest
Continue - latest

OS

Windows

GPU

AMD

CPU

AMD

Ollama version

0.3.9

GiteaMirror added the bug, needs more info, amd, windows labels 2026-04-22 09:23:36 -05:00

@rick-github commented on GitHub (Sep 9, 2024):

deepseek has always been a problematic family of models; [search for deepseek](https://github.com/ollama/ollama/issues?q=is%3Aissue+deepseek) in the issue tracker and you will see a whole bunch of tickets. A [possible workaround](https://github.com/ollama/ollama/issues/6199#issuecomment-2295952982) was mentioned a little while ago, so that's something you could try. Failing that, there are plenty of other models that are [code-aware](https://ollama.com/search?c=code); you could pull some of those and try them out.
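Trying one of those alternatives amounts to a pull and a run; the model name below is only an example from the library, not a specific recommendation from this thread:

```shell
# Pull a code-aware model from the Ollama library and try it interactively
# (model name is illustrative; pick any from https://ollama.com/search?c=code)
ollama pull codellama:7b
ollama run codellama:7b "write a function that reverses a string in Go"
```

Both commands assume a running Ollama server on the default port.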


@jmorganca commented on GitHub (Sep 10, 2024):

Sorry this isn't more stable, we'll work on improving it. @rick-github summarized this well (thanks!!)


@ghost commented on GitHub (Sep 20, 2024):

I just wanted to toss in that qwen2.5-coder:7b is doing the same thing. In fact it's worse.


@rick-github commented on GitHub (Sep 20, 2024):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) would aid in debugging.


@ghost commented on GitHub (Sep 20, 2024):

In theory these are from the qwen2.5 crashes; for deepseek I no longer have any logs. I only seem to have these crashes on "code"-focused models.

Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  17:                          general.file_type u32              = 2
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - kv  28:               general.quantization_version u32              = 2
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - type  f32:   66 tensors
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - type q4_0:  225 tensors
Sep 20 16:35:42 pillar ollama[503765]: llama_model_loader: - type q6_K:    1 tensors
Sep 20 16:35:42 pillar ollama[503765]: time=2024-09-20T16:35:42.376-06:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
Sep 20 16:35:42 pillar ollama[503765]: llm_load_vocab: special tokens cache size = 256
Sep 20 16:35:42 pillar ollama[503765]: llm_load_vocab: token to piece cache size = 0.7999 MB
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: format           = GGUF V3 (latest)
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: arch             = llama
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: vocab type       = BPE
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_vocab          = 128256
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_merges         = 280147
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: vocab_only       = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_ctx_train      = 131072
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_embd           = 4096
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_layer          = 32
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_head           = 32
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_head_kv        = 8
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_rot            = 128
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_swa            = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_embd_head_k    = 128
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_embd_head_v    = 128
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_gqa            = 4
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_embd_k_gqa     = 1024
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_embd_v_gqa     = 1024
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: f_norm_eps       = 0.0e+00
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: f_clamp_kqv      = 0.0e+00
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: f_logit_scale    = 0.0e+00
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_ff             = 14336
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_expert         = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_expert_used    = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: causal attn      = 1
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: pooling type     = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: rope type        = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: rope scaling     = linear
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: freq_base_train  = 500000.0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: freq_scale_train = 1
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: n_ctx_orig_yarn  = 131072
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: rope_finetuned   = unknown
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: ssm_d_conv       = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: ssm_d_inner      = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: ssm_d_state      = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: ssm_dt_rank      = 0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: model type       = 8B
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: model ftype      = Q4_0
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: model params     = 8.03 B
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: model size       = 4.33 GiB (4.64 BPW)
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: general.name     = Meta Llama 3.1 8B Instruct
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: EOS token        = 128009 '<|eot_id|>'
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: LF token         = 128 'Ä'
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
Sep 20 16:35:42 pillar ollama[503765]: llm_load_print_meta: max token length = 256
Sep 20 16:35:42 pillar ollama[503765]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
Sep 20 16:35:42 pillar ollama[503765]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Sep 20 16:35:42 pillar ollama[503765]: ggml_cuda_init: found 2 CUDA devices:
Sep 20 16:35:42 pillar ollama[503765]:   Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes
Sep 20 16:35:42 pillar ollama[503765]:   Device 1: NVIDIA GeForce GTX 1660, compute capability 7.5, VMM: yes
Sep 20 16:35:42 pillar ollama[503765]: llm_load_tensors: ggml ctx size =    0.41 MiB
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors: offloading 32 repeating layers to GPU
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors: offloading non-repeating layers to GPU
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors: offloaded 33/33 layers to GPU
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors:        CPU buffer size =   281.81 MiB
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors:      CUDA0 buffer size =  1989.54 MiB
Sep 20 16:35:43 pillar ollama[503765]: llm_load_tensors:      CUDA1 buffer size =  2166.46 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: n_ctx      = 8192
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: n_batch    = 512
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: n_ubatch   = 512
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: flash_attn = 0
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: freq_base  = 500000.0
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: freq_scale = 1
Sep 20 16:35:45 pillar ollama[503765]: llama_kv_cache_init:      CUDA0 KV buffer size =   544.00 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_kv_cache_init:      CUDA1 KV buffer size =   480.00 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model:  CUDA_Host  output buffer size =     2.02 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model:      CUDA0 compute buffer size =   640.01 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model:      CUDA1 compute buffer size =   640.02 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model:  CUDA_Host compute buffer size =    72.02 MiB
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: graph nodes  = 1030
Sep 20 16:35:45 pillar ollama[503765]: llama_new_context_with_model: graph splits = 3
Sep 20 16:35:45 pillar ollama[671265]: INFO [main] model loaded | tid="128451186053120" timestamp=1726871745
Sep 20 16:35:46 pillar ollama[503765]: time=2024-09-20T16:35:46.136-06:00 level=INFO source=server.go:630 msg="llama runner started in 4.01 seconds"
Sep 20 16:35:46 pillar ollama[503765]: [GIN] 2024/09/20 - 16:35:46 | 200 |  4.255382639s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:36:13 pillar ollama[503765]: [GIN] 2024/09/20 - 16:36:13 | 200 |  5.705392865s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:36:17 pillar ollama[503765]: [GIN] 2024/09/20 - 16:36:17 | 200 |  1.016583332s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:36:23 pillar ollama[503765]: [GIN] 2024/09/20 - 16:36:23 | 200 |  1.116937633s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:36:47 pillar ollama[503765]: [GIN] 2024/09/20 - 16:36:47 | 200 | 14.711946452s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:37:51 pillar ollama[503765]: [GIN] 2024/09/20 - 16:37:51 | 200 |  5.752310231s |       127.0.0.1 | POST     "/api/chat"
Sep 20 16:46:56 pillar ollama[503765]: [GIN] 2024/09/20 - 16:46:56 | 200 |        17.2µs |       127.0.0.1 | HEAD     "/"
Sep 20 16:46:56 pillar ollama[503765]: [GIN] 2024/09/20 - 16:46:56 | 404 |      63.449µs |       127.0.0.1 | POST     "/api/show"
Sep 20 16:46:58 pillar ollama[503765]: time=2024-09-20T16:46:58.809-06:00 level=INFO source=download.go:175 msg="downloading 78b0a71988a5 in 16 294 MB part(s)"
Sep 20 16:48:21 pillar ollama[503765]: [GIN] 2024/09/20 - 16:48:21 | 200 |         1m24s |       127.0.0.1 | POST     "/api/pull"
Sep 20 16:48:26 pillar ollama[503765]: [GIN] 2024/09/20 - 16:48:26 | 200 |       17.85µs |       127.0.0.1 | HEAD     "/"
Sep 20 16:48:26 pillar ollama[503765]: [GIN] 2024/09/20 - 16:48:26 | 500 |     122.689µs |       127.0.0.1 | DELETE   "/api/delete"
Sep 20 16:59:47 pillar ollama[503765]: [GIN] 2024/09/20 - 16:59:47 | 200 |      16.801µs |       127.0.0.1 | HEAD     "/"
Sep 20 16:59:47 pillar ollama[503765]: [GIN] 2024/09/20 - 16:59:47 | 404 |        47.8µs |       127.0.0.1 | POST     "/api/show"
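When skimming a server log like the one above for the failure point, the non-2xx GIN request lines are a quick signal. A small sketch, with three lines copied from this log inlined as sample input (normally you would feed the real log file in instead):

```shell
# Filter an Ollama server log for 4xx/5xx HTTP responses.
# Sample lines are inlined here; in practice: grep -E '...' ollama.log
grep -E '\| (4|5)[0-9]{2} \|' <<'EOF'
Sep 20 16:46:56 pillar ollama[503765]: [GIN] 2024/09/20 - 16:46:56 | 200 |        17.2µs |       127.0.0.1 | HEAD     "/"
Sep 20 16:46:56 pillar ollama[503765]: [GIN] 2024/09/20 - 16:46:56 | 404 |      63.449µs |       127.0.0.1 | POST     "/api/show"
Sep 20 16:48:26 pillar ollama[503765]: [GIN] 2024/09/20 - 16:48:26 | 500 |     122.689µs |       127.0.0.1 | DELETE   "/api/delete"
EOF
```

This prints only the 404 and 500 lines, which is where the `/api/delete` failure above stands out.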

@rick-github commented on GitHub (Sep 20, 2024):

This looks like [llama3.1:8b-instruct-q4_0](https://ollama.com/library/llama3.1:8b-instruct-q4_0), not qwen2.5 or deepseek. Do you have a sample query that causes crashes with qwen2.5? That would make it easier to test on a local model and would speed debugging.


@ghost commented on GitHub (Sep 20, 2024):

After a crash while asking where it came up with two options, I tried to expedite the question. After the initial crash, asking it the following would get half a word on screen and then core dump.

"I had asked you for help with .ctwmrc and you suggested two options, refocus and refocusonclose, neither of which exist in the docs. So where did you come up with that?"

As a note, I also just tried another model, which resulted in another core dump saying the model was incompatible with my version of ollama. I reran the installer to update and the new model ran. So this might just have been an ollama 0.3.8 issue.


@rick-github commented on GitHub (Sep 20, 2024):

OK, if the new version crashes, set `OLLAMA_DEBUG=1` and capture some log files from the crash; that will help in resolving the issue.
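On a systemd-managed Linux install (which the journald-prefixed log lines earlier in this thread suggest), one way to set this is a unit drop-in. A minimal sketch; the file path and drop-in name are illustrative:

```ini
# /etc/systemd/system/ollama.service.d/override.conf  (illustrative path)
# Enables verbose debug logging in the Ollama server.
[Service]
Environment="OLLAMA_DEBUG=1"
```

After adding it, run `systemctl daemon-reload && systemctl restart ollama`, then collect the log with `journalctl -u ollama`.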


@ghost commented on GitHub (Sep 21, 2024):

I'll try but I suspect this is chasing a ghost. In frustration a while ago I fired up qwen and started with STOP CRASHING! and the damn thing worked...too stupid to answer a question, smart enough to troll.

In Soviet Hollywood Ghost chases you!


@dhiltgen commented on GitHub (Sep 25, 2024):

I think we're mixing two different issues in here... @pitziro mentioned flickering and a BSOD on AMD. Other logs are showing what I believe is an Ollama crash on NVIDIA. Let's focus this issue on the AMD OS level crash, and move the discussion of Ollama app crashes to another issue. (please check our backlog for others reporting crashes on the same model and/or GPU before submitting a new one)


@ghost commented on GitHub (Sep 25, 2024):

Well, @pdevine marked my report as a dupe of this (Core Dump | applicationError #6882) when I'm multi-NVIDIA and he's AMD, and there's even more "distance" between things: he's on Windows, I'm on Linux.


@pdevine commented on GitHub (Sep 25, 2024):

@nPHYN1T3 sorry, man. If you think it's a different bug and I closed it incorrectly, we can always reopen. I try to get the LLM to find the dupes of issues using embeddings and then go through any matches manually to see if they look the same.


@ghost commented on GitHub (Sep 25, 2024):

Heh, I'm not upset; it's just that the whole "keep it on the AMD/Windows issue" framing kind of separates my report. That said, I'm not demanding an immediate fix. I've found Ollama useful here and there, but stability aside, any model I try hallucinates, ignores details, or just spews nonsense, so the crashes are almost the least of the issues. I can imagine there's a large degree of aggravation in figuring out whether issues are Ollama's or the model's fault. I'd love to have a little "buddy" on my machine I can ask questions to expedite various tasks, but between the brutally out-of-date training data (for technical stuff) and the fun little lies, it's going to be a good many years before I see any of this having real functionality.


@dhiltgen commented on GitHub (Oct 7, 2024):

@pitziro, in general a BSOD is typically a bug the driver vendor has to fix, so on first boot after the BSOD you should submit the bug report when prompted, if you're OK with sharing that. It's been a while since you filed this issue, so the crash data may no longer be saved on your system, but if you start up Radeon Settings, try clicking on the bug icon and see if it has data to upload.

![image (2)](https://github.com/user-attachments/assets/d61a3d97-aa40-42ff-b1ef-265187fbf0ae)
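Before that crash data ages out, one quick way to check whether Windows still has dump files from the BSOD is to look in the default dump locations. A minimal sketch; the paths below are the Windows defaults and are assumptions (they differ if "Startup and Recovery" settings were changed):

```python
from pathlib import Path

def find_crash_dumps(paths):
    """Return crash-dump files that actually exist at the given locations."""
    found = []
    for p in paths:
        if p.is_dir():
            found.extend(sorted(p.glob("*.dmp")))  # minidump directory
        elif p.is_file():
            found.append(p)  # full MEMORY.DMP file
    return found

# Default Windows dump locations (assumptions; configurable under
# System Properties > Advanced > Startup and Recovery)
defaults = [Path(r"C:\Windows\Minidump"), Path(r"C:\Windows\MEMORY.DMP")]
print(find_crash_dumps(defaults))
```

If any dumps turn up, they are what the driver vendor's bug-report tool (or WinDbg) would analyze.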

@pitziro commented on GitHub (Oct 7, 2024):

I was trying llama 3.1 and qwen 2.5. Both still failing after some hours.
I have given up. Currently using another completion tool and disabled both Ollama and Continue.


@dhiltgen commented on GitHub (Oct 7, 2024):

Sorry to hear that.

Both still failing after some hours.

Was this still a BSOD OS level crash, or is the failure only a crash of ollama itself?


@pitziro commented on GitHub (Oct 8, 2024):

It was a failure in the Ollama/Continue application that caused my screen to freeze and stutter, like when graphics drivers are being installed. It only stopped when I killed the ollama process.


@saman-amd commented on GitHub (Oct 9, 2024):

Hey @pitziro,
Could you please try [DDU](https://www.guru3d.com/download/display-driver-uninstaller-download/) and see if it improves anything? Use DDU (ideally in safe mode) to wipe your current driver and reinstall the 24.8.1 driver, and if you can still reproduce the issue:
- please submit a BRT report if your OS crashed, as Daniel mentioned above
- please share detailed steps for how we can reproduce the issue on our side, including the code if possible
- a video or pictures could help a lot as well
- also please include your OS version, monitor model, etc.
Reference: github-starred/ollama#29990