Error: an error was encountered while running the model: CUDA error: unspecified launch failure #6030

Open
opened 2025-11-12 13:19:38 -06:00 by GiteaMirror · 2 comments

Originally created by @CJJ-Michael on GitHub (Feb 20, 2025).

What is the issue?

When I ask the LLM a question, it always fails with the error in the title, so I have to run the model in CPU-only mode.

The error message is:

Error: an error was encountered while running the model: CUDA error: unspecified launch failure
current device: 0, in function ggml_backend_cuda_synchronize at llama/ggml-cuda/ggml-cuda.cu:2317
cudaStreamSynchronize(cuda_ctx->stream())
llama/ggml-cuda/ggml-cuda.cu:96: CUDA error

I tried different model sizes and different versions of Ollama, the NVIDIA driver, and CUDA, but nothing helped.
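
For reference, this is roughly how I fall back to CPU-only mode (a minimal sketch; `num_gpu: 0` asks Ollama to offload zero layers to the GPU, so inference stays on the CPU):

```shell
# Workaround sketch: run the same request with zero GPU layers offloaded.
# "num_gpu" is the Ollama option controlling how many layers go to the GPU.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Explain the Pythagorean theorem",
  "options": { "num_gpu": 0 }
}'
```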

Relevant log output

ollama run deepseek-r1:7b
>>> 解释一下勾股定理
<think>
嗯,今天我在数学课上听老师讲到了勾股定理,但是我不太明白它具体是什么意思。勾股定理是什么呢Error: an error was encountered while running the model: CUDA error: unspecified launch failure
  current device: 0, in function ggml_backend_cuda_synchronize at llama/ggml-cuda/ggml-cuda.cu:2317
  cudaStreamSynchronize(cuda_ctx->stream())
llama/ggml-cuda/ggml-cuda.cu:96: CUDA error

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7 and 0.5.11

GiteaMirror added the bug label 2025-11-12 13:19:38 -06:00

@rick-github commented on GitHub (Feb 20, 2025):

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

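For reference, on a default Linux install where Ollama runs under systemd, collecting those logs boils down to something like this (a minimal sketch; `ollama` is assumed to be the unit name):

```shell
# View the Ollama server logs (systemd unit assumed to be named "ollama").
journalctl -u ollama --no-pager
# Or follow them live while reproducing the crash:
journalctl -u ollama -f
```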

@CJJ-Michael commented on GitHub (Feb 21, 2025):

> [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may help in debugging.

The logs below are from the Ollama server after the bug happened:

2月 21 08:52:10 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:52:10 | 200 | 3.857530363s | 127.0.0.1 | POST "/api/chat"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:55:16 | 200 | 27.206µs | 172.17.0.2 | GET "/"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.213+08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb library=cuda total="23.6 GiB" available="18.1 GiB"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.213+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/gbit/ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb parallel=1 available=19391528960 required="809.9 MiB"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.309+08:00 level=INFO source=server.go:105 msg="system memory" total="251.5 GiB" free="238.0 GiB" free_swap="2.0 GiB"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.309+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=13 layers.offload=13 layers.split="" memory.available="[18.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="809.9 MiB" memory.required.partial="809.9 MiB" memory.required.kv="24.0 MiB" memory.required.allocations="[809.9 MiB]" memory.weights.total="240.1 MiB" memory.weights.repeating="195.4 MiB" memory.weights.nonrepeating="44.7 MiB" memory.graph.full="48.0 MiB" memory.graph.partial="48.0 MiB"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.311+08:00 level=INFO source=server.go:397 msg="starting llama server" cmd="/tmp/ollama3857967831/runners/cuda_v12/ollama_llama_server --model /home/gbit/ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 8192 --batch-size 512 --n-gpu-layers 13 --threads 52 --parallel 1 --port 42335"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.312+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=2
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.312+08:00 level=INFO source=server.go:576 msg="waiting for llama runner to start responding"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.312+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server error"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.372+08:00 level=INFO source=runner.go:941 msg="starting go runner"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.372+08:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=52
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.372+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:42335"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from /home/gbit/ollama/models/blobs/sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 0: general.architecture str = nomic-bert
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 1: general.name str = nomic-embed-text-v1.5
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 2: nomic-bert.block_count u32 = 12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 3: nomic-bert.context_length u32 = 2048
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 4: nomic-bert.embedding_length u32 = 768
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 5: nomic-bert.feed_forward_length u32 = 3072
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 6: nomic-bert.attention.head_count u32 = 12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 8: general.file_type u32 = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 9: nomic-bert.attention.causal bool = false
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 10: nomic-bert.pooling_type u32 = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 11: nomic-bert.rope.freq_base f32 = 1000.000000
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 12: tokenizer.ggml.token_type_count u32 = 2
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 13: tokenizer.ggml.bos_token_id u32 = 101
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 102
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 15: tokenizer.ggml.model str = bert
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 100
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 20: tokenizer.ggml.seperator_token_id u32 = 102
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 22: tokenizer.ggml.cls_token_id u32 = 101
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 23: tokenizer.ggml.mask_token_id u32 = 103
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - type f32: 51 tensors
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_model_loader: - type f16: 61 tensors
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_vocab: special tokens cache size = 5
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_vocab: token to piece cache size = 0.2032 MB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: format = GGUF V3 (latest)
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: arch = nomic-bert
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab type = WPM
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_vocab = 30522
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_merges = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab_only = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_train = 2048
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd = 768
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_layer = 12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head = 12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head_kv = 12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_rot = 64
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_swa = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_k = 64
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_v = 64
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_gqa = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_k_gqa = 768
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_v_gqa = 768
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_eps = 1.0e-12
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_rms_eps = 0.0e+00
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_logit_scale = 0.0e+00
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ff = 3072
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert_used = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: causal attn = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: pooling type = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope type = 2
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope scaling = linear
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_base_train = 1000.0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_scale_train = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_orig_yarn = 2048
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope_finetuned = unknown
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_conv = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_inner = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_state = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_rank = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_b_c_rms = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = 137M
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = F16
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 136.73 M
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = nomic-embed-text-v1.5
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 101 '[CLS]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 102 '[SEP]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: UNK token = 100 '[UNK]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: SEP token = 102 '[SEP]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 0 '[PAD]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: CLS token = 101 '[CLS]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: MASK token = 103 '[MASK]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 0 '[PAD]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 102 '[SEP]'
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 21
2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: found 1 CUDA devices:
2月 21 08:55:16 GBITDeepSeek ollama[698750]: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: ggml ctx size = 0.10 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading 12 repeating layers to GPU
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading non-repeating layers to GPU
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloaded 13/13 layers to GPU
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: CPU buffer size = 44.72 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: CUDA0 buffer size = 216.15 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.565+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server loading model"
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ctx = 8192
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_batch = 512
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ubatch = 512
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: flash_attn = 0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_base = 1000.0
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_scale = 1
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_kv_cache_init: CUDA0 KV buffer size = 288.00 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CPU output buffer size = 0.00 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph nodes = 453
2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph splits = 2
2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.816+08:00 level=INFO source=server.go:615 msg="llama runner started in 0.50 seconds"
2月 21 08:55:17 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:55:17 | 200 | 919.068568ms | 172.17.0.2 | POST "/api/embeddings"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.487+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.095851007 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.701+08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb library=cuda total="23.6 GiB" available="21.4 GiB"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.701+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb parallel=4 available=22992453632 required="6.5 GiB"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.737+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.345946677 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.789+08:00 level=INFO source=server.go:105 msg="system memory" total="251.5 GiB" free="237.6 GiB" free_swap="2.0 GiB"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.789+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[21.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.5 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="942.0 MiB" memory.graph.partial="1.1 GiB"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.791+08:00 level=INFO source=server.go:397 msg="starting llama server" cmd="/tmp/ollama3857967831/runners/cuda_v12/ollama_llama_server --model /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --ctx-size 16384 --batch-size 512 --n-gpu-layers 29 --threads 52 --mlock --parallel 4 --port 35995"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=2
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=server.go:576 msg="waiting for llama runner to start responding"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server error"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.849+08:00 level=INFO source=runner.go:941 msg="starting go runner"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.850+08:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=52
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.850+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:35995"
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest))
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 0: general.architecture str = qwen2
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 1: general.type str = model
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 4: general.size_label str = 7B
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 5: qwen2.block_count u32 = 28
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 13: general.file_type u32 = 15
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.986+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.59510752 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type f32: 141 tensors
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type q4_K: 169 tensors
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type q6_K: 29 tensors
2月 21 08:55:23 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:23.044+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server loading model"
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: special tokens cache size = 22
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: token to piece cache size = 0.9310 MB
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: format = GGUF V3 (latest)
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: arch = qwen2
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab type = BPE
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_vocab = 152064
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_merges = 151387
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab_only = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_train = 131072
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd = 3584
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_layer = 28
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head = 28
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head_kv = 4
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_rot = 128
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_swa = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_k = 128
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_v = 128
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_gqa = 7
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_k_gqa = 512
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_v_gqa = 512
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_eps = 0.0e+00
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_logit_scale = 0.0e+00
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ff = 18944
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert_used = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: causal attn = 1
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: pooling type = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope type = 2
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope scaling = linear
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_base_train = 10000.0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_scale_train = 1
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_orig_yarn = 131072
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope_finetuned = unknown
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_conv = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_inner = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_state = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_rank = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_b_c_rms = 0
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = ?B
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = Q4_K - Medium
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 7.62 B
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 148848 'ÄĬ'
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 256
2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: found 1 CUDA devices:
2月 21 08:55:23 GBITDeepSeek ollama[698750]: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: ggml ctx size = 0.30 MiB
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading 28 repeating layers to GPU
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading non-repeating layers to GPU
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: offloaded 29/29 layers to GPU
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: CPU buffer size = 292.36 MiB
2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: CUDA0 buffer size = 4168.09 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ctx = 16384
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_batch = 2048
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ubatch = 512
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: flash_attn = 0
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_base = 10000.0
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_scale = 1
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA0 compute buffer size = 956.00 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host compute buffer size = 39.01 MiB
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph nodes = 986
2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph splits = 2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:25.055+08:00 level=INFO source=server.go:615 msg="llama runner started in 2.26 seconds"
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest))
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 0: general.architecture str = qwen2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 1: general.type str = model
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 4: general.size_label str = 7B
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 5: qwen2.block_count u32 = 28
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 6: qwen2.context_length u32 = 131072
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 13: general.file_type u32 = 15
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de...
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 25: general.quantization_version u32 = 2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type f32: 141 tensors
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type q4_K: 169 tensors
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type q6_K: 29 tensors
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: special tokens cache size = 22
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: token to piece cache size = 0.9310 MB
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: format = GGUF V3 (latest)
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: arch = qwen2
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab type = BPE
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_vocab = 152064
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_merges = 151387
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab_only = 1
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = ?B
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = all F32
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 7.62 B
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>'
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 148848 'ÄĬ'
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>'
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 256
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_load: vocab only - skipping tensors
2月 21 08:55:27 GBITDeepSeek ollama[698750]: CUDA error: misaligned address
2月 21 08:55:27 GBITDeepSeek ollama[698750]: current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508
2月 21 08:55:27 GBITDeepSeek ollama[698750]: cudaStreamSynchronize(cuda_ctx->stream())
2月 21 08:55:27 GBITDeepSeek ollama[698750]: ggml-cuda.cu:132: CUDA error
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 716998]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 716999]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717000]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717001]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717002]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717009]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717022]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717023]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717029]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [Thread debugging using libthread_db enabled]
2月 21 08:55:27 GBITDeepSeek ollama[717146]: Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
2月 21 08:55:27 GBITDeepSeek ollama[717146]: 0x00005ed082619083 in ?? ()
2月 21 08:55:27 GBITDeepSeek ollama[717146]: #0 0x00005ed082619083 in ?? ()
2月 21 08:55:27 GBITDeepSeek ollama[717146]: #1 0x00005ed0825de3d0 in _start ()
2月 21 08:55:27 GBITDeepSeek ollama[717146]: [Inferior 1 (process 716996) detached]
2月 21 08:55:27 GBITDeepSeek ollama[698750]: SIGABRT: abort
2月 21 08:55:27 GBITDeepSeek ollama[698750]: PC=0x7440f20969fc m=4 sigcode=18446744073709551610
2月 21 08:55:27 GBITDeepSeek ollama[698750]: signal arrived during cgo execution
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 7 gp=0xc000252000 m=4 mp=0xc000139808 [syscall]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.cgocall(0x5ed08282d8c0, 0xc000147c60)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/cgocall.go:157 +0x4b fp=0xc000147c38 sp=0xc000147c00 pc=0x5ed0825ae8ab
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama._Cfunc_gpt_sampler_csample(0x744098008400, 0x74409c006450, 0x0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: _cgo_gotypes.go:468 +0x4f fp=0xc000147c60 sp=0xc000147c38 pc=0x5ed0826ab98f
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).processBatch.(*SamplingContext).Sample.func4(0x5ed082e32100?, 0x0?, 0x0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/llama.go:699 +0x86 fp=0xc000147cb0 sp=0xc000147c60 pc=0x5ed082829346
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama.(*SamplingContext).Sample(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/llama.go:699
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).processBatch(0xc000214120, 0xc0002ae000, 0xc000147f10)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:487 +0x6ab fp=0xc000147ed0 sp=0xc000147cb0 pc=0x5ed08282860b
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).run(0xc000214120, {0x5ed082b7a9a0, 0xc00016a0a0})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1e5 fp=0xc000147fb8 sp=0xc000147ed0 pc=0x5ed082827c25
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.main.gowrap2()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:980 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5ed08282ca88
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by main.main in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:980 +0xd3e
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x1?, 0xc00002b8e0?, 0xd4?, 0x52?, 0xc00002b8c0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc00002b860 sp=0xc00002b840 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.netpollblock(0x10?, 0x825ae006?, 0xd0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:573 +0xf7 fp=0xc00002b898 sp=0xc00002b860 pc=0x5ed0825dd737
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.runtime_pollWait(0x744117b88020, 0x72)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:345 +0x85 fp=0xc00002b8b8 sp=0xc00002b898 pc=0x5ed082611f85
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).wait(0x3?, 0x744117bc3e88?, 0x0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00002b8e0 sp=0xc00002b8b8 pc=0x5ed082661ea7
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).waitRead(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:89
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*FD).Accept(0xc00024c080)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_unix.go:611 +0x2ac fp=0xc00002b988 sp=0xc00002b8e0 pc=0x5ed08266336c
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*netFD).accept(0xc00024c080)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/fd_unix.go:172 +0x29 fp=0xc00002ba40 sp=0xc00002b988 pc=0x5ed0826d1fa9
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPListener).accept(0xc0001521e0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/tcpsock_posix.go:159 +0x1e fp=0xc00002ba68 sp=0xc00002ba40 pc=0x5ed0826e2cde
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPListener).Accept(0xc0001521e0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/tcpsock.go:327 +0x30 fp=0xc00002ba98 sp=0xc00002ba68 pc=0x5ed0826e2030
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*onceCloseListener).Accept(0xc000298090?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: :1 +0x24 fp=0xc00002bab0 sp=0xc00002ba98 pc=0x5ed082809244
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*Server).Serve(0xc0000181e0, {0x5ed082b7a360, 0xc0001521e0})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3260 +0x33e fp=0xc00002bbe0 sp=0xc00002bab0 pc=0x5ed08280005e
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.main()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:1000 +0x10cd fp=0xc00002bf50 sp=0xc00002bbe0 pc=0x5ed08282c80d
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.main()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:271 +0x29d fp=0xc00002bfe0 sp=0xc00002bf50 pc=0x5ed0825e50bd
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00002bfe8 sp=0xc00002bfe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000132fa8 sp=0xc000132f88 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.forcegchelper()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:326 +0xb8 fp=0xc000132fe0 sp=0xc000132fa8 pc=0x5ed0825e5378
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000132fe8 sp=0xc000132fe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.init.6 in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:314 +0x1a
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000133780 sp=0xc000133760 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.bgsweep(0xc00015c000)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcsweep.go:278 +0x94 fp=0xc0001337c8 sp=0xc000133780 pc=0x5ed0825d0034
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gcenable.gowrap1()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:203 +0x25 fp=0xc0001337e0 sp=0xc0001337c8 pc=0x5ed0825c4b65
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001337e8 sp=0xc0001337e0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.gcenable in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:203 +0x66
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc00015c000?, 0x5ed082a774f0?, 0x1?, 0x0?, 0xc000007340?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000133f78 sp=0xc000133f58 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.(*scavengerState).park(0x5ed082d49560)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcscavenge.go:425 +0x49 fp=0xc000133fa8 sp=0xc000133f78 pc=0x5ed0825cda29
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.bgscavenge(0xc00015c000)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcscavenge.go:653 +0x3c fp=0xc000133fc8 sp=0xc000133fa8 pc=0x5ed0825cdfbc
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gcenable.gowrap2()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:204 +0x25 fp=0xc000133fe0 sp=0xc000133fc8 pc=0x5ed0825c4b05
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000133fe8 sp=0xc000133fe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.gcenable in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:204 +0xa5
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc000132648?, 0x5ed0825b8465?, 0xa8?, 0x1?, 0xc0000061c0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000132620 sp=0xc000132600 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.runfinq()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mfinal.go:194 +0x107 fp=0xc0001327e0 sp=0xc000132620 pc=0x5ed0825c3ba7
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001327e8 sp=0xc0001327e0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.createfing in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mfinal.go:164 +0x3d
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 35 gp=0xc0002521c0 m=nil [select]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc0001d5a28?, 0x2?, 0xe0?, 0x56?, 0xc0001d57ec?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc0001d5660 sp=0xc0001d5640 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.selectgo(0xc0001d5a28, 0xc0001d57e8, 0xc00024cd00?, 0x0, 0x1?, 0x1)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/select.go:327 +0x725 fp=0xc0001d5780 sp=0xc0001d5660 pc=0x5ed0825f68c5
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).completion(0xc000214120, {0x5ed082b7a510, 0xc0001a2380}, 0xc000190480)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:698 +0xa86 fp=0xc0001d5ab8 sp=0xc0001d5780 pc=0x5ed08282a006
2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).completion-fm({0x5ed082b7a510?, 0xc0001a2380?}, 0x5ed08280438d?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: :1 +0x36 fp=0xc0001d5ae8 sp=0xc0001d5ab8 pc=0x5ed08282d2b6
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.HandlerFunc.ServeHTTP(0xc00017ab60?, {0x5ed082b7a510?, 0xc0001a2380?}, 0x10?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2171 +0x29 fp=0xc0001d5b10 sp=0xc0001d5ae8 pc=0x5ed0827fce29
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*ServeMux).ServeHTTP(0x5ed0825b8465?, {0x5ed082b7a510, 0xc0001a2380}, 0xc000190480)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2688 +0x1ad fp=0xc0001d5b60 sp=0xc0001d5b10 pc=0x5ed0827fecad
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.serverHandler.ServeHTTP({0x5ed082b79860?}, {0x5ed082b7a510?, 0xc0001a2380?}, 0x6?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3142 +0x8e fp=0xc0001d5b90 sp=0xc0001d5b60 pc=0x5ed0827ffcce
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*conn).serve(0xc000298090, {0x5ed082b7a968, 0xc000178db0})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2044 +0x5e8 fp=0xc0001d5fb8 sp=0xc0001d5b90 pc=0x5ed0827fba68
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*Server).Serve.gowrap3()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3290 +0x28 fp=0xc0001d5fe0 sp=0xc0001d5fb8 pc=0x5ed082800448
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001d5fe8 sp=0xc0001d5fe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by net/http.(*Server).Serve in goroutine 1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3290 +0x4b4
2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 22 gp=0xc000252700 m=nil [IO wait]:
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0x5d?, 0xb?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000135da8 sp=0xc000135d88 pc=0x5ed0825e54ee
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.netpollblock(0x5ed08264ba38?, 0x825ae006?, 0xd0?)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:573 +0xf7 fp=0xc000135de0 sp=0xc000135da8 pc=0x5ed0825dd737
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.runtime_pollWait(0x744117b87f28, 0x72)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:345 +0x85 fp=0xc000135e00 sp=0xc000135de0 pc=0x5ed082611f85
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).wait(0xc0002b4000?, 0xc0002921f1?, 0x0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000135e28 sp=0xc000135e00 pc=0x5ed082661ea7
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).waitRead(...)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:89
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*FD).Read(0xc0002b4000, {0xc0002921f1, 0x1, 0x1})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_unix.go:164 +0x27a fp=0xc000135ec0 sp=0xc000135e28 pc=0x5ed0826629fa
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*netFD).Read(0xc0002b4000, {0xc0002921f1?, 0xc000135f48?, 0x5ed082613bb0?})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/fd_posix.go:55 +0x25 fp=0xc000135f08 sp=0xc000135ec0 pc=0x5ed0826d0ea5
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*conn).Read(0xc0002b6000, {0xc0002921f1?, 0x0?, 0x5ed082e32100?})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/net.go:185 +0x45 fp=0xc000135f50 sp=0xc000135f08 pc=0x5ed0826db165
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPConn).Read(0x5ed082d0a890?, {0xc0002921f1?, 0x0?, 0x0?})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: :1 +0x25 fp=0xc000135f80 sp=0xc000135f50 pc=0x5ed0826e6b45
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*connReader).backgroundRead(0xc0002921e0)
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:681 +0x37 fp=0xc000135fc8 sp=0xc000135f80 pc=0x5ed0827f59d7
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*connReader).startBackgroundRead.gowrap2()
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:677 +0x25 fp=0xc000135fe0 sp=0xc000135fc8 pc=0x5ed0827f5905
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({})
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000135fe8 sp=0xc000135fe0 pc=0x5ed0826172c1
2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by net/http.(*connReader).startBackgroundRead in goroutine 35
2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:677 +0xba
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rax 0x0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rbx 0x7440aa200000
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rcx 0x7440f20969fc
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rdx 0x6
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rdi 0xaf0c4
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rsi 0xaf0c8
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rbp 0xaf0c8
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rsp 0x7440aa1de3e0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r8 0x7440aa1de4b0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r9 0x7440aa1de480
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r10 0x8
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r11 0x246
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r12 0x6
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r13 0x16
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r14 0x74411788cd30
2月 21 08:55:27 GBITDeepSeek ollama[698750]: r15 0x0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rip 0x7440f20969fc
2月 21 08:55:27 GBITDeepSeek ollama[698750]: rflags 0x246
2月 21 08:55:27 GBITDeepSeek ollama[698750]: cs 0x33
2月 21 08:55:27 GBITDeepSeek ollama[698750]: fs 0x0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: gs 0x0
2月 21 08:55:27 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:55:27 | 200 | 10.421581803s | 172.17.0.2 | POST "/api/chat"
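
Note that the trace above actually ends in a different CUDA failure (`misaligned address` rather than `unspecified launch failure`) followed by a SIGABRT in the runner. A possible next diagnostic step, assuming the runner honors the standard `CUDA_LAUNCH_BLOCKING` environment variable, is to make kernel launches synchronous so the error is reported at the offending kernel rather than at a later `cudaStreamSynchronize`:

```shell
# Diagnostic sketch: synchronous CUDA launches give a more precise error site.
sudo systemctl edit ollama
#   add under [Service]:
#   Environment="CUDA_LAUNCH_BLOCKING=1"
sudo systemctl restart ollama
# Reproduce the failure, then capture the logs again:
journalctl -u ollama -f
```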

GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_base_train = 1000.0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_scale_train = 1 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_orig_yarn = 2048 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope_finetuned = unknown 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_conv = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_inner = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_state = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_rank = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_b_c_rms = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = 137M 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = F16 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 136.73 M 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 260.86 MiB (16.00 BPW) 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = nomic-embed-text-v1.5 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 101 '[CLS]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 102 '[SEP]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: UNK token = 100 '[UNK]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: SEP token = 102 '[SEP]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 0 '[PAD]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: CLS token = 101 '[CLS]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: MASK token = 103 '[MASK]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 0 '[PAD]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 102 '[SEP]' 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 21 2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2月 21 08:55:16 GBITDeepSeek ollama[698750]: ggml_cuda_init: found 1 CUDA devices: 2月 21 08:55:16 GBITDeepSeek ollama[698750]: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: ggml ctx size = 0.10 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading 12 repeating layers to GPU 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading non-repeating layers to GPU 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: offloaded 13/13 layers to GPU 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: CPU buffer size = 44.72 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llm_load_tensors: CUDA0 buffer size = 216.15 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.565+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server loading model" 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ctx = 8192 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_batch = 512 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ubatch = 512 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: 
flash_attn = 0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_base = 1000.0 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_scale = 1 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_kv_cache_init: CUDA0 KV buffer size = 288.00 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CPU output buffer size = 0.00 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph nodes = 453 2月 21 08:55:16 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph splits = 2 2月 21 08:55:16 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:16.816+08:00 level=INFO source=server.go:615 msg="llama runner started in 0.50 seconds" 2月 21 08:55:17 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:55:17 | 200 | 919.068568ms | 172.17.0.2 | POST "/api/embeddings" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.487+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.095851007 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.701+08:00 level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb library=cuda total="23.6 GiB" available="21.4 GiB" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.701+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 gpu=GPU-99e4c40f-5e53-642f-aa9b-9c3b203cfebb parallel=4 available=22992453632 required="6.5 GiB" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.737+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.345946677 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.789+08:00 level=INFO source=server.go:105 msg="system memory" total="251.5 GiB" free="237.6 GiB" free_swap="2.0 GiB" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.789+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[21.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.5 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="942.0 MiB" memory.graph.partial="1.1 GiB" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.791+08:00 level=INFO source=server.go:397 msg="starting llama server" cmd="/tmp/ollama3857967831/runners/cuda_v12/ollama_llama_server --model /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 --ctx-size 16384 --batch-size 
512 --n-gpu-layers 29 --threads 52 --mlock --parallel 4 --port 35995" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=2 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=server.go:576 msg="waiting for llama runner to start responding" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.792+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server error" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.849+08:00 level=INFO source=runner.go:941 msg="starting go runner" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.850+08:00 level=INFO source=runner.go:942 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=52 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.850+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:35995" 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest)) 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 0: general.architecture str = qwen2 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 1: general.type str = model 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 4: general.size_label str = 7B 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 5: qwen2.block_count u32 = 28 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 6: qwen2.context_length u32 = 131072 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 13: general.file_type u32 = 15 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2 2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... 
2月 21 08:55:22 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2月 21 08:55:22 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:22.986+08:00 level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.59510752 model=/home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de... 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 25: general.quantization_version u32 = 2 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type f32: 141 tensors 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type q4_K: 169 tensors 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llama_model_loader: - type q6_K: 29 tensors 2月 21 08:55:23 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:23.044+08:00 level=INFO source=server.go:610 msg="waiting for server to become available" status="llm server loading model" 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: special tokens cache size = 22 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_vocab: token to piece cache size = 0.9310 MB 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: format = GGUF V3 (latest) 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: arch = qwen2 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab type = BPE 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_vocab = 152064 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_merges = 151387 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab_only = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_train = 131072 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd = 3584 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_layer = 28 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head = 28 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_head_kv = 4 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_rot = 128 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_swa = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_k = 128 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_head_v = 128 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_gqa = 7 2月 21 08:55:23 GBITDeepSeek ollama[698750]: 
llm_load_print_meta: n_embd_k_gqa = 512 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_embd_v_gqa = 512 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_eps = 0.0e+00 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: f_logit_scale = 0.0e+00 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ff = 18944 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_expert_used = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: causal attn = 1 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: pooling type = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope type = 2 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope scaling = linear 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_base_train = 10000.0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: freq_scale_train = 1 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_ctx_orig_yarn = 131072 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: rope_finetuned = unknown 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_conv = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_inner = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_d_state = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_rank = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: ssm_dt_b_c_rms = 0 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = ?B 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = Q4_K - Medium 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 7.62 B 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 4.36 GiB (4.91 BPW) 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>' 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 148848 'ÄĬ' 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 256 2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 2月 21 08:55:23 GBITDeepSeek ollama[698750]: ggml_cuda_init: found 1 CUDA devices: 2月 21 08:55:23 GBITDeepSeek ollama[698750]: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: ggml ctx size = 0.30 MiB 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: offloading 28 repeating layers to GPU 2月 21 08:55:23 GBITDeepSeek 
ollama[698750]: llm_load_tensors: offloading non-repeating layers to GPU 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: offloaded 29/29 layers to GPU 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: CPU buffer size = 292.36 MiB 2月 21 08:55:23 GBITDeepSeek ollama[698750]: llm_load_tensors: CUDA0 buffer size = 4168.09 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ctx = 16384 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_batch = 2048 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: n_ubatch = 512 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: flash_attn = 0 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_base = 10000.0 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: freq_scale = 1 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA0 compute buffer size = 956.00 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: CUDA_Host compute buffer size = 39.01 MiB 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph nodes = 986 2月 21 08:55:24 GBITDeepSeek ollama[698750]: llama_new_context_with_model: graph splits = 2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: time=2025-02-21T08:55:25.055+08:00 level=INFO source=server.go:615 msg="llama runner started in 2.26 seconds" 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: loaded meta data with 26 key-value pairs and 339 tensors from /home/gbit/ollama/models/blobs/sha256-96c415656d377afbff962f6cdb2394ab092ccbcbaab4b82525bc4ca800fe8a49 (version GGUF V3 (latest)) 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 0: general.architecture str = qwen2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 1: general.type str = model 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Qwen 7B 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Qwen 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 4: general.size_label str = 7B 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 5: qwen2.block_count u32 = 28 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 6: qwen2.context_length u32 = 131072 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 7: qwen2.embedding_length u32 = 3584 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 8: qwen2.feed_forward_length u32 = 18944 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 9: qwen2.attention.head_count u32 = 28 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 10: qwen2.attention.head_count_kv u32 = 4 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 11: qwen2.rope.freq_base f32 = 10000.000000 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 12: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 13: general.file_type u32 = 15 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 15: tokenizer.ggml.pre str = qwen2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ... 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",... 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 151646 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151643 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 24: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - kv 25: general.quantization_version u32 = 2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type f32: 141 tensors 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type q4_K: 169 tensors 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_loader: - type q6_K: 29 tensors 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: special tokens cache size = 22 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_vocab: token to piece cache size = 0.9310 MB 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: format = GGUF V3 (latest) 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: arch = qwen2 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab type = BPE 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_vocab = 152064 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: n_merges = 151387 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: vocab_only = 1 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model type = ?B 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model ftype = all F32 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model params = 7.62 B 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: model size = 4.36 GiB (4.91 BPW) 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: general.name = DeepSeek R1 Distill Qwen 7B 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: BOS token = 151646 '<|begin▁of▁sentence|>' 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOS token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: PAD token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: LF token = 148848 'ÄĬ' 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: EOG token = 151643 '<|end▁of▁sentence|>' 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llm_load_print_meta: max token length = 256 2月 21 08:55:25 GBITDeepSeek ollama[698750]: llama_model_load: vocab only - skipping tensors 2月 21 08:55:27 GBITDeepSeek ollama[698750]: CUDA error: misaligned address 2月 21 08:55:27 GBITDeepSeek ollama[698750]: current device: 0, in function ggml_backend_cuda_synchronize at ggml-cuda.cu:2508 2月 21 08:55:27 GBITDeepSeek ollama[698750]: cudaStreamSynchronize(cuda_ctx->stream()) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: ggml-cuda.cu:132: CUDA error 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 716998] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 716999] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717000] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717001] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717002] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717009] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717022] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717023] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [New LWP 717029] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [Thread debugging using libthread_db enabled] 2月 21 08:55:27 GBITDeepSeek ollama[717146]: Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 2月 21 08:55:27 GBITDeepSeek ollama[717146]: 0x00005ed082619083 in ?? 
() 2月 21 08:55:27 GBITDeepSeek ollama[717146]: #0 0x00005ed082619083 in ?? () 2月 21 08:55:27 GBITDeepSeek ollama[717146]: #1 0x00005ed0825de3d0 in _start () 2月 21 08:55:27 GBITDeepSeek ollama[717146]: [Inferior 1 (process 716996) detached] 2月 21 08:55:27 GBITDeepSeek ollama[698750]: SIGABRT: abort 2月 21 08:55:27 GBITDeepSeek ollama[698750]: PC=0x7440f20969fc m=4 sigcode=18446744073709551610 2月 21 08:55:27 GBITDeepSeek ollama[698750]: signal arrived during cgo execution 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 7 gp=0xc000252000 m=4 mp=0xc000139808 [syscall]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.cgocall(0x5ed08282d8c0, 0xc000147c60) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/cgocall.go:157 +0x4b fp=0xc000147c38 sp=0xc000147c00 pc=0x5ed0825ae8ab 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama._Cfunc_gpt_sampler_csample(0x744098008400, 0x74409c006450, 0x0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: _cgo_gotypes.go:468 +0x4f fp=0xc000147c60 sp=0xc000147c38 pc=0x5ed0826ab98f 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).processBatch.(*SamplingContext).Sample.func4(0x5ed082e32100?, 0x0?, 0x0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/llama.go:699 +0x86 fp=0xc000147cb0 sp=0xc000147c60 pc=0x5ed082829346 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama.(*SamplingContext).Sample(...) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/llama.go:699 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).processBatch(0xc000214120, 0xc0002ae000, 0xc000147f10) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:487 +0x6ab fp=0xc000147ed0 sp=0xc000147cb0 pc=0x5ed08282860b 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).run(0xc000214120, {0x5ed082b7a9a0, 0xc00016a0a0}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:342 +0x1e5 fp=0xc000147fb8 sp=0xc000147ed0 pc=0x5ed082827c25 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.main.gowrap2() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:980 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5ed08282ca88 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by main.main in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:980 +0xd3e 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 1 gp=0xc0000061c0 m=nil [IO wait]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x1?, 0xc00002b8e0?, 0xd4?, 0x52?, 0xc00002b8c0?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc00002b860 sp=0xc00002b840 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.netpollblock(0x10?, 0x825ae006?, 0xd0?) 
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:573 +0xf7 fp=0xc00002b898 sp=0xc00002b860 pc=0x5ed0825dd737 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.runtime_pollWait(0x744117b88020, 0x72) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:345 +0x85 fp=0xc00002b8b8 sp=0xc00002b898 pc=0x5ed082611f85 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).wait(0x3?, 0x744117bc3e88?, 0x0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00002b8e0 sp=0xc00002b8b8 pc=0x5ed082661ea7 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).waitRead(...) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:89 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*FD).Accept(0xc00024c080) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_unix.go:611 +0x2ac fp=0xc00002b988 sp=0xc00002b8e0 pc=0x5ed08266336c 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*netFD).accept(0xc00024c080) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/fd_unix.go:172 +0x29 fp=0xc00002ba40 sp=0xc00002b988 pc=0x5ed0826d1fa9 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPListener).accept(0xc0001521e0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/tcpsock_posix.go:159 +0x1e fp=0xc00002ba68 sp=0xc00002ba40 pc=0x5ed0826e2cde 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPListener).Accept(0xc0001521e0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/tcpsock.go:327 +0x30 fp=0xc00002ba98 sp=0xc00002ba68 pc=0x5ed0826e2030 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*onceCloseListener).Accept(0xc000298090?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: <autogenerated>:1 +0x24 fp=0xc00002bab0 sp=0xc00002ba98 pc=0x5ed082809244 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*Server).Serve(0xc0000181e0, {0x5ed082b7a360, 0xc0001521e0}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3260 +0x33e fp=0xc00002bbe0 sp=0xc00002bab0 pc=0x5ed08280005e 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.main() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:1000 +0x10cd fp=0xc00002bf50 sp=0xc00002bbe0 pc=0x5ed08282c80d 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.main() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:271 +0x29d fp=0xc00002bfe0 sp=0xc00002bf50 pc=0x5ed0825e50bd 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc00002bfe8 sp=0xc00002bfe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000132fa8 sp=0xc000132f88 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...) 
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.forcegchelper() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:326 +0xb8 fp=0xc000132fe0 sp=0xc000132fa8 pc=0x5ed0825e5378 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000132fe8 sp=0xc000132fe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.init.6 in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:314 +0x1a 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000133780 sp=0xc000133760 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.bgsweep(0xc00015c000) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcsweep.go:278 +0x94 fp=0xc0001337c8 sp=0xc000133780 pc=0x5ed0825d0034 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gcenable.gowrap1() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:203 +0x25 fp=0xc0001337e0 sp=0xc0001337c8 pc=0x5ed0825c4b65 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001337e8 sp=0xc0001337e0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.gcenable in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:203 +0x66 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc00015c000?, 0x5ed082a774f0?, 0x1?, 0x0?, 0xc000007340?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000133f78 sp=0xc000133f58 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goparkunlock(...) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:408 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.(*scavengerState).park(0x5ed082d49560) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcscavenge.go:425 +0x49 fp=0xc000133fa8 sp=0xc000133f78 pc=0x5ed0825cda29 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.bgscavenge(0xc00015c000) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgcscavenge.go:653 +0x3c fp=0xc000133fc8 sp=0xc000133fa8 pc=0x5ed0825cdfbc 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gcenable.gowrap2() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:204 +0x25 fp=0xc000133fe0 sp=0xc000133fc8 pc=0x5ed0825c4b05 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000133fe8 sp=0xc000133fe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.gcenable in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mgc.go:204 +0xa5 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc000132648?, 0x5ed0825b8465?, 0xa8?, 0x1?, 0xc0000061c0?) 
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000132620 sp=0xc000132600 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.runfinq() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mfinal.go:194 +0x107 fp=0xc0001327e0 sp=0xc000132620 pc=0x5ed0825c3ba7 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001327e8 sp=0xc0001327e0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by runtime.createfing in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/mfinal.go:164 +0x3d 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 35 gp=0xc0002521c0 m=nil [select]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0xc0001d5a28?, 0x2?, 0xe0?, 0x56?, 0xc0001d57ec?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc0001d5660 sp=0xc0001d5640 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.selectgo(0xc0001d5a28, 0xc0001d57e8, 0xc00024cd00?, 0x0, 0x1?, 0x1) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/select.go:327 +0x725 fp=0xc0001d5780 sp=0xc0001d5660 pc=0x5ed0825f68c5 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).completion(0xc000214120, {0x5ed082b7a510, 0xc0001a2380}, 0xc000190480) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: github.com/ollama/ollama/llama/runner/runner.go:698 +0xa86 fp=0xc0001d5ab8 sp=0xc0001d5780 pc=0x5ed08282a006 2月 21 08:55:27 GBITDeepSeek ollama[698750]: main.(*Server).completion-fm({0x5ed082b7a510?, 0xc0001a2380?}, 0x5ed08280438d?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: <autogenerated>:1 +0x36 fp=0xc0001d5ae8 sp=0xc0001d5ab8 pc=0x5ed08282d2b6 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.HandlerFunc.ServeHTTP(0xc00017ab60?, {0x5ed082b7a510?, 0xc0001a2380?}, 0x10?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2171 +0x29 fp=0xc0001d5b10 sp=0xc0001d5ae8 pc=0x5ed0827fce29 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*ServeMux).ServeHTTP(0x5ed0825b8465?, {0x5ed082b7a510, 0xc0001a2380}, 0xc000190480) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2688 +0x1ad fp=0xc0001d5b60 sp=0xc0001d5b10 pc=0x5ed0827fecad 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.serverHandler.ServeHTTP({0x5ed082b79860?}, {0x5ed082b7a510?, 0xc0001a2380?}, 0x6?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3142 +0x8e fp=0xc0001d5b90 sp=0xc0001d5b60 pc=0x5ed0827ffcce 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*conn).serve(0xc000298090, {0x5ed082b7a968, 0xc000178db0}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:2044 +0x5e8 fp=0xc0001d5fb8 sp=0xc0001d5b90 pc=0x5ed0827fba68 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*Server).Serve.gowrap3() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3290 +0x28 fp=0xc0001d5fe0 sp=0xc0001d5fb8 pc=0x5ed082800448 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc0001d5fe8 sp=0xc0001d5fe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by net/http.(*Server).Serve in goroutine 1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:3290 +0x4b4 2月 21 08:55:27 GBITDeepSeek ollama[698750]: goroutine 22 gp=0xc000252700 m=nil [IO wait]: 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0x5d?, 0xb?) 
2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/proc.go:402 +0xce fp=0xc000135da8 sp=0xc000135d88 pc=0x5ed0825e54ee 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.netpollblock(0x5ed08264ba38?, 0x825ae006?, 0xd0?) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:573 +0xf7 fp=0xc000135de0 sp=0xc000135da8 pc=0x5ed0825dd737 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.runtime_pollWait(0x744117b87f28, 0x72) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/netpoll.go:345 +0x85 fp=0xc000135e00 sp=0xc000135de0 pc=0x5ed082611f85 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).wait(0xc0002b4000?, 0xc0002921f1?, 0x0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000135e28 sp=0xc000135e00 pc=0x5ed082661ea7 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*pollDesc).waitRead(...) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_poll_runtime.go:89 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll.(*FD).Read(0xc0002b4000, {0xc0002921f1, 0x1, 0x1}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: internal/poll/fd_unix.go:164 +0x27a fp=0xc000135ec0 sp=0xc000135e28 pc=0x5ed0826629fa 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*netFD).Read(0xc0002b4000, {0xc0002921f1?, 0xc000135f48?, 0x5ed082613bb0?}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/fd_posix.go:55 +0x25 fp=0xc000135f08 sp=0xc000135ec0 pc=0x5ed0826d0ea5 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*conn).Read(0xc0002b6000, {0xc0002921f1?, 0x0?, 0x5ed082e32100?}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/net.go:185 +0x45 fp=0xc000135f50 sp=0xc000135f08 pc=0x5ed0826db165 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net.(*TCPConn).Read(0x5ed082d0a890?, {0xc0002921f1?, 0x0?, 0x0?}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: <autogenerated>:1 +0x25 fp=0xc000135f80 sp=0xc000135f50 pc=0x5ed0826e6b45 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*connReader).backgroundRead(0xc0002921e0) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:681 +0x37 fp=0xc000135fc8 sp=0xc000135f80 pc=0x5ed0827f59d7 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http.(*connReader).startBackgroundRead.gowrap2() 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:677 +0x25 fp=0xc000135fe0 sp=0xc000135fc8 pc=0x5ed0827f5905 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime.goexit({}) 2月 21 08:55:27 GBITDeepSeek ollama[698750]: runtime/asm_amd64.s:1695 +0x1 fp=0xc000135fe8 sp=0xc000135fe0 pc=0x5ed0826172c1 2月 21 08:55:27 GBITDeepSeek ollama[698750]: created by net/http.(*connReader).startBackgroundRead in goroutine 35 2月 21 08:55:27 GBITDeepSeek ollama[698750]: net/http/server.go:677 +0xba 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rax 0x0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rbx 0x7440aa200000 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rcx 0x7440f20969fc 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rdx 0x6 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rdi 0xaf0c4 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rsi 0xaf0c8 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rbp 0xaf0c8 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rsp 0x7440aa1de3e0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r8 0x7440aa1de4b0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r9 0x7440aa1de480 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r10 0x8 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r11 0x246 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r12 0x6 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r13 0x16 2月 21 08:55:27 GBITDeepSeek 
ollama[698750]: r14 0x74411788cd30 2月 21 08:55:27 GBITDeepSeek ollama[698750]: r15 0x0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rip 0x7440f20969fc 2月 21 08:55:27 GBITDeepSeek ollama[698750]: rflags 0x246 2月 21 08:55:27 GBITDeepSeek ollama[698750]: cs 0x33 2月 21 08:55:27 GBITDeepSeek ollama[698750]: fs 0x0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: gs 0x0 2月 21 08:55:27 GBITDeepSeek ollama[698750]: [GIN] 2025/02/21 - 08:55:27 | 200 | 10.421581803s | 172.17.0.2 | POST "/api/chat"