[GH-ISSUE #8922] Multiple questions cannot be answered simultaneously. #52298

Closed
opened 2026-04-28 22:56:27 -05:00 by GiteaMirror · 4 comments

Originally created by @YasinFu on GitHub (Feb 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8922

What is the issue?

I am using Docker to deploy Ollama on a Linux server (private network IP 192.168.10.1). The command used is:

```docker
docker run -d --gpus=all -e OLLAMA_NUM_PARALLEL=4 -e OLLAMA_MAX_LOADED_MODELS=4 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
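To confirm the parallel settings actually reached the server, a quick sanity check (a sketch only, assuming the container name from the command above):

```shell
# Check that the env vars made it into the container
docker exec ollama printenv | grep -E 'OLLAMA_(NUM_PARALLEL|MAX_LOADED_MODELS)'

# The server also echoes its full config in the first startup log line
docker logs ollama 2>&1 | grep 'server config'
```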

Running `docker ps -a` shows:

```
CONTAINER ID   IMAGE           COMMAND               CREATED          STATUS          PORTS                                           NAMES
faaf74f55470   ollama/ollama   "/bin/ollama serve"   23 minutes ago   Up 23 minutes   0.0.0.0:11434->11434/tcp, :::11434->11434/tcp   ollama
```

Running `docker exec -it ollama ollama run deepseek-r1:70b` starts a conversation, and running the same command again opens a second one. Testing shows the two conversations have separate contexts and are independent of each other.

I have installed AnythingLLM and Cherry Studio on a Windows machine on the same local network, both connected to deepseek-r1 at 192.168.10.1:11434. Individual conversations work fine. However, when I try to hold conversations in both applications at the same time, the requests queue up; they cannot run simultaneously the way the two shell sessions do. Is this due to a missing setting, or to how these applications are designed?
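One way to take both client apps out of the picture would be to send two requests straight to the API at the same time and watch whether they stream together; a minimal, untested sketch using the host and model above:

```shell
# Fire two generate requests in parallel. With OLLAMA_NUM_PARALLEL=4 both
# should start streaming tokens at once instead of one waiting for the other.
curl -s http://192.168.10.1:11434/api/generate \
  -d '{"model": "deepseek-r1:70b", "prompt": "Why is the sky blue?"}' &
curl -s http://192.168.10.1:11434/api/generate \
  -d '{"model": "deepseek-r1:70b", "prompt": "Why is grass green?"}' &
wait
```

If these two stream concurrently, the queueing comes from the clients; if they serialize, it is happening on the server side.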

Relevant log output


OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.5.7-0-ga420a45-dirty

GiteaMirror added the bug label 2026-04-28 22:56:27 -05:00

@rick-github commented on GitHub (Feb 7, 2025):

Running simultaneous conversations with deepseek-r1:70b works for me. [Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) might shed some light.
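For this Docker deployment, something along the lines of the following should capture the relevant portion (container name taken from the report above):

```shell
docker logs --tail 200 ollama
```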


@YasinFu commented on GitHub (Feb 8, 2025):

@rick-github
(base) aitt:/data/ollama$ docker logs faaf74f55470
2025/02/07 08:27:29 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-02-07T08:27:29.032Z level=INFO source=images.go:432 msg="total blobs: 5"
time=2025-02-07T08:27:29.032Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-07T08:27:29.033Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)"
time=2025-02-07T08:27:29.033Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]"
time=2025-02-07T08:27:29.033Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-a82cec6a-430f-e6c1-1625-80c082c923cd library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB"
time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-74591cc7-de56-ecca-da11-c770f68e11f0 library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB"
time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-e417e629-3bff-ec54-f753-1c759af7a562 library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB"
[GIN] 2025/02/07 - 08:27:38 | 200 | 54.325µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:27:38 | 200 | 20.244728ms | 127.0.0.1 | POST "/api/show"
time=2025-02-07T08:27:39.140Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:27:39.403Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.6 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:27:39.404Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:27:39.405Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 46259"
time=2025-02-07T08:27:39.405Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:27:39.405Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:27:39.405Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:27:39.413Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:27:39.420Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:27:39.421Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:46259"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:27:39.656Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:27:47.929Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
[GIN] 2025/02/07 - 08:27:47 | 200 | 9.1447991s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:30:40 | 200 | 41.183µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:30:41 | 200 | 19.547918ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/02/07 - 08:30:41 | 200 | 19.971768ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:30:57 | 200 | 3.172700115s | 127.0.0.1 | POST "/api/chat"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:32:03 | 200 | 52.464382665s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:32:12 | 200 | 1m6s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:33:26 | 200 | 1m0s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:33:55 | 200 | 346.521µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:34:07 | 200 | 206.438µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:34:14 | 200 | 263.344µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:35:48 | 200 | 1m3s | 127.0.0.1 | POST "/api/chat"
time=2025-02-07T08:35:49.846Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:35:50.109Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.4 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:35:50.110Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:35:50.110Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 32841"
time=2025-02-07T08:35:50.111Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:35:50.111Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:35:50.111Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:35:50.118Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:35:50.146Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:35:50.146Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:32841"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:35:50.362Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:35:58.884Z level=INFO source=server.go:594 msg="llama runner started in 8.77 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:02 | 200 | 1m6s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:36:04.127Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:04.382Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.3 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:04.383Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:04.383Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 37759"
time=2025-02-07T08:36:04.383Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:04.383Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:04.384Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:04.391Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:04.418Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:04.418Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:37759"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:04.635Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:12.909Z level=INFO source=server.go:594 msg="llama runner started in 8.53 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:16 | 200 | 44.33575588s | 127.0.0.1 | POST "/api/chat"
time=2025-02-07T08:36:18.267Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:18.541Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.2 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:18.542Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 46163"
time=2025-02-07T08:36:18.543Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:18.551Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:18.578Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:18.578Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:46163"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:18.794Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
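# note: this mlock failure comes from the container's RLIMIT_MEMLOCK being lower than
# the buffer Ollama tried to pin; if memory locking is actually wanted, something like
# `docker run --ulimit memlock=-1:-1 ...` should lift the cap (suggested from the
# warning text above, not verified on this host). The warning does not stop the load.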
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
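# note: this warning is expected here: n_ctx (8192) is the per-request num_ctx
# (default 2048) multiplied by OLLAMA_NUM_PARALLEL=4, so each of the 4 parallel
# slots gets a 2048-token context; only the 131072-token training context goes unused.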
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:27.066Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:34 | 200 | 35.685958166s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:36:36.194Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
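# note: a runner for this same model was already serving requests (started 08:36:18),
# yet the scheduler is loading it again; in Ollama that typically happens when a new
# request arrives with different options (for example another num_ctx), which forces
# a reload and makes clients appear to queue. A plausible reading, not a confirmed one.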
time=2025-02-07T08:36:36.453Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:36.454Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 36625"
time=2025-02-07T08:36:36.455Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:36.463Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:36.490Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:36.490Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:36625"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:36.706Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:37.458Z level=WARN source=server.go:562 msg="client connection closed before server finished loading, aborting load"
time=2025-02-07T08:36:37.458Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2025/02/07 - 08:36:37 | 499 | 19.03880517s | 127.0.0.1 | POST "/api/chat"
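# note: HTTP 499 means the client closed the connection before a response was ready;
# the load above was aborted because the caller gave up while the ~40 GiB model was
# still loading, so the next request has to start the load from scratch.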
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
time=2025-02-07T08:36:41.141Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:41.872Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:41.873Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:41.873Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 34327"
time=2025-02-07T08:36:41.874Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:41.874Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:41.874Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:41.882Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:41.911Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:41.911Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:34327"
time=2025-02-07T08:36:42.125Z level=WARN source=server.go:562 msg="client connection closed before server finished loading, aborting load"
time=2025-02-07T08:36:42.125Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2025/02/07 - 08:36:42 | 499 | 1.835103705s | 127.0.0.1 | POST "/api/chat"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:42.458Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000016771 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
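# note: after an aborted load the scheduler waits for the old runner's VRAM to be
# released before starting the next one; this "didn't recover within timeout"
# warning is usually transient cleanup noise rather than a real leak.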
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-02-07T08:36:42.700Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.242472822 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:42.940Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.48187986 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
[GIN] 2025/02/07 - 08:36:44 | 200 | 64.266µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:36:44 | 200 | 20.797396ms | 127.0.0.1 | POST "/api/show"
time=2025-02-07T08:36:45.658Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:46.391Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:46.393Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 35517"
time=2025-02-07T08:36:46.394Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:46.402Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:46.428Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:46.428Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35517"
time=2025-02-07T08:36:46.646Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23719 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:47.458Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000550834 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
time=2025-02-07T08:36:47.701Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.243716651 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
time=2025-02-07T08:36:47.947Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.489360843 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:55.170Z level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
[GIN] 2025/02/07 - 08:36:55 | 200 | 10.221404863s | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:37:59 | 200 | 40.941639174s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:38:18 | 200 | 1m11s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:39:38 | 200 | 1m24s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:43:21 | 200 | 29.377711305s | 192.168.41.23 | POST "/v1/chat/completions"
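# note: working backwards from the durations, the three /api/chat requests above ran
# roughly 08:37:07-08:38:18, 08:37:18-08:37:59 and 08:38:14-08:39:38; the windows
# overlap, so the server was in fact handling multiple chats concurrently here.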
time=2025-02-07T08:51:33.868Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:51:34.149Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:51:34.150Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 40121"
time=2025-02-07T08:51:34.151Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:51:34.159Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:51:34.186Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:51:34.186Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40121"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:51:34.403Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:51:42.927Z level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
[GIN] 2025/02/07 - 08:51:47 | 200 | 14.073972572s | 192.168.41.23 | POST "/v1/chat/completions"
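# note: most of this 14s /v1/chat/completions is reload latency, not generation: the
# request triggered the scheduler at 08:51:33 and the runner only became ready at
# 08:51:42 (8.78s), leaving roughly 4-5s for the actual completion.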
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ċ'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:51:48 | 200 | 1.004460363s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-07T08:52:27.850Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:52:28.108Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.9 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:52:28.109Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:52:28.109Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 40541"
time=2025-02-07T08:52:28.110Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:52:28.110Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:52:28.110Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:52:28.117Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:52:28.142Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:52:28.142Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40541"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:52:28.361Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:52:36.633Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:52:54 | 200 | 27.931728098s | 192.168.41.23 | POST "/api/chat"
[GIN] 2025/02/07 - 08:53:12 | 200 | 15.425189647s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:53:13.896Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:53:14.156Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:53:14.158Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:53:14.158Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 33617"
time=2025-02-07T08:53:14.158Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:53:14.158Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:53:14.159Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:53:14.166Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:53:14.190Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:53:14.190Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:33617"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:53:14.409Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:53:22.682Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:54:41 | 200 | 1m31s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-07T08:54:43.109Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:54:43.365Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:54:43.366Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 36121"
time=2025-02-07T08:54:43.367Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:54:43.375Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:54:43.402Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:54:43.402Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:36121"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:54:43.618Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:54:51.891Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:55:33 | 200 | 2m10s | 192.168.41.23 | POST "/api/chat"
[GIN] 2025/02/07 - 08:57:13 | 200 | 36.074µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:57:13 | 200 | 19.514387ms | 127.0.0.1 | POST "/api/show"
time=2025-02-07T08:57:15.398Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:57:15.660Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:57:15.661Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:57:15.661Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 40465"
time=2025-02-07T08:57:15.662Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:57:15.662Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:57:15.662Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:57:15.669Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:57:15.694Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:57:15.694Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40465"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:57:15.913Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:57:24.185Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
[GIN] 2025/02/07 - 08:57:24 | 200 | 10.189888235s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:57:29 | 200 | 29.604µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:57:29 | 200 | 19.497976ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/02/07 - 08:57:29 | 200 | 20.025482ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:57:41 | 200 | 4.279929781s | 127.0.0.1 | POST "/api/chat"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:58:10 | 200 | 11.106219521s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:58:16 | 200 | 2.310364117s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:58:21 | 200 | 2.827767804s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 09:03:13 | 200 | 2.454667383s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 09:04:35 | 200 | 44.658629137s | 192.168.41.23 | POST "/v1/chat/completions"
[GIN] 2025/02/07 - 09:06:21 | 200 | 27.692934479s | 192.168.41.23 | POST "/v1/chat/completions"
[GIN] 2025/02/07 - 09:07:12 | 200 | 48.047µs | 127.0.0.1 | GET "/api/version"
time=2025-02-08T00:49:04.587Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:49:04.859Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.6 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:49:04.860Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:49:04.862Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 44189"
time=2025-02-08T00:49:04.864Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:49:04.864Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:49:04.864Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:49:04.896Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:49:04.924Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:49:04.924Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:44189"
time=2025-02-08T00:49:05.115Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:49:33.935Z level=INFO source=server.go:594 msg="llama runner started in 29.07 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/08 - 00:49:59 | 200 | 55.330563473s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-08T00:53:30.858Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:53:31.134Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.6 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:53:31.135Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:53:31.135Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 34895"
time=2025-02-08T00:53:31.136Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:53:31.136Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:53:31.136Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:53:31.144Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:53:31.170Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:53:31.170Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:34895"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-08T00:53:31.387Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
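(Note on the warning above: this runner was started with `--mlock`, as shown in the "starting llama server" command, but the container's RLIMIT_MEMLOCK is too low to pin the model buffers, so the lock fails and the load continues unlocked. Inside Docker the limit has to be raised on the container itself rather than with `ulimit` on the host. A minimal sketch, assuming the same `docker run` deployment, with "..." standing for the existing, unchanged flags:

```shell
# Raise the container's locked-memory limit so mlock can succeed.
# "..." stands for the existing flags, which are unchanged.
docker run --ulimit memlock=-1:-1 ... ollama/ollama
```
)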
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:53:39.409Z level=INFO source=server.go:594 msg="llama runner started in 8.27 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/08 - 00:54:40 | 200 | 1m11s | 192.168.41.23 | POST "/api/chat"
time=2025-02-08T00:54:42.249Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:54:42.513Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.5 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:54:42.514Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 37793"
time=2025-02-08T00:54:42.514Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:54:42.522Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:54:42.550Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:54:42.550Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:37793"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-08T00:54:42.766Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:54:50.787Z level=INFO source=server.go:594 msg="llama runner started in 8.27 seconds"
[GIN] 2025/02/08 - 00:55:55 | 200 | 2m24s | 192.168.41.23 | POST "/v1/chat/completions"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors

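To check whether two chat requests are actually served in parallel by this server, it can help to fire them at the API directly, bypassing the GUI clients. A minimal sketch, where the server address and model tag are assumptions for this deployment (adjust as needed):

```shell
# Launch two /api/chat requests in the background; with OLLAMA_NUM_PARALLEL
# in effect they should both finish in roughly the time of a single request,
# whereas queued requests finish one after the other.
for q in "Why is the sky blue?" "What is 2+2?"; do
  curl -s http://192.168.10.1:11434/api/chat \
    -H 'Content-Type: application/json' \
    -d "{\"model\":\"deepseek-r1:70b\",\"messages\":[{\"role\":\"user\",\"content\":\"$q\"}],\"stream\":false}" &
done
wait
```

If both complete together, the server is handling requests concurrently and any queueing is coming from the client applications.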
<!-- gh-comment-id:2644395732 --> @YasinFu commented on GitHub (Feb 8, 2025): @rick-github (base) aitt:/data/ollama$ docker logs faaf74f55470 2025/02/07 08:27:29 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:4 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-02-07T08:27:29.032Z level=INFO source=images.go:432 msg="total blobs: 5" time=2025-02-07T08:27:29.032Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached. [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production. - using env: export GIN_MODE=release - using code: gin.SetMode(gin.ReleaseMode) [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers) [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers) [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers) [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers) [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers) [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers) [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers) [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers) [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers) [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers) [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers) [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers) [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers) [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers) [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers) [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers) [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers) [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers) [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) 
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers) [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers) [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers) [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers) time=2025-02-07T08:27:29.033Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)" time=2025-02-07T08:27:29.033Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]" time=2025-02-07T08:27:29.033Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-a82cec6a-430f-e6c1-1625-80c082c923cd library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB" time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-74591cc7-de56-ecca-da11-c770f68e11f0 library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB" time=2025-02-07T08:27:29.356Z level=INFO source=types.go:131 msg="inference compute" id=GPU-e417e629-3bff-ec54-f753-1c759af7a562 library=cuda variant=v11 compute=8.6 driver=11.6 name="NVIDIA GeForce RTX 3090" total="23.7 GiB" available="23.5 GiB" [GIN] 2025/02/07 - 08:27:38 | 200 | 54.325?s | 127.0.0.1 | HEAD "/" [GIN] 2025/02/07 - 08:27:38 | 200 | 20.244728ms | 127.0.0.1 | POST "/api/show" time=2025-02-07T08:27:39.140Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB" time=2025-02-07T08:27:39.403Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.6 GiB" free_swap="6.3 GiB" time=2025-02-07T08:27:39.404Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB" time=2025-02-07T08:27:39.405Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 46259" time=2025-02-07T08:27:39.405Z level=INFO source=sched.go:449 msg="loaded runners" count=1 time=2025-02-07T08:27:39.405Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding" time=2025-02-07T08:27:39.405Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error" time=2025-02-07T08:27:39.413Z level=INFO source=runner.go:936 msg="starting go runner" ggml_cuda_init: 
GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 3 CUDA devices: Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes time=2025-02-07T08:27:39.420Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24 time=2025-02-07T08:27:39.421Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:46259" llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free time=2025-02-07T08:27:39.656Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 15 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["? ?", "? ???", "?? ??", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q4_K: 441 tensors llama_model_loader: - type q5_K: 40 tensors llama_model_loader: - type q6_K: 81 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 8192 llm_load_print_meta: n_layer = 80 llm_load_print_meta: n_head = 64 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 8 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 28672 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 70B llm_load_print_meta: model ftype = Q4_K - Medium llm_load_print_meta: model params = 70.55 B llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B llm_load_print_meta: BOS token = 128000 '<|begin?of?sentence|>' llm_load_print_meta: EOS token = 128001 '<|end?of?sentence|>' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: PAD token = 128001 '<|end?of?sentence|>' llm_load_print_meta: LF token = 128 '?' 
llm_load_print_meta: EOG token = 128001 '<|end?of?sentence|>' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llm_load_tensors: offloading 80 repeating layers to GPU llm_load_tensors: offloading output layer to GPU llm_load_tensors: offloaded 81/81 layers to GPU llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB llama_new_context_with_model: n_seq_max = 4 llama_new_context_with_model: n_ctx = 8192 llama_new_context_with_model: n_ctx_per_seq = 2048 llama_new_context_with_model: n_batch = 2048 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1 llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB llama_new_context_with_model: pipeline parallelism enabled (n_copies=4) llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB llama_new_context_with_model: graph nodes = 2566 llama_new_context_with_model: graph splits = 4 time=2025-02-07T08:27:47.929Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds" [GIN] 2025/02/07 - 08:27:47 | 200 | 9.1447991s | 127.0.0.1 | POST "/api/generate" [GIN] 2025/02/07 - 08:30:40 | 200 | 41.183?s | 127.0.0.1 | HEAD "/" [GIN] 2025/02/07 - 08:30:41 | 200 | 19.547918ms | 127.0.0.1 | POST "/api/show" [GIN] 2025/02/07 - 08:30:41 | 200 | 19.971768ms | 127.0.0.1 | POST "/api/generate" [GIN] 2025/02/07 - 08:30:57 | 200 | 3.172700115s | 127.0.0.1 | POST "/api/chat" llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:32:03 | 200 | 52.464382665s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:32:12 | 200 | 1m6s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:33:26 | 200 | 1m0s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:33:55 | 200 | 346.521µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:34:07 | 200 | 206.438µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:34:14 | 200 | 263.344µs | 192.168.41.23 | GET "/api/tags"
[GIN] 2025/02/07 - 08:35:48 | 200 | 1m3s | 127.0.0.1 | POST "/api/chat"
time=2025-02-07T08:35:49.846Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:35:50.109Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.4 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:35:50.110Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:35:50.110Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 32841"
time=2025-02-07T08:35:50.111Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:35:50.111Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:35:50.111Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:35:50.118Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:35:50.146Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:35:50.146Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:32841"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:35:50.362Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
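The mlock failure just above shows the runner was started with `--mlock` while the container's locked-memory limit (RLIMIT_MEMLOCK) is far too small to pin ~40 GiB of weights. For a Docker deployment the limit has to be raised on the container itself, not inside it. A minimal sketch, assuming the standard Docker `--ulimit` flag (the other flags are illustrative, not the full original command):

```shell
# Set both the soft and hard locked-memory limits to unlimited (-1:-1)
# so that mlock can pin the model weights in RAM.
docker run -d --gpus=all --ulimit memlock=-1:-1 -p 11434:11434 --name ollama ollama/ollama
```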
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:35:58.884Z level=INFO source=server.go:594 msg="llama runner started in 8.77 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:02 | 200 | 1m6s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:36:04.127Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:04.382Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.3 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:04.383Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:04.383Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 37759"
time=2025-02-07T08:36:04.383Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:04.383Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:04.384Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:04.391Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:04.418Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:04.418Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:37759"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:04.635Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:12.909Z level=INFO source=server.go:594 msg="llama runner started in 8.53 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:16 | 200 | 44.33575588s | 127.0.0.1 | POST "/api/chat"
time=2025-02-07T08:36:18.267Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:18.541Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.2 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:18.542Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 46163"
time=2025-02-07T08:36:18.543Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:18.543Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:18.551Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:18.578Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:18.578Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:46163"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:18.794Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
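Note that the runner command line alternates between loads: the 08:35:50 and 08:36:18 loads include `--mlock`, while the 08:36:04 load does not. One plausible reading is that the two clients send different model options (for example `use_mlock`), so the scheduler treats the resident runner as incompatible and reloads the 70B model for every request, which would explain why the clients queue instead of sharing the parallel=4 slots. A hypothetical request showing where such an option would come from (`use_mlock` is a standard Ollama model option; the payload is illustrative):

```shell
# If one client sends "use_mlock": true and the other omits it, each request
# spawns a fresh runner with different flags instead of reusing the loaded model.
curl http://192.168.10.1:11434/api/chat -d '{
  "model": "deepseek-r1:70b",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {"use_mlock": true}
}'
```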
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:27.066Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:36:34 | 200 | 35.685958166s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:36:36.194Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:36.453Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:36.454Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 36625"
time=2025-02-07T08:36:36.455Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:36.455Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:36.463Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:36.490Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:36.490Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:36625"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:36.706Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:37.458Z level=WARN source=server.go:562 msg="client connection closed before server finished loading, aborting load"
time=2025-02-07T08:36:37.458Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2025/02/07 - 08:36:37 | 499 | 19.03880517s | 127.0.0.1 | POST "/api/chat"
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
time=2025-02-07T08:36:41.141Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:41.872Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:41.873Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:41.873Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 34327"
time=2025-02-07T08:36:41.874Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:41.874Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:41.874Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:41.882Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:41.911Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:36:41.911Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:34327"
time=2025-02-07T08:36:42.125Z level=WARN source=server.go:562 msg="client connection closed before server finished loading, aborting load"
time=2025-02-07T08:36:42.125Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2025/02/07 - 08:36:42 | 499 | 1.835103705s | 127.0.0.1 | POST "/api/chat"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:36:42.458Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000016771 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2025-02-07T08:36:42.700Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.242472822 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:42.940Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.48187986 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
[GIN] 2025/02/07 - 08:36:44 | 200 | 64.266µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:36:44 | 200 | 20.797396ms | 127.0.0.1 | POST "/api/show"
time=2025-02-07T08:36:45.658Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:36:46.391Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:36:46.393Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 35517"
time=2025-02-07T08:36:46.394Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:36:46.394Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:36:46.402Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:36:46.428Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
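The 499 status codes above are the clients giving up: each reload takes roughly 8.5 s plus queueing, and the client closes the connection before the runner finishes ("client connection closed before server finished loading, aborting load"). If every client sends identical options, the model stays resident between requests; it can also be pinned explicitly with Ollama's documented preload call (model name taken from this log):

```shell
# Preload the model and keep it in VRAM indefinitely (keep_alive: -1) so that
# later requests from any client hit the already-loaded runner.
curl http://192.168.10.1:11434/api/generate -d '{"model": "deepseek-r1:70b", "keep_alive": -1}'
```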
time=2025-02-07T08:36:46.428Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:35517" time=2025-02-07T08:36:46.646Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model" llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23719 MiB free llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 15 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["? ?", "? ???", "?? ??", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... 
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
time=2025-02-07T08:36:47.458Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.000550834 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
time=2025-02-07T08:36:47.701Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.243716651 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
time=2025-02-07T08:36:47.947Z level=WARN source=sched.go:646 msg="gpu VRAM usage didn't recover within timeout" seconds=5.489360843 model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:36:55.170Z level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
[GIN] 2025/02/07 - 08:36:55 | 200 | 10.221404863s | 127.0.0.1 | POST "/api/generate"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:37:59 | 200 | 40.941639174s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:38:18 | 200 | 1m11s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:39:38 | 200 | 1m24s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:43:21 | 200 | 29.377711305s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-07T08:51:33.868Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:51:34.149Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="89.0 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:51:34.150Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 40121"
time=2025-02-07T08:51:34.151Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:51:34.151Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:51:34.159Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:51:34.186Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:51:34.186Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40121"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:51:34.403Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from
/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:51:42.927Z level=INFO source=server.go:594 msg="llama runner started in 8.78 seconds"
[GIN] 2025/02/07 - 08:51:47 | 200 | 14.073972572s | 192.168.41.23 | POST "/v1/chat/completions"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:51:48 | 200 | 1.004460363s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-07T08:52:27.850Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:52:28.108Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.9 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:52:28.109Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:52:28.109Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 40541"
time=2025-02-07T08:52:28.110Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:52:28.110Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:52:28.110Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:52:28.117Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:52:28.142Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:52:28.142Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40541"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:52:28.361Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:52:36.633Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:52:54 | 200 | 27.931728098s | 192.168.41.23 | POST "/api/chat"
[GIN] 2025/02/07 - 08:53:12 | 200 | 15.425189647s | 192.168.41.23 | POST "/api/chat"
time=2025-02-07T08:53:13.896Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:53:14.156Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:53:14.158Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:53:14.158Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 33617"
time=2025-02-07T08:53:14.158Z level=INFO source=sched.go:449 msg="loaded runners"
count=1
time=2025-02-07T08:53:14.158Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:53:14.159Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:53:14.166Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:53:14.190Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:53:14.190Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:33617"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:53:14.409Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:53:22.682Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:54:41 | 200 | 1m31s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-07T08:54:43.109Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:54:43.365Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:54:43.366Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 36121"
time=2025-02-07T08:54:43.367Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:54:43.367Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:54:43.375Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:54:43.402Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:54:43.402Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:36121"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:54:43.618Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:54:51.891Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:55:33 | 200 | 2m10s | 192.168.41.23 | POST "/api/chat"
[GIN] 2025/02/07 - 08:57:13 | 200 | 36.074µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:57:13 | 200 | 19.514387ms | 127.0.0.1 | POST "/api/show"
time=2025-02-07T08:57:15.398Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-07T08:57:15.660Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.8 GiB" free_swap="6.3 GiB"
time=2025-02-07T08:57:15.661Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-07T08:57:15.661Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 40465"
time=2025-02-07T08:57:15.662Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-07T08:57:15.662Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-07T08:57:15.662Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-07T08:57:15.669Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-07T08:57:15.694Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-07T08:57:15.694Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:40465"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-07T08:57:15.913Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-07T08:57:24.185Z level=INFO source=server.go:594 msg="llama runner started in 8.52 seconds"
[GIN] 2025/02/07 - 08:57:24 | 200 | 10.189888235s | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:57:29 | 200 | 29.604µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/02/07 - 08:57:29 | 200 | 19.497976ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/02/07 - 08:57:29 | 200 | 20.025482ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/02/07 - 08:57:41 | 200 | 4.279929781s | 127.0.0.1 | POST "/api/chat"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/07 - 08:58:10 | 200 | 11.106219521s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:58:16 | 200 | 2.310364117s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 08:58:21 | 200 | 2.827767804s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 09:03:13 | 200 | 2.454667383s | 127.0.0.1 | POST "/api/chat"
[GIN] 2025/02/07 - 09:04:35 | 200 | 44.658629137s | 192.168.41.23 | POST "/v1/chat/completions"
[GIN] 2025/02/07 - 09:06:21 | 200 | 27.692934479s | 192.168.41.23 | POST "/v1/chat/completions"
[GIN] 2025/02/07 - 09:07:12 | 200 | 48.047µs | 127.0.0.1 | GET "/api/version"
time=2025-02-08T00:49:04.587Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:49:04.859Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.6 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:49:04.860Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:49:04.862Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 44189"
time=2025-02-08T00:49:04.864Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:49:04.864Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:49:04.864Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:49:04.896Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:49:04.924Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:49:04.924Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:44189"
time=2025-02-08T00:49:05.115Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:49:33.935Z level=INFO source=server.go:594 msg="llama runner started in 29.07 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/08 - 00:49:59 | 200 | 55.330563473s | 192.168.41.23 | POST "/v1/chat/completions"
time=2025-02-08T00:53:30.858Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:53:31.134Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.6 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:53:31.135Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:53:31.135Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --mlock --parallel 4 --tensor-split 27,27,27 --port 34895"
time=2025-02-08T00:53:31.136Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:53:31.136Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:53:31.136Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:53:31.144Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:53:31.170Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:53:31.170Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:34895"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-08T00:53:31.387Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
warning: failed to mlock 1460781056-byte buffer (after previously locking 0 bytes): Cannot allocate memory
Try increasing RLIMIT_MEMLOCK ('ulimit -l' as root).
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:53:39.409Z level=INFO source=server.go:594 msg="llama runner started in 8.27 seconds"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2025/02/08 - 00:54:40 | 200 | 1m11s | 192.168.41.23 | POST "/api/chat"
time=2025-02-08T00:54:42.249Z level=INFO source=sched.go:730 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 library=cuda parallel=4 required="47.8 GiB"
time=2025-02-08T00:54:42.513Z level=INFO source=server.go:104 msg="system memory" total="125.6 GiB" free="88.5 GiB" free_swap="6.3 GiB"
time=2025-02-08T00:54:42.514Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=27,27,27 memory.available="[23.5 GiB 23.5 GiB 23.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="47.8 GiB" memory.required.partial="47.8 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[16.6 GiB 15.5 GiB 15.7 GiB]" memory.weights.total="40.7 GiB" memory.weights.repeating="39.9 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.1 GiB"
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v11_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 --ctx-size 8192 --batch-size 512 --n-gpu-layers 81 --threads 24 --parallel 4 --tensor-split 27,27,27 --port 37793"
time=2025-02-08T00:54:42.514Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-02-08T00:54:42.514Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2025-02-08T00:54:42.522Z level=INFO source=runner.go:936 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
time=2025-02-08T00:54:42.550Z level=INFO source=runner.go:937 msg=system info="CUDA : USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=24
time=2025-02-08T00:54:42.550Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:37793"
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 23922 MiB free
time=2025-02-08T00:54:42.766Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA GeForce RTX 3090) - 23922 MiB free
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B
llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama
llama_model_loader: - kv 4: general.size_label str = 70B
llama_model_loader: - kv 5: llama.block_count u32 = 80
llama_model_loader: - kv 6: llama.context_length u32 = 131072
llama_model_loader: - kv 7: llama.embedding_length u32 = 8192
llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 9: llama.attention.head_count u32 = 64
llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 13: llama.attention.key_length u32 = 128
llama_model_loader: - kv 14: llama.attention.value_length u32 = 128
llama_model_loader: - kv 15: general.file_type u32 = 15
llama_model_loader: - kv 16: llama.vocab_size u32 = 128256
llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_K: 441 tensors
llama_model_loader: - type q5_K: 40 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 39.59 GiB (4.82 BPW)
llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B
llm_load_print_meta: BOS token = 128000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: PAD token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128001 '<|end▁of▁sentence|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 563.62 MiB
llm_load_tensors: CUDA0 model buffer size = 13303.88 MiB
llm_load_tensors: CUDA1 model buffer size = 12951.00 MiB
llm_load_tensors: CUDA2 model buffer size = 13724.61 MiB
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 80, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 864.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 832.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 1216.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 1216.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 80.02 MiB
llama_new_context_with_model: graph nodes = 2566
llama_new_context_with_model: graph splits = 4
time=2025-02-08T00:54:50.787Z level=INFO source=server.go:594 msg="llama runner started in 8.27 seconds"
[GIN] 2025/02/08 - 00:55:55 | 200 | 2m24s | 192.168.41.23 | POST "/v1/chat/completions"
llama_model_loader: loaded meta data with 30 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-4cd576d9aa16961244012223abf01445567b061f1814b57dfef699e4cf8df339 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = DeepSeek R1 Distill Llama 70B llama_model_loader: - kv 3: general.basename str = DeepSeek-R1-Distill-Llama llama_model_loader: - kv 4: general.size_label str = 70B llama_model_loader: - kv 5: llama.block_count u32 = 80 llama_model_loader: - kv 6: llama.context_length u32 = 131072 llama_model_loader: - kv 7: llama.embedding_length u32 = 8192 llama_model_loader: - kv 8: llama.feed_forward_length u32 = 28672 llama_model_loader: - kv 9: llama.attention.head_count u32 = 64 llama_model_loader: - kv 10: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 12: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 13: llama.attention.key_length u32 = 128 llama_model_loader: - kv 14: llama.attention.value_length u32 = 128 llama_model_loader: - kv 15: general.file_type u32 = 15 llama_model_loader: - kv 16: llama.vocab_size u32 = 128256 llama_model_loader: - kv 17: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 18: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 19: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 20: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 21: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 22: tokenizer.ggml.merges arr[str,280147] = ["? ?", "? ???", "?? ??", "... llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 128001 llama_model_loader: - kv 25: tokenizer.ggml.padding_token_id u32 = 128001 llama_model_loader: - kv 26: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 27: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 28: tokenizer.chat_template str = {% if not add_generation_prompt is de... llama_model_loader: - kv 29: general.quantization_version u32 = 2 llama_model_loader: - type f32: 162 tensors llama_model_loader: - type q4_K: 441 tensors llama_model_loader: - type q5_K: 40 tensors llama_model_loader: - type q6_K: 81 tensors llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 70.55 B llm_load_print_meta: model size = 39.59 GiB (4.82 BPW) llm_load_print_meta: general.name = DeepSeek R1 Distill Llama 70B llm_load_print_meta: BOS token = 128000 '<|begin?of?sentence|>' llm_load_print_meta: EOS token = 128001 '<|end?of?sentence|>' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: PAD token = 128001 '<|end?of?sentence|>' llm_load_print_meta: LF token = 128 '?' 
llm_load_print_meta: EOG token = 128001 '<|end?of?sentence|>' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab only - skipping tensors
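
Note that the second `llama_model_loader` pass above ends with `llama_model_load: vocab only - skipping tensors`, i.e. the server re-read only the tokenizer at that point rather than performing a full reload. One way to check whether the full model is being unloaded and reloaded between requests is to watch the loaded runners from the host; a minimal check, assuming the container name `ollama` from the original report:

```shell
# List currently loaded models. If deepseek-r1:70b disappears and reappears
# between the two clients' requests, the model is being evicted and reloaded
# rather than served in parallel.
docker exec -it ollama ollama ps
```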
Author
Owner

@rick-github commented on GitHub (Feb 8, 2025):

Your clients are sending different API options, causing the model to be evicted and immediately reloaded. Specifically, one client is sending `"use_mlock":true` while the other is not. Either configure the client that sends `use_mlock` to stop sending it, or configure the other client to send it as well.

Coincidentally, I was working on a [PR](https://github.com/ollama/ollama/pull/8935) to fix this today; if it gets merged, this problem will go away.
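
Until that fix lands, a workaround following the diagnosis above is to make every client send identical options, so the scheduler reuses the already-loaded runner. A minimal sketch against the Ollama REST API (the prompt text is only a placeholder); pinning `use_mlock` to the same value on both sides avoids the evict/reload cycle:

```shell
# Send the same options from every client; here use_mlock is pinned to true.
# Per the comment above, a request that omits use_mlock while another sets it
# causes the model to be evicted and reloaded instead of shared.
curl http://192.168.10.1:11434/api/generate -d '{
  "model": "deepseek-r1:70b",
  "prompt": "hello",
  "options": { "use_mlock": true }
}'
```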

Author
Owner

@YasinFu commented on GitHub (Feb 8, 2025):

Thank you for the response and fix. Really appreciate it.
