[GH-ISSUE #6926] Unable to use multiple GPUs #66430

Closed
opened 2026-05-04 05:07:13 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @bluebirdlinlin on GitHub (Sep 24, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6926

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

My GPU setup: 6 × NVIDIA A10. But when I run Ollama (qwen:32b), it goes OOM, and I can see that only one GPU is being used; the others sit idle. Here are the logs. Please help me with this issue, thanks.
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2024-09-24T03:32:27.691Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-24T03:32:29.148Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-24T03:32:29.400Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140002756288512" timestamp=1727148752
time=2024-09-24T03:32:32.416Z level=INFO source=server.go:630 msg="llama runner started in 4.98 seconds"
[GIN] 2024/09/24 - 03:32:32 | 200 | 6.624226742s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/09/24 - 03:32:43 | 200 | 1.193765361s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/09/24 - 03:32:53 | 200 | 947.380979ms | 127.0.0.1 | POST "/api/chat"
CUDA error: an illegal memory access was encountered
current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416
cudaStreamSynchronize(cuda_ctx->stream())
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error
free(): corrupted unsorted chunks

OS

Linux, Docker

GPU

Nvidia

CPU

No response

Ollama version

latest
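Since `ggml_cuda_init` in the log above reports only one CUDA device while six A10s are installed, a useful first check is how many GPUs the container itself can see. A diagnostic sketch, assuming the container is named `ollama` (names and flags here are illustrative, adjust to your setup):

```shell
# List the GPUs visible inside the Ollama container.
docker exec ollama nvidia-smi -L

# If only one GPU appears, the container was likely started without
# access to all devices; recreating it with all GPUs exposed may help:
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```

If `nvidia-smi -L` inside the container lists all six devices, the restriction is happening at the Ollama/runner level rather than at the Docker level.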

GiteaMirror added the bug, nvidia, linux labels 2026-05-04 05:07:14 -05:00

@rick-github commented on GitHub (Sep 24, 2024):

Please post the full log, there is information about devices and resources earlier in the log that may be useful.
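For reference, the full server log can be captured from the Docker container like this (a sketch, assuming the container is named `ollama`):

```shell
# Dump the complete container log, including stderr,
# and save a copy to attach to the issue.
docker logs ollama > ollama.log 2>&1
```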


@bluebirdlinlin commented on GitHub (Sep 24, 2024):

Here are the full logs, captured with the command "docker logs ollama".

2024/09/23 11:14:40 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-23T11:14:40.309Z level=INFO source=images.go:753 msg="total blobs: 9"
time=2024-09-23T11:14:40.316Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-23T11:14:40.317Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.9)"
time=2024-09-23T11:14:40.318Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1195458266/runners
time=2024-09-23T11:14:51.240Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2]"
time=2024-09-23T11:14:51.240Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6ac68bc8-cfaa-e488-7a4d-c0e83003f725 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="20.2 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-277417f5-4a9a-c14c-352a-f564eb23262b library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8e537520-8db9-abd8-2aa1-b078d7a394bb library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-2140965a-169d-9073-9668-6ef8c8cbf034 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-67d912c0-8efd-e31a-f9bc-8c4834119698 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ad1d8f91-03c7-e6a8-9977-71e6b18227d8 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.3 GiB"
[GIN] 2024/09/23 - 11:17:23 | 200 | 420.258µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/23 - 11:17:23 | 200 | 372.984µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/09/23 - 11:17:47 | 200 | 38.455µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/23 - 11:17:47 | 404 | 591.168µs | 127.0.0.1 | POST "/api/show"
time=2024-09-23T11:17:51.297Z level=INFO source=download.go:175 msg="downloading 87f26aae09c7 in 16 281 MB part(s)"
time=2024-09-23T11:31:21.652Z level=INFO source=download.go:370 msg="87f26aae09c7 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-09-23T11:32:08.259Z level=INFO source=download.go:175 msg="downloading c0312cf22ef0 in 1 483 B part(s)"
[GIN] 2024/09/23 - 11:32:24 | 200 | 14m36s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/09/23 - 11:32:24 | 200 | 27.073395ms | 127.0.0.1 | POST "/api/show"
time=2024-09-23T11:32:25.791Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 gpu=GPU-277417f5-4a9a-c14c-352a-f564eb23262b parallel=4 available=23436918784 required="9.1 GiB"
time=2024-09-23T11:32:25.792Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[21.8 GiB]" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[9.1 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="791.6 MiB"
time=2024-09-23T11:32:25.819Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1195458266/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 39511"
time=2024-09-23T11:32:25.820Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-23T11:32:25.820Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-23T11:32:25.820Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="139820253798400" timestamp=1727091146
INFO [main] system info | n_threads=88 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139820253798400" timestamp=1727091146 total_threads=176
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="175" port="39511" tid="139820253798400" timestamp=1727091146
llama_model_loader: loaded meta data with 20 key-value pairs and 387 tensors from /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-23T11:32:26.574Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-23T11:32:28.030Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-23T11:32:28.282Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="139820253798400" timestamp=1727091153
time=2024-09-23T11:32:33.556Z level=INFO source=server.go:630 msg="llama runner started in 7.74 seconds"
[GIN] 2024/09/23 - 11:32:33 | 200 | 9.134411817s | 127.0.0.1 | POST "/api/chat"
time=2024-09-24T03:32:27.411Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 gpu=GPU-277417f5-4a9a-c14c-352a-f564eb23262b parallel=4 available=23436918784 required="9.1 GiB"
time=2024-09-24T03:32:27.412Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[21.8 GiB]" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[9.1 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="791.6 MiB"
time=2024-09-24T03:32:27.438Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1195458266/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 33174"
time=2024-09-24T03:32:27.438Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-24T03:32:27.439Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-24T03:32:27.439Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="140002756288512" timestamp=1727148747
INFO [main] system info | n_threads=88 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140002756288512" timestamp=1727148747 total_threads=176
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="175" port="33174" tid="140002756288512" timestamp=1727148747
llama_model_loader: loaded meta data with 20 key-value pairs and 387 tensors from /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
time=2024-09-24T03:32:27.691Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-24T03:32:29.148Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-24T03:32:29.400Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140002756288512" timestamp=1727148752
time=2024-09-24T03:32:32.416Z level=INFO source=server.go:630 msg="llama runner started in 4.98 seconds"
[GIN] 2024/09/24 - 03:32:32 | 200 | 6.624226742s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/09/24 - 03:32:43 | 200 | 1.193765361s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/09/24 - 03:32:53 | 200 | 947.380979ms | 127.0.0.1 | POST "/api/chat"
CUDA error: an illegal memory access was encountered
current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416
cudaStreamSynchronize(cuda_ctx->stream())
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error
free(): corrupted unsorted chunks
[GIN] 2024/09/24 - 03:33:22 | 200 | 19.601848568s | 127.0.0.1 | POST "/api/chat"
time=2024-09-24T03:38:27.364Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.23048076 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:38:28.539Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=6.405489463 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:38:29.792Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=7.658436873 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
2024/09/24 03:41:30 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-24T03:41:30.132Z level=INFO source=images.go:753 msg="total blobs: 11"
time=2024-09-24T03:41:30.133Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-24T03:41:30.133Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.9)"
time=2024-09-24T03:41:30.134Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2193641396/runners
time=2024-09-24T03:41:45.131Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx]"
time=2024-09-24T03:41:45.132Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-24T03:41:46.474Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6ac68bc8-cfaa-e488-7a4d-c0e83003f725 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.7 GiB"
time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-277417f5-4a9a-c14c-352a-f564eb23262b library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8e537520-8db9-abd8-2aa1-b078d7a394bb library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-2140965a-169d-9073-9668-6ef8c8cbf034 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-67d912c0-8efd-e31a-f9bc-8c4834119698 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ad1d8f91-03c7-e6a8-9977-71e6b18227d8 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.3 GiB"
[GIN] 2024/09/24 - 03:41:46 | 200 | 122.101µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/24 - 03:41:46 | 200 | 3.130863ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/09/24 - 03:42:13 | 200 | 77.442µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/24 - 03:42:13 | 200 | 25.277651ms | 127.0.0.1 | POST "/api/show"
time=2024-09-24T03:42:14.851Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 gpu=GPU-277417f5-4a9a-c14c-352a-f564eb23262b parallel=4 available=23436918784 required="9.1 GiB"
time=2024-09-24T03:42:14.852Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[21.8 GiB]" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[9.1 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="791.6 MiB"
time=2024-09-24T03:42:14.878Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama2193641396/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 44751"
time=2024-09-24T03:42:14.879Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-24T03:42:14.879Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-24T03:42:14.880Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="140548118855680" timestamp=1727149334
INFO [main] system info | n_threads=88 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140548118855680" timestamp=1727149334 total_threads=176
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="175" port="44751" tid="140548118855680" timestamp=1727149334
llama_model_loader: loaded meta data with 20 key-value pairs and 387 tensors from /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-24T03:42:15.136Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-24T03:42:16.594Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-24T03:42:16.845Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140548118855680" timestamp=1727149340
time=2024-09-24T03:42:20.115Z level=INFO source=server.go:630 msg="llama runner started in 5.24 seconds"
[GIN] 2024/09/24 - 03:42:20 | 200 | 6.629076508s | 127.0.0.1 | POST "/api/chat"
CUDA error: an illegal memory access was encountered
current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416
cudaStreamSynchronize(cuda_ctx->stream())
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error
free(): corrupted unsorted chunks
[GIN] 2024/09/24 - 03:42:55 | 200 | 23.429741491s | 127.0.0.1 | POST "/api/chat"
time=2024-09-24T03:48:01.034Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.198929151 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:48:02.234Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=6.398749147 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:48:03.374Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=7.538628215 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
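For what it's worth, the scheduler lines above (`OLLAMA_SCHED_SPREAD:false` in the env dump, and `msg="new model will fit in available VRAM in single GPU, loading"`) show Ollama intentionally placing a model that fits in one GPU's VRAM onto a single A10. A sketch of how the container could be restarted to ask the scheduler to spread layers across all six GPUs instead; the volume name, port mapping, and image tag below are assumptions taken from the standard Docker install, not from this log:

```shell
# Sketch (assumed standard Docker setup): restart Ollama with multi-GPU
# scheduling hints. OLLAMA_SCHED_SPREAD=1 asks the scheduler to spread a
# model across all detected GPUs even when it would fit on one;
# CUDA_VISIBLE_DEVICES restricts which of the six A10s the runner may use.
docker rm -f ollama
docker run -d --gpus=all \
  -e OLLAMA_SCHED_SPREAD=1 \
  -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```

Both variables appear in the server config env dump above, so this Ollama build reads them; whether spreading also avoids the illegal-memory-access crash is a separate question.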

<!-- gh-comment-id:2370279656 --> @bluebirdlinlin commented on GitHub (Sep 24, 2024): This is the full log, captured with `docker logs ollama`:
2024/09/23 11:14:40 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-09-23T11:14:40.309Z level=INFO source=images.go:753 msg="total blobs: 9"
time=2024-09-23T11:14:40.316Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-23T11:14:40.317Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.9)"
time=2024-09-23T11:14:40.318Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1195458266/runners
time=2024-09-23T11:14:51.240Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx cpu_avx2]"
time=2024-09-23T11:14:51.240Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6ac68bc8-cfaa-e488-7a4d-c0e83003f725 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="20.2 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-277417f5-4a9a-c14c-352a-f564eb23262b library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8e537520-8db9-abd8-2aa1-b078d7a394bb library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-2140965a-169d-9073-9668-6ef8c8cbf034 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-67d912c0-8efd-e31a-f9bc-8c4834119698 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB"
time=2024-09-23T11:14:52.754Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ad1d8f91-03c7-e6a8-9977-71e6b18227d8 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.3 GiB"
[GIN] 2024/09/23 - 11:17:23 | 200 | 420.258µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/23 - 11:17:23 | 200 | 372.984µs | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/09/23 - 11:17:47 | 200 | 38.455µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/09/23 - 11:17:47 | 404 | 591.168µs | 127.0.0.1 | POST "/api/show"
time=2024-09-23T11:17:51.297Z level=INFO source=download.go:175 msg="downloading 87f26aae09c7 in 16 281 MB part(s)"
time=2024-09-23T11:31:21.652Z level=INFO source=download.go:370 msg="87f26aae09c7 part 5 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection."
time=2024-09-23T11:32:08.259Z level=INFO source=download.go:175 msg="downloading c0312cf22ef0 in 1 483 B part(s)"
[GIN] 2024/09/23 - 11:32:24 | 200 | 14m36s | 127.0.0.1 | POST "/api/pull"
[GIN] 2024/09/23 - 11:32:24 | 200 | 27.073395ms | 127.0.0.1 | POST "/api/show"
time=2024-09-23T11:32:25.791Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 gpu=GPU-277417f5-4a9a-c14c-352a-f564eb23262b parallel=4 available=23436918784 required="9.1 GiB"
time=2024-09-23T11:32:25.792Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[21.8 GiB]" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[9.1 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="791.6 MiB"
time=2024-09-23T11:32:25.819Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1195458266/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 39511"
time=2024-09-23T11:32:25.820Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-23T11:32:25.820Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-23T11:32:25.820Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="139820253798400" timestamp=1727091146
INFO [main] system info | n_threads=88 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139820253798400" timestamp=1727091146 total_threads=176
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="175" port="39511" tid="139820253798400" timestamp=1727091146
llama_model_loader: loaded meta data with 20 key-value pairs and 387 tensors from /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-23T11:32:26.574Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-23T11:32:28.030Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-23T11:32:28.282Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="139820253798400" timestamp=1727091153
time=2024-09-23T11:32:33.556Z level=INFO source=server.go:630 msg="llama runner started in 7.74 seconds"
[GIN] 2024/09/23 - 11:32:33 | 200 | 9.134411817s | 127.0.0.1 | POST "/api/chat"
level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=7.658436873 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 2024/09/24 03:41:30 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]" time=2024-09-24T03:41:30.132Z level=INFO source=images.go:753 msg="total blobs: 11" time=2024-09-24T03:41:30.133Z level=INFO source=images.go:760 msg="total unused blobs removed: 0" time=2024-09-24T03:41:30.133Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.9)" time=2024-09-24T03:41:30.134Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2193641396/runners time=2024-09-24T03:41:45.131Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11 cuda_v12 rocm_v60102 cpu cpu_avx]" time=2024-09-24T03:41:45.132Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs" time=2024-09-24T03:41:46.474Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6ac68bc8-cfaa-e488-7a4d-c0e83003f725 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.7 GiB" time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" 
id=GPU-277417f5-4a9a-c14c-352a-f564eb23262b library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB" time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8e537520-8db9-abd8-2aa1-b078d7a394bb library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB" time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-2140965a-169d-9073-9668-6ef8c8cbf034 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB" time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-67d912c0-8efd-e31a-f9bc-8c4834119698 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="21.8 GiB" time=2024-09-24T03:41:46.475Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ad1d8f91-03c7-e6a8-9977-71e6b18227d8 library=cuda variant=v12 compute=8.6 driver=12.0 name="NVIDIA A10" total="22.1 GiB" available="19.3 GiB" [GIN] 2024/09/24 - 03:41:46 | 200 | 122.101µs | 127.0.0.1 | HEAD "/" [GIN] 2024/09/24 - 03:41:46 | 200 | 3.130863ms | 127.0.0.1 | GET "/api/tags" [GIN] 2024/09/24 - 03:42:13 | 200 | 77.442µs | 127.0.0.1 | HEAD "/" [GIN] 2024/09/24 - 03:42:13 | 200 | 25.277651ms | 127.0.0.1 | POST "/api/show" time=2024-09-24T03:42:14.851Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 gpu=GPU-277417f5-4a9a-c14c-352a-f564eb23262b parallel=4 available=23436918784 required="9.1 GiB" time=2024-09-24T03:42:14.852Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[21.8 GiB]" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" 
memory.required.allocations="[9.1 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="6.9 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="544.0 MiB" memory.graph.partial="791.6 MiB" time=2024-09-24T03:42:14.878Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama2193641396/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 44751" time=2024-09-24T03:42:14.879Z level=INFO source=sched.go:450 msg="loaded runners" count=1 time=2024-09-24T03:42:14.879Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding" time=2024-09-24T03:42:14.880Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error" INFO [main] build info | build=1 commit="1e6f655" tid="140548118855680" timestamp=1727149334 INFO [main] system info | n_threads=88 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140548118855680" timestamp=1727149334 total_threads=176 INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="175" port="44751" tid="140548118855680" timestamp=1727149334 llama_model_loader: loaded meta data with 20 key-value pairs and 387 tensors from /root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-beta-7B-Chat
llama_model_loader: - kv 2: qwen2.block_count u32 = 32
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 4096
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 32
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 32
llama_model_loader: - kv 8: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 9: qwen2.use_parallel_residual bool = true
llama_model_loader: - kv 10: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 11: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 12: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 13: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 14: tokenizer.ggml.eos_token_id u32 = 151643
llama_model_loader: - kv 15: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 17: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - kv 19: general.file_type u32 = 2
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-24T03:42:15.136Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 293
llm_load_vocab: token to piece cache size = 0.9338 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.72 B
llm_load_print_meta: model size = 4.20 GiB (4.67 BPW)
llm_load_print_meta: general.name = Qwen2-beta-7B-Chat
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151643 '<|endoftext|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA A10, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.34 MiB
time=2024-09-24T03:42:16.594Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 333.84 MiB
llm_load_tensors: CUDA0 buffer size = 3963.38 MiB
time=2024-09-24T03:42:16.845Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4096.00 MiB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1126
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140548118855680" timestamp=1727149340
time=2024-09-24T03:42:20.115Z level=INFO source=server.go:630 msg="llama runner started in 5.24 seconds"
[GIN] 2024/09/24 - 03:42:20 | 200 | 6.629076508s | 127.0.0.1 | POST "/api/chat"
CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_backend_cuda_synchronize at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2416
  cudaStreamSynchronize(cuda_ctx->stream())
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error
free(): corrupted unsorted chunks
[GIN] 2024/09/24 - 03:42:55 | 200 | 23.429741491s | 127.0.0.1 | POST "/api/chat"
time=2024-09-24T03:48:01.034Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.198929151 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:48:02.234Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=6.398749147 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4
time=2024-09-24T03:48:03.374Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=7.538628215 model=/root/.ollama/models/blobs/sha256-87f26aae09c7f052de93ff98a2282f05822cc6de4af1a2a159c5bd1acbd10ec4

@bluebirdlinlin commented on GitHub (Sep 24, 2024):

> Please post the full log, there is information about devices and resources earlier in the log that may be useful.

I have posted the full logs. Please help, thanks!


@rick-github commented on GitHub (Sep 24, 2024):

What's the output of nvidia-smi?


@bluebirdlinlin commented on GitHub (Sep 24, 2024):

> What's the output of `nvidia-smi`?

I can run nvidia-smi both on the Linux host and inside the ollama Docker container, and nvcc --version works on the host, but nvcc --version is not available inside the ollama container. Here is the output:

(base) [root@localhost Governchat-main]# nvidia-smi
Tue Sep 24 14:29:17 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13 Driver Version: 525.60.13 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10 Off | 00000000:04:00.0 Off | 0 |
| 0% 48C P0 58W / 150W | 2160MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A10 Off | 00000000:44:00.0 Off | 0 |
| 0% 29C P8 12W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A10 Off | 00000000:81:00.0 Off | 0 |
| 0% 28C P8 8W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A10 Off | 00000000:87:00.0 Off | 0 |
| 0% 27C P8 16W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA A10 Off | 00000000:C1:00.0 Off | 0 |
| 0% 27C P8 8W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA A10 Off | 00000000:C4:00.0 Off | 0 |
| 0% 45C P0 55W / 150W | 2620MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 60210 C ...nvs/Governchat-main_envs/bin/python 2158MiB |
| 5 N/A N/A 54844 C python 2618MiB |
+-----------------------------------------------------------------------------+
(base) [root@localhost Governchat-main]# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Mon_Oct_24_19:12:58_PDT_2022
Cuda compilation tools, release 12.0, V12.0.76
Build cuda_12.0.r12.0/compiler.31968024_0

(base) [root@localhost Governchat-main]# docker exec -it ollama bash
root@96f68cb577e1:/# nvidia-smi
Tue Sep 24 06:32:24 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.13 Driver Version: 525.60.13 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10 Off | 00000000:04:00.0 Off | 0 |
| 0% 48C P0 58W / 150W | 2160MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A10 Off | 00000000:44:00.0 Off | 0 |
| 0% 29C P8 12W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A10 Off | 00000000:81:00.0 Off | 0 |
| 0% 29C P8 12W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A10 Off | 00000000:87:00.0 Off | 0 |
| 0% 29C P8 12W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA A10 Off | 00000000:C1:00.0 Off | 0 |
| 0% 29C P8 12W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA A10 Off | 00000000:C4:00.0 Off | 0 |
| 0% 45C P0 55W / 150W | 2620MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
root@96f68cb577e1:/# nvcc --version
bash: nvcc: command not found


@rick-github commented on GitHub (Sep 24, 2024):

Try upgrading ollama to 0.3.10 or later. There is a problem with CUDA 12.0 and the cuda_v12 runner (https://github.com/ollama/ollama/issues/6556). Either that or upgrade the Nvidia drivers on your machine.

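For a Dockerized install like the one in this thread, upgrading means pulling a newer image and recreating the container. A minimal sketch, assuming the container is named `ollama` and uses the standard volume and port mapping from the Ollama Docker instructions (the `0.3.12` tag is only an example; `latest` also works):

```shell
# Pull a newer release image (tag shown is an example).
docker pull ollama/ollama:0.3.12

# Recreate the container; models persist in the named "ollama" volume.
docker stop ollama && docker rm ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.3.12
```

Alternatively, upgrading the NVIDIA driver on the host (to one newer than the CUDA 12.0-era 525.60.13 shown above) avoids the cuda_v12 runner issue without touching the container.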

@bluebirdlinlin commented on GitHub (Sep 24, 2024):

> Try upgrading ollama to 0.3.10 or later. There is a problem with CUDA 12.0 and the cuda_v12 runner (#6556). Either that or upgrade the Nvidia drivers on your machine.

My ollama version is 0.3.9.

Thanks for your help. The OOM error disappeared after I upgraded ollama to 0.3.12.


@dhiltgen commented on GitHub (Sep 25, 2024):

Great to hear upgrading cleared up the problem.

Reference: github-starred/ollama#66430