[GH-ISSUE #7573] 503 error after using api/generate for some time #51337

Closed
opened 2026-04-28 19:34:07 -05:00 by GiteaMirror · 17 comments

Originally created by @JTHesse on GitHub (Nov 8, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7573

Originally assigned to: @jessegross on GitHub.

What is the issue?

After upgrading to the new 0.4.0 version yesterday, Ollama stops responding after a few minutes.
The first API calls are fine, but then we receive only 503 errors:

[GIN] 2024/11/08 - 09:00:10 | 503 | 17.200814ms | 192.169.0.5 | POST "/api/generate"

This happens with different models, and even with the older 0.4.0-rc5 Docker image.
After simply restarting the container, everything works fine again.

Additionally, I noticed that after the restart another error message shows up:

time=2024-11-08T09:02:48.934Z level=ERROR source=server.go:695 msg="Failed to acquire semaphore" error="context canceled"
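
For illustration, the hang is visible from the outside before any restart: /api/tags and /api/version keep returning 200 while /api/generate answers only with 503. A minimal Python probe along these lines could pinpoint the moment the server gets stuck; this is only a sketch, and the host and model name are placeholders rather than our production values:

```
# Poll /api/generate until it starts failing; a sketch, assuming the
# default port 11434 and a placeholder model (adjust for your setup).
import json
import time
import urllib.error
import urllib.request

HOST = "http://localhost:11434"   # assumption: default Ollama port
MODEL = "starcoder2:3b"           # assumption: placeholder model name

def generate_status():
    body = json.dumps({"model": MODEL, "prompt": "ping", "stream": False}).encode()
    req = urllib.request.Request(f"{HOST}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=120) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 503 once the server is stuck

while True:
    status = generate_status()
    print(time.strftime("%H:%M:%S"), "/api/generate ->", status)
    if status == 503:
        break  # /api/tags and /api/version may still answer at this point
    time.sleep(30)
```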

Full log:

time=2024-11-08T08:20:37.985Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="4.2 GiB"
time=2024-11-08T08:20:37.986Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 parallel=1 available=4540532736 required="2.8 GiB"
time=2024-11-08T08:20:38.119Z level=INFO source=server.go:105 msg="system memory" total="62.7 GiB" free="57.6 GiB" free_swap="8.0 GiB"
time=2024-11-08T08:20:38.120Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=31 layers.offload=31 layers.split="" memory.available="[4.2 GiB]" memory.gpu_overhead="0 B" memory.required.full="2.8 GiB" memory.required.partial="2.8 GiB" memory.required.kv="237.2 MiB" memory.required.allocations="[2.8 GiB]" memory.weights.total="1.7 GiB" memory.weights.repeating="1.7 GiB" memory.weights.nonrepeating="81.0 MiB" memory.graph.full="474.4 MiB" memory.graph.partial="474.4 MiB"
time=2024-11-08T08:20:38.120Z level=INFO source=server.go:388 msg="starting llama server" cmd="/usr/lib/ollama/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e --ctx-size 8096 --batch-size 512 --embedding --n-gpu-layers 31 --threads 12 --parallel 1 --port 45039"
time=2024-11-08T08:20:38.120Z level=INFO source=sched.go:449 msg="loaded runners" count=3
time=2024-11-08T08:20:38.120Z level=INFO source=server.go:567 msg="waiting for llama runner to start responding"
time=2024-11-08T08:20:38.121Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server error"
time=2024-11-08T08:20:38.173Z level=INFO source=runner.go:869 msg="starting go runner"
time=2024-11-08T08:20:38.173Z level=INFO source=runner.go:870 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=12
time=2024-11-08T08:20:38.173Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45039"
llama_model_loader: loaded meta data with 19 key-value pairs and 483 tensors from /root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = starcoder2
llama_model_loader: - kv   1:                               general.name str              = starcoder2-3b
llama_model_loader: - kv   2:                     starcoder2.block_count u32              = 30
llama_model_loader: - kv   3:                  starcoder2.context_length u32              = 16384
llama_model_loader: - kv   4:                starcoder2.embedding_length u32              = 3072
llama_model_loader: - kv   5:             starcoder2.feed_forward_length u32              = 12288
llama_model_loader: - kv   6:            starcoder2.attention.head_count u32              = 24
llama_model_loader: - kv   7:         starcoder2.attention.head_count_kv u32              = 2
llama_model_loader: - kv   8:                  starcoder2.rope.freq_base f32              = 999999.437500
llama_model_loader: - kv   9:    starcoder2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,49152]   = ["<|endoftext|>", "<fim_prefix>", "<f...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,49152]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,48872]   = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 0
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  302 tensors
llama_model_loader: - type q4_0:  181 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
time=2024-11-08T08:20:38.372Z level=INFO source=server.go:601 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 38
llm_load_vocab: token to piece cache size = 0.2828 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = starcoder2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 49152
llm_load_print_meta: n_merges         = 48872
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 16384
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 30
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 12
llm_load_print_meta: n_embd_k_gqa     = 256
llm_load_print_meta: n_embd_v_gqa     = 256
llm_load_print_meta: f_norm_eps       = 1.0e-05
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 12288
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 999999.4
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 16384
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 3B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 3.03 B
llm_load_print_meta: model size       = 1.59 GiB (4.51 BPW)
llm_load_print_meta: general.name     = starcoder2-3b
llm_load_print_meta: BOS token        = 0 '<|endoftext|>'
llm_load_print_meta: EOS token        = 0 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<|endoftext|>'
llm_load_print_meta: LF token         = 164 'Ä'
llm_load_print_meta: EOT token        = 0 '<|endoftext|>'
llm_load_print_meta: EOG token        = 0 '<|endoftext|>'
llm_load_print_meta: max token length = 512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size =    0.40 MiB
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 31/31 layers to GPU
llm_load_tensors:        CPU buffer size =    81.00 MiB
llm_load_tensors:      CUDA0 buffer size =  1629.01 MiB
llama_new_context_with_model: n_ctx      = 8096
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 999999.4
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   237.19 MiB
llama_new_context_with_model: KV self size  =  237.19 MiB, K (f16):  118.59 MiB, V (f16):  118.59 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.20 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   419.32 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    21.82 MiB
llama_new_context_with_model: graph nodes  = 1147
llama_new_context_with_model: graph splits = 2
time=2024-11-08T08:20:39.126Z level=INFO source=server.go:606 msg="llama runner started in 1.01 seconds"
[GIN] 2024/11/08 - 08:20:39 | 200 |   1.70608099s |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:40 | 200 |   340.53753ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:41 | 200 |  214.851596ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:46 | 200 |   17.788409ms |     192.169.0.5 | POST     "/api/show"
[GIN] 2024/11/08 - 08:20:50 | 200 |  196.637376ms |     192.169.0.5 | POST     "/api/chat"
[GIN] 2024/11/08 - 08:20:52 | 200 |  1.119738864s |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:53 | 200 |  242.303671ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:53 | 200 |  549.895561ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:54 | 200 |  429.541196ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:55 | 200 |  625.374384ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:56 | 200 |   232.57433ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:57 | 200 |  617.051107ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:57 | 200 |  118.607634ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 08:20:58 | 200 |  437.360751ms |     192.169.0.5 | POST     "/api/chat"
[GIN] 2024/11/08 - 08:21:08 | 200 |    17.04984ms |     192.169.0.5 | POST     "/api/show"
[GIN] 2024/11/08 - 08:21:56 | 200 |    4.331851ms |     192.169.0.5 | GET      "/api/tags"
[GIN] 2024/11/08 - 08:21:56 | 200 |      41.467µs |     192.169.0.5 | GET      "/api/version"
...
[GIN] 2024/11/08 - 08:59:14 | 200 |    4.189295ms |     192.169.0.5 | GET      "/api/tags"
[GIN] 2024/11/08 - 08:59:14 | 200 |      4.1343ms |     192.169.0.5 | GET      "/api/tags"
[GIN] 2024/11/08 - 08:59:14 | 200 |      42.295µs |     192.169.0.5 | GET      "/api/version"
[GIN] 2024/11/08 - 09:00:10 | 503 |   17.200814ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:00:14 | 503 |   17.079306ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:00:15 | 503 |   17.155908ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:00:15 | 503 |    17.15066ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:00:16 | 503 |    17.10698ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:00:53 | 503 |    17.41418ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:03 | 200 |    4.409588ms |     192.169.0.5 | GET      "/api/tags"
[GIN] 2024/11/08 - 09:01:05 | 503 |    17.38414ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:07 | 503 |   17.168875ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:08 | 503 |   54.181867ms |     192.169.0.5 | POST     "/api/chat"
[GIN] 2024/11/08 - 09:01:08 | 503 |   17.375092ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:08 | 503 |    17.24719ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:09 | 503 |   17.354882ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:10 | 503 |   17.155581ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:10 | 503 |   17.387027ms |     192.169.0.5 | POST     "/api/generate"
[GIN] 2024/11/08 - 09:01:15 | 200 |      47.063µs |     192.169.0.5 | GET      "/api/version"
[GIN] 2024/11/08 - 09:01:26 | 503 |   55.327475ms |     192.169.0.5 | POST     "/api/chat"

Restart happened here!

time=2024-11-08T09:02:48.934Z level=ERROR source=server.go:695 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/08 - 09:02:48 | 200 |        56m37s |     192.169.0.5 | POST     "/api/generate"
time=2024-11-08T09:02:48.934Z level=ERROR source=server.go:695 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/08 - 09:02:48 | 200 |        55m43s |     192.169.0.5 | POST     "/api/generate"

OS

Docker

GPU

Nvidia

CPU

Intel

Ollama version

0.4.0

GiteaMirror added the bug label 2026-04-28 19:34:07 -05:00

@JTHesse commented on GitHub (Nov 8, 2024):

Related to #4545


@dhiltgen commented on GitHub (Nov 8, 2024):

The "context canceled" typically indicates the client closed the connection before the server was able to get the lock to process the request. Are you sending a large number of parallel requests? Are any requests getting responses, or is the system hung with no forward progress? Can you share a little more info about what your client scenario looks like?


@JTHesse commented on GitHub (Nov 11, 2024):

Thank you for your response. Yes, there are a few parallel requests. The system is hung after an initial 503 error.

Unfortunately that's hard to summarize; we are using Ollama inside our corporation. I would say there are ~100 users a day. Most of the requests are triggered via the Continue extension, but a few colleagues also use the API from Python or other tools.
Do you think the problem is the number of requests, or a specific problematic API call?
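
One way to narrow that down would be a small load generator: if a burst of identical parallel requests alone reproduces the hang, the number of requests matters; if it only hangs on certain payloads, a specific call is the culprit. A sketch (host, model, and N are assumptions, not our real traffic):

```
# Fire N concurrent /api/generate calls and tally the status codes.
import json
import urllib.error
import urllib.request
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

HOST = "http://localhost:11434"  # assumption: default port
MODEL = "starcoder2:3b"          # assumption: placeholder model
N = 20                           # assumption: illustrative burst size

def one_request(i):
    body = json.dumps({"model": MODEL, "prompt": f"request {i}",
                       "stream": False}).encode()
    req = urllib.request.Request(f"{HOST}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=300) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return "timeout/conn-error"

with ThreadPoolExecutor(max_workers=N) as pool:
    print(Counter(pool.map(one_request, range(N))))  # e.g. Counter({200: 20})
```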


@JTHesse commented on GitHub (Nov 11, 2024):

It seems that Ollama is failing silently, and the 503 errors are only the result of a stuck service.
Below is an example log; the request in this case comes from the Continue extension:

llama_model_loader: loaded meta data with 19 key-value pairs and 483 tensors from /root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = starcoder2
llama_model_loader: - kv   1:                               general.name str              = starcoder2-3b
llama_model_loader: - kv   2:                     starcoder2.block_count u32              = 30
llama_model_loader: - kv   3:                  starcoder2.context_length u32              = 16384
llama_model_loader: - kv   4:                starcoder2.embedding_length u32              = 3072
llama_model_loader: - kv   5:             starcoder2.feed_forward_length u32              = 12288
llama_model_loader: - kv   6:            starcoder2.attention.head_count u32              = 24
llama_model_loader: - kv   7:         starcoder2.attention.head_count_kv u32              = 2
llama_model_loader: - kv   8:                  starcoder2.rope.freq_base f32              = 999999.437500
llama_model_loader: - kv   9:    starcoder2.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,49152]   = ["<|endoftext|>", "<fim_prefix>", "<f...
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr[i32,49152]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  14:                      tokenizer.ggml.merges arr[str,48872]   = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 0
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  302 tensors
llama_model_loader: - type q4_0:  181 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 38
llm_load_vocab: token to piece cache size = 0.2828 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = starcoder2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 49152
llm_load_print_meta: n_merges         = 48872
llm_load_print_meta: vocab_only       = 1
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = all F32
llm_load_print_meta: model params     = 3.03 B
llm_load_print_meta: model size       = 1.59 GiB (4.51 BPW)
llm_load_print_meta: general.name     = starcoder2-3b
llm_load_print_meta: BOS token        = 0 '<|endoftext|>'
llm_load_print_meta: EOS token        = 0 '<|endoftext|>'
llm_load_print_meta: UNK token        = 0 '<|endoftext|>'
llm_load_print_meta: LF token         = 164 'Ä'
llm_load_print_meta: EOT token        = 0 '<|endoftext|>'
llm_load_print_meta: EOG token        = 0 '<|endoftext|>'
llm_load_print_meta: max token length = 512
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/11 - 09:13:03 | 200 |  2.310452675s |     192.169.1.5 | POST     "/api/generate"
[GIN] 2024/11/11 - 09:13:03 | 200 |  142.377884ms |     192.169.1.5 | POST     "/api/generate"
[GIN] 2024/11/11 - 09:13:05 | 200 |  2.315162671s |     192.169.1.5 | POST     "/api/generate"
[GIN] 2024/11/11 - 09:13:05 | 200 |  137.373364ms |     192.169.1.5 | POST     "/api/generate"
time=2024-11-11T09:13:06.119Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="6.7 GiB"
[GIN] 2024/11/11 - 09:13:07 | 200 |  1.540323683s |     192.169.1.5 | POST     "/api/generate"
------HERE OLLAMA IS STUCK-----
[GIN] 2024/11/11 - 09:14:02 | 200 |    4.397553ms |     192.169.1.5 | GET      "/api/tags"
[GIN] 2024/11/11 - 09:14:02 | 200 |      60.782µs |     192.169.1.5 | GET      "/api/version"
[GIN] 2024/11/11 - 09:15:03 | 200 |    4.393229ms |     192.169.1.5 | GET      "/api/tags"
[GIN] 2024/11/11 - 09:15:03 | 200 |      42.264µs |     192.169.1.5 | GET      "/api/version"

@JTHesse commented on GitHub (Nov 11, 2024):

Maybe it's related to the "updated VRAM based on existing loaded models" message?


@JTHesse commented on GitHub (Nov 11, 2024):

Full log:

[GIN] 2024/11/11 - 10:00:48 | 200 |  273.837584ms |     192.169.1.5 | POST     "/api/generate"
time=2024-11-11T10:00:48.319Z level=DEBUG source=sched.go:407 msg="context for request finished"
time=2024-11-11T10:00:48.319Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e refCount=2
time=2024-11-11T10:02:27.760Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="57.4 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.7 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7f3d45e54520
dlsym: cuDriverGetVersion - 0x7f3d45e54540
dlsym: cuDeviceGetCount - 0x7f3d45e54580
dlsym: cuDeviceGet - 0x7f3d45e54560
dlsym: cuDeviceGetAttribute - 0x7f3d45e54660
dlsym: cuDeviceGetUuid - 0x7f3d45e545c0
dlsym: cuDeviceGetName - 0x7f3d45e545a0
dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220
dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0
dlsym: cuCtxDestroy - 0x7f3d45eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T10:02:27.986Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="7.1 GiB" now.total="10.9 GiB" now.free="1.7 GiB" now.used="9.2 GiB"
releasing cuda driver library
time=2024-11-11T10:02:28.119Z level=DEBUG source=sched.go:496 msg="gpu reported" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda available="1.7 GiB"
time=2024-11-11T10:02:28.119Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="926.1 MiB"
time=2024-11-11T10:02:28.119Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[926.1 MiB]"
time=2024-11-11T10:02:28.120Z level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="926.1 MiB" minimum_memory=479199232 layer_size="154.4 MiB" gpu_zer_overhead="0 B" partial_offload="1.2 GiB" full_offload="507.0 MiB"
time=2024-11-11T10:02:28.121Z level=DEBUG source=memory.go:317 msg="insufficient VRAM to load any model layers"
time=2024-11-11T10:02:28.121Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[926.1 MiB]"
time=2024-11-11T10:02:28.122Z level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="926.1 MiB" minimum_memory=479199232 layer_size="154.4 MiB" gpu_zer_overhead="0 B" partial_offload="1.2 GiB" full_offload="507.0 MiB"
time=2024-11-11T10:02:28.122Z level=DEBUG source=memory.go:317 msg="insufficient VRAM to load any model layers"
time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:784 msg="found an idle runner to unload"
time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:283 msg="resetting model to expire immediately to make room" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe refCount=0
time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:296 msg="waiting for pending requests to complete and unload to occur" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.123Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.7 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.7 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7f3d45e54520
dlsym: cuDriverGetVersion - 0x7f3d45e54540
dlsym: cuDeviceGetCount - 0x7f3d45e54580
dlsym: cuDeviceGet - 0x7f3d45e54560
dlsym: cuDeviceGetAttribute - 0x7f3d45e54660
dlsym: cuDeviceGetUuid - 0x7f3d45e545c0
dlsym: cuDeviceGetName - 0x7f3d45e545a0
dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220
dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0
dlsym: cuCtxDestroy - 0x7f3d45eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T10:02:28.336Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="1.7 GiB" now.total="10.9 GiB" now.free="1.7 GiB" now.used="9.2 GiB"
releasing cuda driver library
time=2024-11-11T10:02:28.540Z level=DEBUG source=server.go:1068 msg="stopping llama server"
time=2024-11-11T10:02:28.540Z level=DEBUG source=server.go:1074 msg="waiting for llama server to exit"
time=2024-11-11T10:02:28.586Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.7 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.8 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7f3d45e54520
dlsym: cuDriverGetVersion - 0x7f3d45e54540
dlsym: cuDeviceGetCount - 0x7f3d45e54580
dlsym: cuDeviceGet - 0x7f3d45e54560
dlsym: cuDeviceGetAttribute - 0x7f3d45e54660
dlsym: cuDeviceGetUuid - 0x7f3d45e545c0
dlsym: cuDeviceGetName - 0x7f3d45e545a0
dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220
dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0
dlsym: cuCtxDestroy - 0x7f3d45eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T10:02:28.751Z level=DEBUG source=server.go:1078 msg="llama server stopped"
time=2024-11-11T10:02:28.751Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.926Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="1.7 GiB" now.total="10.9 GiB" now.free="7.1 GiB" now.used="3.8 GiB"
releasing cuda driver library
time=2024-11-11T10:02:28.926Z level=DEBUG source=sched.go:659 msg="gpu VRAM free memory converged after 0.80 seconds" model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.926Z level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.926Z level=DEBUG source=sched.go:302 msg="unload completed" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T10:02:28.926Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.8 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="57.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7f3d45e54520
dlsym: cuDriverGetVersion - 0x7f3d45e54540
dlsym: cuDeviceGetCount - 0x7f3d45e54580
dlsym: cuDeviceGet - 0x7f3d45e54560
dlsym: cuDeviceGetAttribute - 0x7f3d45e54660
dlsym: cuDeviceGetUuid - 0x7f3d45e545c0
dlsym: cuDeviceGetName - 0x7f3d45e545a0
dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220
dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0
dlsym: cuCtxDestroy - 0x7f3d45eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T10:02:29.093Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="7.1 GiB" now.total="10.9 GiB" now.free="7.1 GiB" now.used="3.8 GiB"
releasing cuda driver library
time=2024-11-11T10:02:29.227Z level=DEBUG source=sched.go:496 msg="gpu reported" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda available="7.1 GiB"
time=2024-11-11T10:02:29.227Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="6.7 GiB"
time=2024-11-11T10:02:29.227Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.7 GiB]"
time=2024-11-11T10:02:29.228Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.7 GiB]"
time=2024-11-11T10:02:29.228Z level=DEBUG source=sched.go:789 msg="no idle runners, picking the shortest duration" count=1
time=2024-11-11T10:02:29.228Z level=DEBUG source=sched.go:283 msg="resetting model to expire immediately to make room" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e refCount=2
time=2024-11-11T10:02:29.229Z level=DEBUG source=sched.go:296 msg="waiting for pending requests to complete and unload to occur" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e
<!-- gh-comment-id:2467742403 --> @JTHesse commented on GitHub (Nov 11, 2024): Full log: ``` [GIN] 2024/11/11 - 10:00:48 | 200 | 273.837584ms | 192.169.1.5 | POST "/api/generate" time=2024-11-11T10:00:48.319Z level=DEBUG source=sched.go:407 msg="context for request finished" time=2024-11-11T10:00:48.319Z level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e refCount=2 time=2024-11-11T10:02:27.760Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="57.4 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.7 GiB" now.free_swap="8.0 GiB" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 dlsym: cuInit - 0x7f3d45e54520 dlsym: cuDriverGetVersion - 0x7f3d45e54540 dlsym: cuDeviceGetCount - 0x7f3d45e54580 dlsym: cuDeviceGet - 0x7f3d45e54560 dlsym: cuDeviceGetAttribute - 0x7f3d45e54660 dlsym: cuDeviceGetUuid - 0x7f3d45e545c0 dlsym: cuDeviceGetName - 0x7f3d45e545a0 dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220 dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0 dlsym: cuCtxDestroy - 0x7f3d45eb66f0 calling cuInit calling cuDriverGetVersion raw version 0x2ef4 CUDA driver version: 12.2 calling cuDeviceGetCount device count 1 time=2024-11-11T10:02:27.986Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="7.1 GiB" now.total="10.9 GiB" now.free="1.7 GiB" now.used="9.2 GiB" releasing cuda driver library time=2024-11-11T10:02:28.119Z level=DEBUG source=sched.go:496 msg="gpu reported" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda available="1.7 GiB" time=2024-11-11T10:02:28.119Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="926.1 MiB" time=2024-11-11T10:02:28.119Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[926.1 MiB]" time=2024-11-11T10:02:28.120Z level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="926.1 MiB" minimum_memory=479199232 layer_size="154.4 MiB" gpu_zer_overhead="0 B" partial_offload="1.2 GiB" full_offload="507.0 MiB" time=2024-11-11T10:02:28.121Z level=DEBUG source=memory.go:317 msg="insufficient VRAM to load any model layers" time=2024-11-11T10:02:28.121Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[926.1 MiB]" time=2024-11-11T10:02:28.122Z level=DEBUG source=memory.go:173 msg="gpu has too little memory to allocate any layers" id=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda variant=v12 compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="926.1 MiB" minimum_memory=479199232 layer_size="154.4 MiB" gpu_zer_overhead="0 B" partial_offload="1.2 GiB" full_offload="507.0 MiB" time=2024-11-11T10:02:28.122Z level=DEBUG source=memory.go:317 msg="insufficient VRAM to load any model layers" time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:784 msg="found an idle runner to unload" time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:283 msg="resetting model to expire immediately to make room" 
modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe refCount=0 time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:296 msg="waiting for pending requests to complete and unload to occur" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.122Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.123Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.7 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.7 GiB" now.free_swap="8.0 GiB" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 dlsym: cuInit - 0x7f3d45e54520 dlsym: cuDriverGetVersion - 0x7f3d45e54540 dlsym: cuDeviceGetCount - 0x7f3d45e54580 dlsym: cuDeviceGet - 0x7f3d45e54560 dlsym: cuDeviceGetAttribute - 0x7f3d45e54660 dlsym: cuDeviceGetUuid - 0x7f3d45e545c0 dlsym: cuDeviceGetName - 0x7f3d45e545a0 dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220 dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0 dlsym: cuCtxDestroy - 0x7f3d45eb66f0 calling cuInit calling cuDriverGetVersion raw version 0x2ef4 CUDA driver version: 12.2 calling cuDeviceGetCount device count 1 time=2024-11-11T10:02:28.336Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="1.7 GiB" now.total="10.9 GiB" now.free="1.7 GiB" now.used="9.2 GiB" releasing cuda driver library time=2024-11-11T10:02:28.540Z level=DEBUG source=server.go:1068 msg="stopping llama server" time=2024-11-11T10:02:28.540Z level=DEBUG source=server.go:1074 msg="waiting for llama server to exit" time=2024-11-11T10:02:28.586Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.7 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.8 GiB" now.free_swap="8.0 GiB" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 dlsym: cuInit - 0x7f3d45e54520 dlsym: cuDriverGetVersion - 0x7f3d45e54540 dlsym: cuDeviceGetCount - 0x7f3d45e54580 dlsym: cuDeviceGet - 0x7f3d45e54560 dlsym: cuDeviceGetAttribute - 0x7f3d45e54660 dlsym: cuDeviceGetUuid - 0x7f3d45e545c0 dlsym: cuDeviceGetName - 0x7f3d45e545a0 dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220 dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0 dlsym: cuCtxDestroy - 0x7f3d45eb66f0 calling cuInit calling cuDriverGetVersion raw version 0x2ef4 CUDA driver version: 12.2 calling cuDeviceGetCount device count 1 time=2024-11-11T10:02:28.751Z level=DEBUG source=server.go:1078 msg="llama server stopped" time=2024-11-11T10:02:28.751Z level=DEBUG source=sched.go:380 msg="runner released" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.926Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="1.7 GiB" now.total="10.9 GiB" now.free="7.1 GiB" now.used="3.8 GiB" releasing cuda driver library time=2024-11-11T10:02:28.926Z level=DEBUG 
source=sched.go:659 msg="gpu VRAM free memory converged after 0.80 seconds" model=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.926Z level=DEBUG source=sched.go:384 msg="sending an unloaded event" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.926Z level=DEBUG source=sched.go:302 msg="unload completed" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe time=2024-11-11T10:02:28.926Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.8 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="57.1 GiB" now.free_swap="8.0 GiB" initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01 dlsym: cuInit - 0x7f3d45e54520 dlsym: cuDriverGetVersion - 0x7f3d45e54540 dlsym: cuDeviceGetCount - 0x7f3d45e54580 dlsym: cuDeviceGet - 0x7f3d45e54560 dlsym: cuDeviceGetAttribute - 0x7f3d45e54660 dlsym: cuDeviceGetUuid - 0x7f3d45e545c0 dlsym: cuDeviceGetName - 0x7f3d45e545a0 dlsym: cuCtxCreate_v3 - 0x7f3d45e5c220 dlsym: cuMemGetInfo_v2 - 0x7f3d45e676f0 dlsym: cuCtxDestroy - 0x7f3d45eb66f0 calling cuInit calling cuDriverGetVersion raw version 0x2ef4 CUDA driver version: 12.2 calling cuDeviceGetCount device count 1 time=2024-11-11T10:02:29.093Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="7.1 GiB" now.total="10.9 GiB" now.free="7.1 GiB" now.used="3.8 GiB" releasing cuda driver library time=2024-11-11T10:02:29.227Z level=DEBUG source=sched.go:496 msg="gpu reported" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda available="7.1 GiB" time=2024-11-11T10:02:29.227Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 library=cuda total="10.9 GiB" available="6.7 GiB" time=2024-11-11T10:02:29.227Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.7 GiB]" time=2024-11-11T10:02:29.228Z level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[6.7 GiB]" time=2024-11-11T10:02:29.228Z level=DEBUG source=sched.go:789 msg="no idle runners, picking the shortest duration" count=1 time=2024-11-11T10:02:29.228Z level=DEBUG source=sched.go:283 msg="resetting model to expire immediately to make room" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e refCount=2 time=2024-11-11T10:02:29.229Z level=DEBUG source=sched.go:296 msg="waiting for pending requests to complete and unload to occur" modelPath=/root/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e ```

@rick-github commented on GitHub (Nov 11, 2024):

Does it help if you set `OLLAMA_NUM_PARALLEL`?
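
For reference, a minimal sketch of starting the server with that variable pinned, assuming the `ollama` binary is on PATH; in the Docker setup from this issue the same variables would instead be passed into the container environment:

```python
import os
import subprocess

# Hedged sketch: launch the server with a fixed number of parallel
# request slots per loaded model. Leaving OLLAMA_NUM_PARALLEL unset
# lets the server choose on its own.
env = os.environ.copy()
env["OLLAMA_NUM_PARALLEL"] = "1"  # value under test in this thread
env["OLLAMA_DEBUG"] = "1"         # verbose logs, requested later in this thread

subprocess.run(["ollama", "serve"], env=env)
```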


@JTHesse commented on GitHub (Nov 11, 2024):

I tried OLLAMA_NUM_PARALLEL=3, then 2, and now 1.
I'll let you know in a bit if the latter is working.


@JTHesse commented on GitHub (Nov 11, 2024):

Even with OLLAMA_NUM_PARALLEL=1 the server still stops responding after some time:

```
time=2024-11-11T11:29:39.306Z level=DEBUG source=sched.go:341 msg="timer expired, expiring to unload" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T11:29:39.306Z level=DEBUG source=sched.go:360 msg="runner expired event received" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T11:29:39.306Z level=DEBUG source=sched.go:375 msg="got lock to unload" modelPath=/root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe
time=2024-11-11T11:29:39.307Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.4 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.1 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7ff881e54520
dlsym: cuDriverGetVersion - 0x7ff881e54540
dlsym: cuDeviceGetCount - 0x7ff881e54580
dlsym: cuDeviceGet - 0x7ff881e54560
dlsym: cuDeviceGetAttribute - 0x7ff881e54660
dlsym: cuDeviceGetUuid - 0x7ff881e545c0
dlsym: cuDeviceGetName - 0x7ff881e545a0
dlsym: cuCtxCreate_v3 - 0x7ff881e5c220
dlsym: cuMemGetInfo_v2 - 0x7ff881e676f0
dlsym: cuCtxDestroy - 0x7ff881eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T11:29:39.549Z level=DEBUG source=gpu.go:448 msg="updating cuda memory data" gpu=GPU-49bf339d-470b-fd4e-5c40-ab2ae072a1f8 name="NVIDIA GeForce GTX 1080 Ti" overhead="0 B" before.total="10.9 GiB" before.free="3.4 GiB" now.total="10.9 GiB" now.free="1.0 GiB" now.used="9.9 GiB"
releasing cuda driver library
time=2024-11-11T11:29:39.715Z level=DEBUG source=server.go:1068 msg="stopping llama server"
time=2024-11-11T11:29:39.715Z level=DEBUG source=server.go:1074 msg="waiting for llama server to exit"
time=2024-11-11T11:29:39.800Z level=DEBUG source=gpu.go:398 msg="updating system memory data" before.total="62.7 GiB" before.free="56.1 GiB" before.free_swap="8.0 GiB" now.total="62.7 GiB" now.free="56.3 GiB" now.free_swap="8.0 GiB"
initializing /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
dlsym: cuInit - 0x7ff881e54520
dlsym: cuDriverGetVersion - 0x7ff881e54540
dlsym: cuDeviceGetCount - 0x7ff881e54580
dlsym: cuDeviceGet - 0x7ff881e54560
dlsym: cuDeviceGetAttribute - 0x7ff881e54660
dlsym: cuDeviceGetUuid - 0x7ff881e545c0
dlsym: cuDeviceGetName - 0x7ff881e545a0
dlsym: cuCtxCreate_v3 - 0x7ff881e5c220
dlsym: cuMemGetInfo_v2 - 0x7ff881e676f0
dlsym: cuCtxDestroy - 0x7ff881eb66f0
calling cuInit
calling cuDriverGetVersion
raw version 0x2ef4
CUDA driver version: 12.2
calling cuDeviceGetCount
device count 1
time=2024-11-11T11:29:39.850Z level=DEBUG source=server.go:1078 msg="llama server stopped"
```

@jessegross commented on GitHub (Nov 12, 2024):

@JTHesse Is it possible for you to capture this while running with OLLAMA_DEBUG=1 and post the full log?
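
For anyone else trying to reproduce this while capturing such a log, a minimal load-generation sketch against a local server; the model name, prompt, and concurrency level below are illustrative placeholders, not values taken from this thread:

```python
import concurrent.futures
import json
import urllib.error
import urllib.request

URL = "http://127.0.0.1:11434/api/generate"  # default Ollama port

def generate(i):
    # Placeholder model and prompt; any locally pulled model works.
    body = json.dumps({
        "model": "llama3.2",
        "prompt": f"request {i}: say hello",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=600) as resp:
            return i, resp.status, json.load(resp)["response"][:40]
    except urllib.error.HTTPError as e:
        return i, e.code, ""  # 503s show up here once the server wedges

# Fire a burst of concurrent requests, mimicking multiple clients.
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as ex:
    for i, status, text in ex.map(generate, range(64)):
        print(i, status, text)
```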


@JTHesse commented on GitHub (Nov 13, 2024):

Hi @jessegross, yes, sure. I just needed to remove the prompts first:
[full_log.txt](https://github.com/user-attachments/files/17730250/full_log.txt)


@JTHesse commented on GitHub (Nov 13, 2024):

Additionally, looking at nvtop I often see two or three different llama_server processes with a combined memory usage of around 80%.
Is this expected with OLLAMA_NUM_PARALLEL=1?

Ollama also gets stuck with OLLAMA_KEEP_ALIVE=0.
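
One way to watch which runners are resident while this happens is to poll the `/api/ps` endpoint (the same endpoint that appears in the logs later in this thread). A rough monitoring sketch; the response field names follow the public API and may differ slightly across versions:

```python
import json
import time
import urllib.request

URL = "http://127.0.0.1:11434/api/ps"  # lists currently loaded models

for _ in range(60):  # poll for roughly five minutes
    with urllib.request.urlopen(URL) as resp:
        models = json.load(resp).get("models", [])
    for m in models:
        print(m.get("name"), "vram:", m.get("size_vram"),
              "expires:", m.get("expires_at"))
    time.sleep(5)
```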


@jamine2024 commented on GitHub (Nov 14, 2024):

```
2024/11/14 13:48:32 routes.go:1189: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\OllamaModel OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-11-14T13:48:32.202+08:00 level=INFO source=images.go:755 msg="total blobs: 15"
time=2024-11-14T13:48:32.203+08:00 level=INFO source=images.go:762 msg="total unused blobs removed: 0"
time=2024-11-14T13:48:32.203+08:00 level=INFO source=routes.go:1240 msg="Listening on 127.0.0.1:11434 (version 0.4.1)"
time=2024-11-14T13:48:32.204+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11 cuda_v12 rocm cpu cpu_avx]"
time=2024-11-14T13:48:32.204+08:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-11-14T13:48:32.204+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-11-14T13:48:32.204+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2024-11-14T13:48:32.338+08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-8432859a-0af3-df5c-67bc-b11d589ed1bd library=cuda variant=v12 compute=8.6 driver=12.6 name="NVIDIA GeForce RTX 3060" total="12.0 GiB" available="11.0 GiB"
[GIN] 2024/11/14 - 13:49:21 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/14 - 13:49:21 | 200 | 1.0247ms | 127.0.0.1 | GET "/api/tags"
time=2024-11-14T13:49:23.508+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=D:\OllamaModel\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 gpu=GPU-8432859a-0af3-df5c-67bc-b11d589ed1bd parallel=4 available=11369033728 required="5.6 GiB"
time=2024-11-14T13:49:23.526+08:00 level=INFO source=server.go:105 msg="system memory" total="31.9 GiB" free="18.1 GiB" free_swap="17.3 GiB"
time=2024-11-14T13:49:23.526+08:00 level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[10.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.6 GiB" memory.required.partial="5.6 GiB" memory.required.kv="448.0 MiB" memory.required.allocations="[5.6 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="3.7 GiB" memory.weights.nonrepeating="426.4 MiB" memory.graph.full="478.0 MiB" memory.graph.partial="730.4 MiB"
time=2024-11-14T13:49:23.531+08:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\Users\Administrator\AppData\Local\Programs\Ollama\lib\ollama\runners\cuda_v12\ollama_llama_server.exe --model D:\OllamaModel\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 8 --no-mmap --parallel 4 --port 61105"
time=2024-11-14T13:49:23.565+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-11-14T13:49:23.565+08:00 level=INFO source=server.go:562 msg="waiting for llama runner to start responding"
time=2024-11-14T13:49:23.565+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server error"
time=2024-11-14T13:49:23.637+08:00 level=INFO source=runner.go:863 msg="starting go runner"
time=2024-11-14T13:49:23.645+08:00 level=INFO source=runner.go:864 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(clang)" threads=8
time=2024-11-14T13:49:23.646+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:61105"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from D:\OllamaModel\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
time=2024-11-14T13:49:23.819+08:00 level=INFO source=server.go:596 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 3584
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 28
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 7
llm_load_print_meta: n_embd_k_gqa = 512
llm_load_print_meta: n_embd_v_gqa = 512
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18944
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 292.36 MiB
llm_load_tensors: CUDA0 buffer size = 4168.09 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 448.00 MiB
llama_new_context_with_model: KV self size = 448.00 MiB, K (f16): 224.00 MiB, V (f16): 224.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.38 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 492.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 23.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
time=2024-11-14T13:49:25.404+08:00 level=INFO source=server.go:601 msg="llama runner started in 1.84 seconds"
llama_model_loader: loaded meta data with 34 key-value pairs and 339 tensors from D:\OllamaModel\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 7B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 7B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv 8: general.base_model.count u32 = 1
llama_model_loader: - kv 9: general.base_model.0.name str = Qwen2.5 7B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-7B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 3584
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 18944
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 28
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 4
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 15
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,152064] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q4_K: 169 tensors
llama_model_loader: - type q6_K: 29 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 7.62 B
llm_load_print_meta: model size = 4.36 GiB (4.91 BPW)
llm_load_print_meta: general.name = Qwen2.5 7B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2024/11/14 - 13:49:43 | 200 | 19.8676275s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:49:48 | 200 | 24.6217747s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:06 | 200 | 43.2720815s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:07 | 200 | 43.9698999s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:08 | 200 | 44.6615459s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:09 | 200 | 46.4125805s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:19 | 200 | 56.5700098s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.899+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.899+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 44.5457192s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:51:02 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/14 - 13:51:02 | 200 | 1.0218ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/11/14 - 13:51:40 | 200 | 16.5192359s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:51:42 | 200 | 18.8407033s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:51:49 | 200 | 25.6416889s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:51:52 | 200 | 28.5033595s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:02 | 200 | 39.1118619s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:02 | 200 | 39.3753104s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:09 | 200 | 46.0375867s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:19 | 200 | 55.5760544s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:20 | 200 | 56.566788s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:29 | 200 | 1m5s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:35 | 200 | 1m12s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:52:54 | 200 | 1m31s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:04 | 200 | 1m40s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:06 | 200 | 1m43s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:22 | 200 | 1m58s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:24 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/14 - 13:53:24 | 200 | 0s | 127.0.0.1 | GET "/api/ps"
[GIN] 2024/11/14 - 13:53:27 | 200 | 2m4s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:29 | 200 | 1m47s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:32 | 200 | 2m9s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:55 | 200 | 2m31s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:56 | 200 | 30.9828253s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:53:58 | 200 | 33.0794984s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:54:29 | 200 | 1m3s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:54:31.784+08:00 level=ERROR source=runner.go:426 msg="failed to decode batch" error="could not find a KV slot for the batch - try reducing the size of the batch or increase the context. code: 1"
time=2024-11-14T13:55:25.579+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.579+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:25.625+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:55:25.625+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:25 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:55:32.328+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:55:32 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:56:31 | 200 | 5m8s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.664+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:27.665+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.665+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.667+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.667+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:34 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6794578s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6614422s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6481985s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6637183s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6788632s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6788632s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6637183s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 4m20s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6632113s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 14.9258599s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642382s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6625932s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6794654s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6793831s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647526s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6804937s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6492344s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6737381s | 127.0.0.1 | POST "/api/generate"
time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647542s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6655005s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647542s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6804937s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6629811s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/11/14 - 13:57:51 | 200 | 21.6805104s | 127.0.0.1 | POST "/api/generate"
```

llama_model_loader: - kv 33: general.quantization_version u32 = 2 llama_model_loader: - type f32: 141 tensors llama_model_loader: - type q4_K: 169 tensors llama_model_loader: - type q6_K: 29 tensors llm_load_vocab: special tokens cache size = 22 llm_load_vocab: token to piece cache size = 0.9310 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = qwen2 llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 152064 llm_load_print_meta: n_merges = 151387 llm_load_print_meta: vocab_only = 1 llm_load_print_meta: model type = ?B llm_load_print_meta: model ftype = all F32 llm_load_print_meta: model params = 7.62 B llm_load_print_meta: model size = 4.36 GiB (4.91 BPW) llm_load_print_meta: general.name = Qwen2.5 7B Instruct llm_load_print_meta: BOS token = 151643 '<|endoftext|>' llm_load_print_meta: EOS token = 151645 '<|im_end|>' llm_load_print_meta: PAD token = 151643 '<|endoftext|>' llm_load_print_meta: LF token = 148848 'ÄĬ' llm_load_print_meta: EOT token = 151645 '<|im_end|>' llm_load_print_meta: EOG token = 151643 '<|endoftext|>' llm_load_print_meta: EOG token = 151645 '<|im_end|>' llm_load_print_meta: max token length = 256 llama_model_load: vocab only - skipping tensors [GIN] 2024/11/14 - 13:49:43 | 200 | 19.8676275s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:49:48 | 200 | 24.6217747s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:06 | 200 | 43.2720815s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:07 | 200 | 43.9698999s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:08 | 200 | 44.6615459s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:09 | 200 | 46.4125805s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:19 | 200 | 56.5700098s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.899+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.899+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 
level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.900+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 
1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:50:29.901+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 44.5457192s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:50:29 | 200 | 1m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:51:02 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2024/11/14 - 13:51:02 | 200 | 1.0218ms | 127.0.0.1 | GET "/api/tags" [GIN] 2024/11/14 - 13:51:40 | 200 | 16.5192359s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:51:42 | 200 | 18.8407033s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:51:49 | 200 | 25.6416889s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:51:52 | 200 | 28.5033595s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:02 | 200 | 39.1118619s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:02 | 200 | 39.3753104s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:09 | 200 | 46.0375867s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:19 | 200 | 55.5760544s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:20 | 200 | 56.566788s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:29 | 200 | 1m5s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:35 | 200 | 1m12s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:52:54 | 200 | 1m31s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:04 | 200 | 1m40s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:06 | 200 | 1m43s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:22 | 200 | 1m58s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to 
acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.545+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.546+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" 
time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.560+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:53:23.561+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:53:23 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:23 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:24 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2024/11/14 - 13:53:24 | 200 | 0s | 127.0.0.1 | GET "/api/ps" [GIN] 2024/11/14 - 13:53:27 | 200 | 2m4s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:29 | 200 | 1m47s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:32 | 200 | 2m9s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:55 | 200 | 2m31s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:56 | 200 | 30.9828253s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:53:58 | 200 | 33.0794984s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:54:29 | 200 | 1m3s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:54:31.784+08:00 level=ERROR source=runner.go:426 msg="failed to decode batch" error="could not find a KV slot for the batch - try reducing the size of the batch or increase the context. 
code: 1" time=2024-11-14T13:55:25.579+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.579+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.595+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.610+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire 
semaphore" error="context canceled" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.611+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:25.625+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:55:25.625+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:25 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:55:25 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:55:32.328+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:55:32 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:56:31 | 200 | 5m8s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:56:31 | 200 | 3m6s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.664+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:27.665+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.665+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.667+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.667+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.680+08:00 
level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:27.680+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 1m59s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:27 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:34 | 200 | 2m0s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6794578s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6614422s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6481985s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6637183s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6788632s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6788632s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6637183s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 4m20s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.385+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6632113s | 127.0.0.1 | 
POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 14.9258599s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642382s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6625932s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6794654s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6793831s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647526s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 1m17s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6804937s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6492344s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6642472s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6737381s | 127.0.0.1 | POST "/api/generate" time=2024-11-14T13:57:51.386+08:00 level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647542s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6655005s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6647542s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6804937s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6629811s | 127.0.0.1 | POST "/api/generate" [GIN] 2024/11/14 - 13:57:51 | 200 | 21.6805104s | 127.0.0.1 | POST "/api/generate"
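
The repeated `Failed to acquire semaphore" error="context canceled"` entries line up with clients abandoning requests that are still queued: the runner above was started with --parallel 4, so only four generations run at once and every other request waits on the scheduler's semaphore until a slot frees or the client disconnects. A load pattern along these lines should reproduce the shape of this log; this is a minimal sketch, not taken from the issue, and the model name, prompt, timeout, and concurrency values are illustrative assumptions:

```python
# Hypothetical reproduction sketch: send more concurrent /api/generate requests
# than the runner has parallel slots, with a client-side timeout shorter than
# the resulting queue wait. Each client that gives up cancels its request
# context while it is still waiting on the scheduler's semaphore.
import concurrent.futures

import requests

URL = "http://127.0.0.1:11434/api/generate"

def generate(i: int) -> int:
    try:
        r = requests.post(
            URL,
            json={"model": "qwen2.5", "prompt": f"Request {i}: write a short poem.", "stream": False},
            timeout=30,  # shorter than the queue wait once the 4 slots saturate
        )
        return r.status_code
    except requests.exceptions.Timeout:
        return -1  # client gave up; the server sees a canceled context

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(generate, range(200)))

print({code: results.count(code) for code in sorted(set(results))})
```

The later `could not find a KV slot for the batch` error fits the same picture: with --ctx-size 8192 shared across 4 parallel slots, each sequence appears to get roughly 2048 tokens of context, so longer prompts plus generated output can fail to find a free slot.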

@jamine2024 commented on GitHub (Nov 14, 2024):

I also have the same problem when hitting the API from multiple threads: requests eventually take longer and longer, and in the end the card is dead and nothing works anymore.
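
One client-side mitigation while the fix was pending is to bound in-flight requests below the server's parallel slot count and use a generous timeout, so requests wait in the client rather than piling up in Ollama's queue and being canceled mid-wait. A minimal sketch under those assumptions; the limit, model name, and timeout are illustrative:

```python
# Hypothetical workaround sketch: gate requests with a local semaphore so they
# never accumulate in the server's queue, and avoid short timeouts (canceled
# waits are what trigger the "Failed to acquire semaphore" errors above).
import threading

import requests

MAX_IN_FLIGHT = 4  # match or stay below the server's parallel slot count
slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def generate(prompt: str) -> str:
    with slots:  # wait locally instead of in the server's queue
        r = requests.post(
            "http://127.0.0.1:11434/api/generate",
            json={"model": "qwen2.5", "prompt": prompt, "stream": False},
            timeout=600,  # generous: do not cancel a request that is still queued
        )
        r.raise_for_status()
        return r.json()["response"]
```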


@JTHesse commented on GitHub (Nov 19, 2024):

Thank you for the work @jessegross; unfortunately the issue persists with ollama:0.4.2.
I am now using the 0.3.14 image, which works fine.


@znfgnu commented on GitHub (Nov 20, 2024):

Unfortunately I have the same issue. The logs show:

ollama[3944749]: time=2024-11-20T10:49:29.413Z level=ERROR source=server.go:690 msg="Failed to acquire semaphore" error="context canceled"
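
On the 503 side of this report: Ollama appears to return 503 when its request queue (OLLAMA_MAX_QUEUE, 512 in the log above) is full, so once in-flight requests stop draining, every new call is rejected immediately. A client that treats 503 as retryable avoids amplifying the queue pressure; a hedged sketch, with illustrative retry counts, delays, and model name (this smooths over transient saturation but does not address the hang itself, which was fixed server-side as noted below):

```python
# Hypothetical retry helper: back off on 503 ("server busy") instead of
# hammering an already-saturated queue. Limits are illustrative assumptions.
import time

import requests

def generate_with_retry(prompt: str, retries: int = 5) -> str:
    delay = 1.0
    for _ in range(retries):
        r = requests.post(
            "http://127.0.0.1:11434/api/generate",
            json={"model": "qwen2.5", "prompt": prompt, "stream": False},
            timeout=600,
        )
        if r.status_code == 503:  # queue full; wait and try again
            time.sleep(delay)
            delay *= 2
            continue
        r.raise_for_status()
        return r.json()["response"]
    raise RuntimeError("server still busy after retries")
```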

@JTHesse commented on GitHub (Nov 25, 2024):

Version 0.4.4 is now working as expected, thank you!

Reference: github-starred/ollama#51337