[GH-ISSUE #9994] Ollama cannot run until it is restarted #32309

Open
opened 2026-04-22 13:26:35 -05:00 by GiteaMirror · 8 comments

Originally created by @Liuyuan0803 on GitHub (Mar 26, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/9994

What is the issue?

Sometimes, Ollama cannot chat or embed successfully (I only use these two APIs), but localhost:11434 still shows "Ollama is running".
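For context, the "Ollama is running" banner only confirms the HTTP server is up; it says nothing about whether inference still responds. A minimal sketch for probing the server directly with curl (the model tag is a placeholder for whatever is installed locally):

```shell
# Liveness check: returns "Ollama is running" even when inference is stuck.
curl http://localhost:11434/

# Chat probe against the native API (model tag is a placeholder).
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5:14b-instruct",
  "messages": [{"role": "user", "content": "ping"}],
  "stream": false
}'
```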

Relevant log output

2025/03/26 15:03:39 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\lipuyun\\Ollama_Model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-26T15:03:39.139+08:00 level=INFO source=images.go:432 msg="total blobs: 10"
time=2025-03-26T15:03:39.143+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-26T15:03:39.146+08:00 level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.1)"
time=2025-03-26T15:03:39.147+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-26T15:03:39.147+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-03-26T15:03:39.147+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=48 efficiency=0 threads=96
time=2025-03-26T15:03:39.148+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=48 efficiency=0 threads=96
time=2025-03-26T15:03:39.638+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="2.2 GiB"
time=2025-03-26T15:03:39.642+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="19.7 GiB"
time=2025-03-26T15:09:56.581+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-03-26T15:09:56.582+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-26T15:09:56.583+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-26T15:09:56.584+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=D:\lipuyun\Ollama_Model\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 gpu=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb parallel=4 available=21116650291 required="10.8 GiB"
time=2025-03-26T15:09:56.598+08:00 level=INFO source=server.go:105 msg="system memory" total="127.5 GiB" free="73.9 GiB" free_swap="117.1 GiB"
time=2025-03-26T15:09:56.598+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.vision.block_count default=0
time=2025-03-26T15:09:56.600+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.key_length default=128
time=2025-03-26T15:09:56.601+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen2.attention.value_length default=128
time=2025-03-26T15:09:56.603+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=49 layers.offload=49 layers.split="" memory.available="[19.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="10.8 GiB" memory.required.kv="1.5 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="7.4 GiB" memory.weights.repeating="7.4 GiB" memory.weights.nonrepeating="609.1 MiB" memory.graph.full="676.0 MiB" memory.graph.partial="916.1 MiB"
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from D:\lipuyun\Ollama_Model\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 14B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 14B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 14B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-14B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 14.77 B
print_info: general.name     = Qwen2.5 14B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-03-26T15:09:56.986+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="D:\\lipuyun\\Ollama\\ollama.exe runner --model D:\\lipuyun\\Ollama_Model\\blobs\\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 --ctx-size 8192 --batch-size 512 --n-gpu-layers 49 --threads 96 --no-mmap --parallel 4 --port 58610"
time=2025-03-26T15:09:56.990+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-26T15:09:56.991+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-26T15:09:56.993+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-26T15:09:57.101+08:00 level=INFO source=runner.go:931 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from D:\lipuyun\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from D:\lipuyun\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-26T15:09:57.283+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-26T15:09:57.292+08:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:58610"
time=2025-03-26T15:09:57.505+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_model_loader: loaded meta data with 34 key-value pairs and 579 tensors from D:\lipuyun\Ollama_Model\blobs\sha256-2049f5674b1e92b4464e5729975c9689fcfbf0b0e4443ccf10b5339f370f9a54 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 14B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 14B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv   8:                   general.base_model.count u32              = 1
llama_model_loader: - kv   9:                  general.base_model.0.name str              = Qwen2.5 14B
llama_model_loader: - kv  10:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  11:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-14B
llama_model_loader: - kv  12:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  13:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  14:                          qwen2.block_count u32              = 48
llama_model_loader: - kv  15:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  16:                     qwen2.embedding_length u32              = 5120
llama_model_loader: - kv  17:                  qwen2.feed_forward_length u32              = 13824
llama_model_loader: - kv  18:                 qwen2.attention.head_count u32              = 40
llama_model_loader: - kv  19:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  20:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                          general.file_type u32              = 15
llama_model_loader: - kv  23:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  24:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  25:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  26:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  27:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  31:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  241 tensors
llama_model_loader: - type q4_K:  289 tensors
llama_model_loader: - type q6_K:   49 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 8.37 GiB (4.87 BPW)
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 5120
print_info: n_layer          = 48
print_info: n_head           = 40
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 5
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 13824
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 14B
print_info: model params     = 14.77 B
print_info: general.name     = Qwen2.5 14B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 152064
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors:        CUDA0 model buffer size =  8148.38 MiB
load_tensors:          CPU model buffer size =   417.66 MiB
llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 48, can_shift = 1
llama_kv_cache_init:      CUDA0 KV buffer size =  1536.00 MiB
llama_init_from_model: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
llama_init_from_model:  CUDA_Host  output buffer size =     2.40 MiB
llama_init_from_model:      CUDA0 compute buffer size =   696.00 MiB
llama_init_from_model:  CUDA_Host compute buffer size =    26.01 MiB
llama_init_from_model: graph nodes  = 1686
llama_init_from_model: graph splits = 2
time=2025-03-26T15:10:01.790+08:00 level=INFO source=server.go:624 msg="llama runner started in 4.80 seconds"
time=2025-03-26T15:10:01.860+08:00 level=WARN source=runner.go:130 msg="truncating input prompt" limit=2048 prompt=6023 keep=4 new=2048
[GIN] 2025/03/26 - 15:10:08 | 200 |   12.4676306s |             ::1 | POST     "/v1/chat/completions"
time=2025-03-26T15:11:17.787+08:00 level=WARN source=runner.go:130 msg="truncating input prompt" limit=2048 prompt=4652 keep=4 new=2048
[GIN] 2025/03/26 - 15:11:20 | 200 |    2.5896677s |             ::1 | POST     "/v1/chat/completions"
[GIN] 2025/03/26 - 15:22:41 | 200 |         202µs |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/26 - 15:22:41 | 200 |       494.2µs |       127.0.0.1 | GET      "/api/ps"
[GIN] 2025/03/26 - 15:27:18 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/03/26 - 15:27:18 | 200 |       362.6µs |       127.0.0.1 | GET      "/api/ps"

OS

No response

GPU

No response

CPU

No response

Ollama version

No response

GiteaMirror added the needs more info and bug labels 2026-04-22 13:26:35 -05:00

@rick-github commented on GitHub (Mar 26, 2025):

What client are you using? What error is returned by the client?

@Liuyuan0803 commented on GitHub (Mar 26, 2025):

Cherry Studio. Nothing is returned, not even an error. I used Wireshark to monitor port 11434, and it shows the request is sent to Ollama successfully, but nothing comes back.
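One way to rule out the client is to hit the same endpoint the log shows Cherry Studio using (/v1/chat/completions) directly with curl; with streaming enabled, tokens should start arriving within seconds if the model is generating at all. A sketch, with the model tag as a placeholder:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:14b-instruct",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": true
  }'
```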

@rick-github commented on GitHub (Mar 26, 2025):

Does the task monitor show that the GPU is at 100% when it doesn't return a result? It could be that the model has lost coherence and is "rambling", i.e. just generating tokens without ever hitting a termination token. This can happen when the context buffer overflows, and your log shows that the input is being truncated. You can try increasing [`num_ctx`](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values:~:text=mirostat_tau%205.0-,num_ctx,-Sets%20the%20size) to give the model a larger context buffer, or setting [`num_predict`](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values:~:text=stop%20%22AI%20assistant%3A%22-,num_predict,-Maximum%20number%20of) to make the model stop generating tokens after a limit; see the sketch below.
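For example, both options can be set per request through the native API (a sketch; the values are illustrative and the model tag is a placeholder):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "qwen2.5:14b-instruct",
  "messages": [{"role": "user", "content": "hello"}],
  "options": {
    "num_ctx": 8192,
    "num_predict": 1024
  }
}'
```

Note that the server-wide default visible at the top of the log, OLLAMA_CONTEXT_LENGTH:2048, is where the limit=2048 in the truncation warnings comes from; raising it (and restarting the service) may also be the practical fix for clients that call the OpenAI-compatible endpoint and may not pass num_ctx per request.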

@Liuyuan0803 commented on GitHub (Mar 26, 2025):

Thank you, but I have another question: when I use "/api/embed" with an embedding model to get vectors, sometimes Ollama fails in the same way.
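For reference, a minimal /api/embed call looks like this (a sketch; the model tag is a placeholder for the embedding model in use):

```shell
curl http://localhost:11434/api/embed -d '{
  "model": "bge-m3",
  "input": ["first text to embed", "second text to embed"]
}'
```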

@rick-github commented on GitHub (Mar 26, 2025):

Do you have ollama logs showing calls to /api/embed when it doesn't work?
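On Windows the server log directory can be opened as follows (location per Ollama's troubleshooting docs):

```shell
explorer %LOCALAPPDATA%\Ollama
```

The server.log file there contains the startup and [GIN] request lines like the ones quoted above.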

@Liuyuan0803 commented on GitHub (Mar 28, 2025):

### 1. logs:
2025/03/28 11:09:44 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES:GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\lipuyun\Ollama_Model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-03-28T11:09:44.566+08:00 level=INFO source=images.go:432 msg="total blobs: 10"
time=2025-03-28T11:09:44.568+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-28T11:09:44.569+08:00 level=INFO source=routes.go:1297 msg="Listening on [::]:11434 (version 0.6.1)"
time=2025-03-28T11:09:44.570+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-28T11:09:44.571+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=2
time=2025-03-28T11:09:44.571+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=48 efficiency=0 threads=96
time=2025-03-28T11:09:44.572+08:00 level=INFO source=gpu_windows.go:214 msg="" package=1 cores=48 efficiency=0 threads=96
time=2025-03-28T11:09:45.061+08:00 level=INFO source=gpu.go:319 msg="detected OS VRAM overhead" id=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb library=cuda compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" overhead="2.3 GiB"
time=2025-03-28T11:09:45.065+08:00 level=INFO source=types.go:130 msg="inference compute" id=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb library=cuda variant=v12 compute=8.9 driver=12.4 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="19.7 GiB"
time=2025-03-28T11:11:38.111+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-28T11:11:38.112+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-28T11:11:38.114+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-28T11:11:38.116+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-28T11:11:38.117+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-28T11:11:38.118+08:00 level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=D:\lipuyun\Ollama_Model\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c gpu=GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb parallel=1 available=21119054643 required="1.6 GiB"
time=2025-03-28T11:11:38.137+08:00 level=INFO source=server.go:105 msg="system memory" total="127.5 GiB" free="73.8 GiB" free_swap="117.4 GiB"
time=2025-03-28T11:11:38.137+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.vision.block_count default=0
time=2025-03-28T11:11:38.143+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-28T11:11:38.144+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.key_length default=64
time=2025-03-28T11:11:38.145+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.value_length default=64
time=2025-03-28T11:11:38.147+08:00 level=WARN source=ggml.go:149 msg="key not found" key=bert.attention.head_count_kv default=1
time=2025-03-28T11:11:38.148+08:00 level=INFO source=server.go:138 msg=offload library=cuda layers.requested=-1 layers.model=25 layers.offload=25 layers.split="" memory.available="[19.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.6 GiB" memory.required.partial="1.6 GiB" memory.required.kv="12.0 MiB" memory.required.allocations="[1.6 GiB]" memory.weights.total="577.2 MiB" memory.weights.repeating="577.2 MiB" memory.weights.nonrepeating="488.3 MiB" memory.graph.full="32.0 MiB" memory.graph.partial="32.0 MiB"
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from D:\lipuyun\Ollama_Model\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 567M
llama_model_loader: - kv 3: general.license str = mit
llama_model_loader: - kv 4: general.tags arr[str,4] = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv 5: bert.block_count u32 = 24
llama_model_loader: - kv 6: bert.context_length u32 = 8192
llama_model_loader: - kv 7: bert.embedding_length u32 = 1024
llama_model_loader: - kv 8: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 9: bert.attention.head_count u32 = 16
llama_model_loader: - kv 10: bert.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 11: general.file_type u32 = 1
llama_model_loader: - kv 12: bert.attention.causal bool = false
llama_model_loader: - kv 13: bert.pooling_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = t5
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,250002] = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,250002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,250002] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 20: tokenizer.ggml.token_type_count u32 = 1
llama_model_loader: - kv 21: tokenizer.ggml.remove_extra_whitespaces bool = true
llama_model_loader: - kv 22: tokenizer.ggml.precompiled_charsmap arr[u8,237539] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 26: tokenizer.ggml.seperator_token_id u32 = 2
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.cls_token_id u32 = 0
llama_model_loader: - kv 29: tokenizer.ggml.mask_token_id u32 = 250001
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = F16
print_info: file size = 1.07 GiB (16.25 BPW)
load: model vocab missing newline token, using special_pad_id instead
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 4
load: token to piece cache size = 2.1668 MB
print_info: arch = bert
print_info: vocab_only = 1
print_info: model type = ?B
print_info: model params = 566.70 M
print_info: general.name = n/a
print_info: vocab type = UGM
print_info: n_vocab = 250002
print_info: n_merges = 0
print_info: BOS token = 0 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 3 '<unk>'
print_info: SEP token = 2 '</s>'
print_info: PAD token = 1 '<pad>'
print_info: MASK token = 250001 '[PAD250000]'
print_info: LF token = 0 '<s>'
print_info: EOG token = 2 '</s>'
print_info: max token length = 48
llama_model_load: vocab only - skipping tensors
time=2025-03-28T11:11:38.882+08:00 level=INFO source=server.go:405 msg="starting llama server" cmd="D:\lipuyun\Ollama\ollama.exe runner --model D:\lipuyun\Ollama_Model\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c --ctx-size 2048 --batch-size 512 --n-gpu-layers 25 --threads 96 --no-mmap --parallel 1 --port 60323"
time=2025-03-28T11:11:38.893+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-03-28T11:11:38.896+08:00 level=INFO source=server.go:585 msg="waiting for llama runner to start responding"
time=2025-03-28T11:11:38.898+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server error"
time=2025-03-28T11:11:39.006+08:00 level=INFO source=runner.go:931 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
load_backend: loaded CUDA backend from D:\lipuyun\Ollama\lib\ollama\cuda_v12\ggml-cuda.dll
load_backend: loaded CPU backend from D:\lipuyun\Ollama\lib\ollama\ggml-cpu-icelake.dll
time=2025-03-28T11:11:39.200+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.AVX512=1 CPU.0.AVX512_VBMI=1 CPU.0.AVX512_VNNI=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 CUDA.0.ARCHS=500,600,610,700,750,800,860,870,890,900,1200 CUDA.0.USE_GRAPHS=1 CUDA.0.PEER_MAX_BATCH_SIZE=128 compiler=cgo(clang)
time=2025-03-28T11:11:39.226+08:00 level=INFO source=runner.go:991 msg="Server listening on 127.0.0.1:60323"
time=2025-03-28T11:11:39.409+08:00 level=INFO source=server.go:619 msg="waiting for server to become available" status="llm server loading model"
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 4090) - 20140 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 389 tensors from D:\lipuyun\Ollama_Model\blobs\sha256-daec91ffb5dd0c27411bd71f29932917c49cf529a641d0168496c3a501e3062c (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 567M
llama_model_loader: - kv 3: general.license str = mit
llama_model_loader: - kv 4: general.tags arr[str,4] = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv 5: bert.block_count u32 = 24
llama_model_loader: - kv 6: bert.context_length u32 = 8192
llama_model_loader: - kv 7: bert.embedding_length u32 = 1024
llama_model_loader: - kv 8: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 9: bert.attention.head_count u32 = 16
llama_model_loader: - kv 10: bert.attention.layer_norm_epsilon f32 = 0.000010
llama_model_loader: - kv 11: general.file_type u32 = 1
llama_model_loader: - kv 12: bert.attention.causal bool = false
llama_model_loader: - kv 13: bert.pooling_type u32 = 2
llama_model_loader: - kv 14: tokenizer.ggml.model str = t5
llama_model_loader: - kv 15: tokenizer.ggml.pre str = default
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,250002] = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,250002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,250002] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: tokenizer.ggml.add_space_prefix bool = true
llama_model_loader: - kv 20: tokenizer.ggml.token_type_count u32 = 1
llama_model_loader: - kv 21: tokenizer.ggml.remove_extra_whitespaces bool = true
llama_model_loader: - kv 22: tokenizer.ggml.precompiled_charsmap arr[u8,237539] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...
llama_model_loader: - kv 23: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 24: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 25: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 26: tokenizer.ggml.seperator_token_id u32 = 2
llama_model_loader: - kv 27: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 28: tokenizer.ggml.cls_token_id u32 = 0
llama_model_loader: - kv 29: tokenizer.ggml.mask_token_id u32 = 250001
llama_model_loader: - kv 30: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 31: tokenizer.ggml.add_eos_token bool = true
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = F16
print_info: file size = 1.07 GiB (16.25 BPW)
load: model vocab missing newline token, using special_pad_id instead
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 4
load: token to piece cache size = 2.1668 MB
print_info: arch = bert
print_info: vocab_only = 0
print_info: n_ctx_train = 8192
print_info: n_embd = 1024
print_info: n_layer = 24
print_info: n_head = 16
print_info: n_head_kv = 16
print_info: n_rot = 64
print_info: n_swa = 0
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = 1
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 1.0e-05
print_info: f_norm_rms_eps = 0.0e+00
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: n_ff = 4096
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 0
print_info: pooling type = 2
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 8192
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 335M
print_info: model params = 566.70 M
print_info: general.name = n/a
print_info: vocab type = UGM
print_info: n_vocab = 250002
print_info: n_merges = 0
print_info: BOS token = 0 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 3 '<unk>'
print_info: SEP token = 2 '</s>'
print_info: PAD token = 1 '<pad>'
print_info: MASK token = 250001 '[PAD250000]'
print_info: LF token = 0 '<s>'
print_info: EOG token = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 24 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 25/25 layers to GPU
load_tensors: CUDA_Host model buffer size = 520.30 MiB
load_tensors: CUDA0 model buffer size = 577.22 MiB
llama_init_from_model: n_seq_max = 1
llama_init_from_model: n_ctx = 2048
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch = 512
llama_init_from_model: n_ubatch = 512
llama_init_from_model: flash_attn = 0
llama_init_from_model: freq_base = 10000.0
llama_init_from_model: freq_scale = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 2048, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 24, can_shift = 1
llama_kv_cache_init: CUDA0 KV buffer size = 192.00 MiB
llama_init_from_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
llama_init_from_model: CUDA_Host output buffer size = 0.00 MiB
llama_init_from_model: CUDA0 compute buffer size = 25.01 MiB
llama_init_from_model: CUDA_Host compute buffer size = 5.01 MiB
llama_init_from_model: graph nodes = 849
llama_init_from_model: graph splits = 4 (with bs=512), 2 (with bs=1)
time=2025-03-28T11:11:40.921+08:00 level=INFO source=server.go:624 msg="llama runner started in 2.02 seconds"
[GIN] 2025/03/28 - 11:11:41 | 200 | 3.2086079s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:41 | 200 | 158.5419ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:41 | 200 | 143.8341ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:41 | 200 | 119.812ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:41 | 200 | 154.2558ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 152.3745ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 161.5102ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 96.7754ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 150.9797ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 161.3874ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:42 | 200 | 131.177ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:43 | 200 | 121.8086ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:43 | 200 | 111.3074ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:43 | 200 | 110.0062ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:43 | 200 | 101.0394ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:46 | 200 | 2.7701523s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:46 | 200 | 132.3933ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:49 | 200 | 2.8192538s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:49 | 200 | 174.3936ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:52 | 200 | 2.9610501s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:52 | 200 | 180.535ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:55 | 200 | 2.5931473s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:55 | 200 | 217.086ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:55 | 200 | 198.8857ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:55 | 200 | 174.0897ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:56 | 200 | 194.5948ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:56 | 200 | 188.8659ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:56 | 200 | 183.4675ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:56 | 200 | 188.5184ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:57 | 200 | 184.6032ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:57 | 200 | 188.0136ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:57 | 200 | 202.4551ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:57 | 200 | 189.0539ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:57 | 200 | 182.562ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:58 | 200 | 176.8825ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:58 | 200 | 163.398ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:58 | 200 | 185.7276ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:58 | 200 | 197.0806ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:59 | 200 | 189.1101ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:59 | 200 | 184.7041ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:59 | 200 | 170.5567ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:11:59 | 200 | 174.865ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:11 | 200 | 182.0829ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 324.4937ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 148.9159ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 106.9827ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 122.7874ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 125.8575ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 130.3733ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:12 | 200 | 122.163ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 116.2861ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 122.8919ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 136.2746ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 128.3028ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 110.0891ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:13 | 200 | 122.4463ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 124.8436ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 119.9132ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 128.8288ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 125.7882ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 123.6601ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 111.2645ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:14 | 200 | 129.1993ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:12:15 | 200 | 123.9483ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:13:10 | 200 | 3.4333857s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:14:30 | 200 | 3.6304015s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:17:40 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/03/28 - 11:18:25 | 200 | 522µs | 127.0.0.1 | GET "/api/ps"
2. C:\Users\liuyuan>ollama ps
NAME ID SIZE PROCESSOR UNTIL
bge-m3:latest 790764642607 1.7 GB 100% GPU 24 hours from now

3. After 11:14:30, I sent a sentence to ollama again, but it never returned a result. However, when I check the running model information, ollama returns it successfully (see the probe sketch below).
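For anyone trying to reproduce this, a minimal probe along these lines can distinguish a hung /api/embed from a dead server. This is an illustrative sketch, not from the thread: it assumes Python 3 with the third-party requests package, the default 127.0.0.1:11434 endpoint, and bge-m3 as the loaded embedding model.

```python
# Probe /api/embed with a timeout, then confirm the server still answers
# /api/ps (which the reporter says keeps working during the hang).
import requests

BASE = "http://127.0.0.1:11434"  # assumption: default local endpoint

try:
    r = requests.post(
        f"{BASE}/api/embed",
        json={"model": "bge-m3", "input": "hello world"},
        timeout=30,  # healthy embeds in the log above return in well under 1 s
    )
    print("embed:", r.status_code, "vectors:", len(r.json().get("embeddings", [])))
except requests.exceptions.Timeout:
    print("embed request hung: no response within 30 s")

print("ps:", requests.get(f"{BASE}/api/ps", timeout=5).json())
```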

@rick-github commented on GitHub (Mar 28, 2025):

[GIN] 2025/03/28 - 11:12:15 | 200 | 123.9483ms | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:13:10 | 200 | 3.4333857s | 10.29.1.125 | POST "/api/embed"
[GIN] 2025/03/28 - 11:14:30 | 200 | 3.6304015s | 10.29.1.125 | POST "/api/embed"

The model got slow at the end; other than that, the log looks normal. Set OLLAMA_DEBUG=1 in the server environment; there might be something relevant in the more detailed log.
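One way to apply that suggestion, as a sketch only: launch the server with debug logging from Python. It assumes `ollama` is on PATH; with the Windows tray app, the usual route is instead to set OLLAMA_DEBUG=1 as a user environment variable and restart the app.

```python
# Restart `ollama serve` with verbose logging enabled.
import os
import subprocess

env = dict(os.environ, OLLAMA_DEBUG="1")  # enable detailed server logs
subprocess.run(["ollama", "serve"], env=env)
```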

@Liuyuan0803 commented on GitHub (Apr 11, 2025):

more logs:

server.log:

time=2025-04-11T15:38:36.581+08:00 level=INFO source=server.go:619 msg="llama runner started in 3.54 seconds"
time=2025-04-11T15:38:36.581+08:00 level=DEBUG source=sched.go:464 msg="finished setting up runner" model=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
time=2025-04-11T15:38:36.582+08:00 level=DEBUG source=sched.go:468 msg="context for request finished"
time=2025-04-11T15:38:36.582+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 duration=24h0m0s
time=2025-04-11T15:38:36.582+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 refCount=0
[GIN] 2025/04/11 - 15:38:36 | 200 | 4.01209s | 127.0.0.1 | POST "/api/generate"
time=2025-04-11T15:39:41.107+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
time=2025-04-11T15:39:41.114+08:00 level=DEBUG source=routes.go:1522 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n用Java写一个费纳波切数列<|im_end|>\n<|im_start|>assistant\n"
time=2025-04-11T15:39:41.120+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=39 used=0 remaining=39
[GIN] 2025/04/11 - 15:39:44 | 200 | 3.782699s | 127.0.0.1 | POST "/api/chat"
time=2025-04-11T15:39:44.847+08:00 level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-04-11T15:39:44.847+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 duration=24h0m0s
time=2025-04-11T15:39:44.847+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 refCount=0
time=2025-04-11T15:41:06.490+08:00 level=DEBUG source=sched.go:577 msg="evaluating already loaded" model=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730
time=2025-04-11T15:41:06.510+08:00 level=DEBUG source=routes.go:1522 msg="chat request" images=0 prompt="<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n用Java写一个费纳波切数列<|im_end|>\n<|im_start|>assistant\n当然可以!斐波那契数列是一个非常经典的序列,其特点是每个数字是前两个数字的和。下面我将提供一个简单的 Java 程序来生成斐波那契数列。这个程序将允许用户输入想要生成的斐波那契数的数量。\n\njava\nimport java.util.Scanner;\n\npublic class FibonacciSequence {\n public static void main(String[] args) {\n Scanner scanner = new Scanner(System.in);\n \n // 获取用户输入,即要输出的斐波那契数列项的数量\n System.out.print(\"请输入您想要生成的斐波那契数列项的数量: \");\n int n = scanner.nextInt();\n \n if (n <= 0) {\n System.out.println(\"请输入一个正整数\");\n } else {\n // 初始化前两个斐波那契数为0和1\n long first = 0;\n long second = 1;\n\n // 输出斐波那契数列的前n项\n for (int i = 0; i < n; i++) {\n System.out.print(first + \" \");\n // 更新两个前驱数\n long next = first + second;\n first = second;\n second = next;\n }\n }\n\n scanner.close();\n }\n}\n\n\n这个程序首先通过 Scanner 类获取用户输入的项数,然后使用一个简单的循环来生成并打印出相应的斐波那契数列。这里我们定义了两个变量 firstsecond 来分别存储当前和下一个斐波那契数值。\n\n希望这段代码对你有所帮助!如果你有任何其他问题或需要进一步的帮助,请随时告诉我。<|im_end|>\n<|im_start|>user\n用python写一段<|im_end|>\n<|im_start|>assistant\n"
time=2025-04-11T15:41:06.521+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=387 prompt=401 used=387 remaining=14
[GIN] 2025/04/11 - 15:41:11 | 200 | 4.7855432s | 127.0.0.1 | POST "/api/chat"
time=2025-04-11T15:41:11.236+08:00 level=DEBUG source=sched.go:409 msg="context for request finished"
time=2025-04-11T15:41:11.236+08:00 level=DEBUG source=sched.go:341 msg="runner with non-zero duration has gone idle, adding timer" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 duration=24h0m0s
time=2025-04-11T15:41:11.236+08:00 level=DEBUG source=sched.go:359 msg="after processing request finished event" modelPath=D:\ly\Ollama\Models\blobs\sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730 refCount=0

app.log:

time=2025-04-11T15:36:19.867+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:36:19.868+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:36:19.869+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:36:47.659+08:00 level=DEBUG source=eventloop.go:145 msg="unmanaged app message, lParm: 0x204"
time=2025-04-11T15:36:48.497+08:00 level=DEBUG source=lifecycle.go:38 msg="quit called"
time=2025-04-11T15:36:48.590+08:00 level=INFO source=lifecycle.go:89 msg="Waiting for ollama server to shutdown..."
time=2025-04-11T15:36:53.056+08:00 level=INFO source=logging.go:50 msg="ollama app started"
time=2025-04-11T15:36:53.056+08:00 level=INFO source=lifecycle.go:19 msg="app config" env="map[CUDA_VISIBLE_DEVICES:GPU-d8040dfd-0bc8-ad27-6eaf-8a27a2b07beb GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\ly\Ollama\Models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-04-11T15:36:53.112+08:00 level=DEBUG source=store.go:60 msg="loaded existing store C:\Users\liuyuan\AppData\Local\Ollama\config.json - ID: 0224cbd5-1cc6-46b1-9273-38ed3e0917c8"
time=2025-04-11T15:36:53.113+08:00 level=DEBUG source=lifecycle.go:68 msg="Not first time, skipping first run notification"
time=2025-04-11T15:36:53.112+08:00 level=DEBUG source=lifecycle.go:34 msg="starting callback loop"
time=2025-04-11T15:36:53.131+08:00 level=DEBUG source=server.go:181 msg="heartbeat from server: Head "http://0.0.0.0:11434/": EOF"
time=2025-04-11T15:36:53.131+08:00 level=INFO source=server.go:182 msg="unable to connect to server"
time=2025-04-11T15:36:53.131+08:00 level=DEBUG source=eventloop.go:22 msg="starting event handling loop"
time=2025-04-11T15:36:53.131+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:36:53.132+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:36:53.151+08:00 level=INFO source=server.go:127 msg="started ollama server with pid 20520"
time=2025-04-11T15:36:53.151+08:00 level=INFO source=server.go:129 msg="ollama server logs C:\Users\liuyuan\AppData\Local\Ollama\server.log"
time=2025-04-11T15:36:53.431+08:00 level=INFO source=server.go:158 msg="server shutdown with exit code 0"
time=2025-04-11T15:36:53.431+08:00 level=INFO source=lifecycle.go:93 msg="Ollama app exiting"
time=2025-04-11T15:36:56.138+08:00 level=DEBUG source=updater.go:74 msg="checking for available update" requestURL="https://ollama.com/api/update?arch=amd64&nonce=6c_mgc1SQghAgjwNmP0vOg&os=windows&ts=1744357016&version=0.6.5"
time=2025-04-11T15:36:56.279+08:00 level=DEBUG source=logging_windows.go:12 msg="viewing logs with start C:\Users\liuyuan\AppData\Local\Ollama"
time=2025-04-11T15:36:56.831+08:00 level=DEBUG source=updater.go:83 msg="check update response 204 (current version is up to date)"
time=2025-04-11T15:37:12.194+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:37:12.196+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:37:12.196+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:38:43.370+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:38:43.372+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:38:43.373+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:39:37.197+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:39:37.198+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:39:37.200+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:41:07.374+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:41:07.376+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:41:07.377+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:42:02.702+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:42:02.703+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:42:02.703+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:43:31.878+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:43:31.880+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:43:31.880+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:44:28.704+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:44:28.705+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:44:28.707+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"
time=2025-04-11T15:45:56.882+08:00 level=INFO source=server.go:141 msg="starting server..."
time=2025-04-11T15:45:56.884+08:00 level=WARN source=logging.go:76 msg="Failed to rotate log" older=C:\Users\liuyuan\AppData\Local\Ollama\server-1.log newer=C:\Users\liuyuan\AppData\Local\Ollama\server.log error="rename C:\Users\liuyuan\AppData\Local\Ollama\server.log C:\Users\liuyuan\AppData\Local\Ollama\server-1.log: The process cannot access the file because it is being used by another process."
time=2025-04-11T15:45:56.885+08:00 level=ERROR source=server.go:145 msg="failed to start server failed to start server context canceled"

Reference: github-starred/ollama#32309