[GH-ISSUE #4053] The server-side output gets mixed with the responses. #2516

Closed
opened 2026-04-12 12:50:32 -05:00 by GiteaMirror · 2 comments

Originally created by @JialeLiLab on GitHub (Apr 30, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4053

What is the issue?

Hi all,

I'm running Ollama, and it mixes its server logs with my outputs directly in the terminal, making it hard to interact with. Does anyone else experience this? Any advice on how to separate these so I can see just my inputs and outputs without the clutter of continuous server logs?

Thanks for any suggestions!

For example:

(base) root@gpumall-ins-542069835358213:/gm-data# ./ollama run llama3
[GIN] 2024/04/30 - 21:35:49 | 200 | 147.959µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/30 - 21:35:49 | 200 | 4.389429ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/30 - 21:35:49 | 200 | 761.46µs | 127.0.0.1 | POST "/api/show"
⠋ time=2024-04-30T21:35:52.303+08:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-30T21:35:52.303+08:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
time=2024-04-30T21:35:52.306+08:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2135940480/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89]"
time=2024-04-30T21:35:52.309+08:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-30T21:35:52.309+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠹ time=2024-04-30T21:35:52.465+08:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-30T21:35:52.535+08:00 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-30T21:35:52.536+08:00 level=INFO source=gpu.go:268 msg="Searching for GPU management library libcudart.so*"
⠸ time=2024-04-30T21:35:52.540+08:00 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [/tmp/ollama2135940480/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.11.8.89]"
time=2024-04-30T21:35:52.542+08:00 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-30T21:35:52.542+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
⠼ time=2024-04-30T21:35:52.691+08:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
⠴ time=2024-04-30T21:35:52.746+08:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=33 layers=33 required="5033.0 MiB" used="5033.0 MiB" available="23996.7 MiB" kv="256.0 MiB" fulloffload="164.0 MiB" partialoffload="677.5 MiB"
time=2024-04-30T21:35:52.746+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-30T21:35:52.747+08:00 level=INFO source=server.go:264 msg="starting llama server" cmd="/tmp/ollama2135940480/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --port 38085"
time=2024-04-30T21:35:52.747+08:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"23059827412992","timestamp":1714484152}
{"build":1,"commit":"7593639","function":"main","level":"INFO","line":2819,"msg":"build info","tid":"23059827412992","timestamp":1714484152}
{"function":"main","level":"INFO","line":2822,"msg":"system info","n_threads":32,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"23059827412992","timestamp":1714484152,"total_threads":64}
⠦ llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
⠧ llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
⠸ llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
⠴ llm_load_tensors: ggml ctx size = 0.22 MiB
⠇ llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4155.99 MiB
⠦ .
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
⠧ llama_new_context_with_model: CUDA0 compute buffer size = 258.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"23059827412992","timestamp":1714484154}
{"function":"initialize","level":"INFO","line":457,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"23059827412992","timestamp":1714484154}
{"function":"main","level":"INFO","line":3064,"msg":"model loaded","tid":"23059827412992","timestamp":1714484154}
{"function":"validate_model_chat_template","level":"ERR","line":437,"msg":"The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses","tid":"23059827412992","timestamp":1714484154}
{"function":"main","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"63","port":"38085","tid":"23059827412992","timestamp":1714484154}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"23059827412992","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":38750,"status":200,"tid":"23059020898304","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"23059827412992","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":38762,"status":200,"tid":"23059018797056","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33876,"status":200,"tid":"23058758840320","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33892,"status":200,"tid":"23058943975424","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33908,"status":200,"tid":"23058760941568","timestamp":1714484154}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"23059827412992","timestamp":1714484154}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33924,"status":200,"tid":"23058946076672","timestamp":1714484154}
⠇ {"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"23059827412992","timestamp":1714484155}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33960,"status":200,"tid":"23058756739072","timestamp":1714484155}
[GIN] 2024/04/30 - 21:35:55 | 200 | 5.818520362s | 127.0.0.1 | POST "/api/chat"

>>> how are you?
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/tokenize","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
{"function":"process_single_task","level":"INFO","line":1506,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":9,"tid":"23059827412992","timestamp":1714484161}
{"function":"log_server_request","level":"INFO","line":2734,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484161}
⠙ {"function":"launch_slot_with_data","level":"INFO","line":830,"msg":"slot is processing task","slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1809,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":14,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
{"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484161}
I'm just an AI, so I don't have feelings like humans do. But I'm functioning properly and ready to help answer
your questions or chat with you about a topic of your choice! How can I assist you today?{"function":"print_timings","level":"INFO","line":269,"msg":"prompt eval time = 91.42 ms / 14 tokens ( 6.53 ms per token, 153.14 tokens per second)","n_prompt_tokens_processed":14,"n_tokens_second":153.1443823358894,"slot_id":0,"t_prompt_processing":91.417,"t_token":6.5297857142857145,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"print_timings","level":"INFO","line":283,"msg":"generation eval time = 535.35 ms / 47 runs ( 11.39 ms per token, 87.79 tokens per second)","n_decoded":47,"n_tokens_second":87.79319658764655,"slot_id":0,"t_token":11.39040425531915,"t_token_generation":535.349,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"print_timings","level":"INFO","line":293,"msg":" total time = 626.77 ms","slot_id":0,"t_prompt_processing":91.417,"t_token_generation":535.349,"t_total":626.7660000000001,"task_id":10,"tid":"23059827412992","timestamp":1714484162}
{"function":"update_slots","level":"INFO","line":1640,"msg":"slot released","n_cache_tokens":61,"n_ctx":2048,"n_past":60,"n_system_tokens":0,"slot_id":0,"task_id":10,"tid":"23059827412992","timestamp":1714484162,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2734,"method":"POST","msg":"request","params":{},"path":"/completion","remote_addr":"127.0.0.1","remote_port":33968,"status":200,"tid":"23058754637824","timestamp":1714484162}
[GIN] 2024/04/30 - 21:36:02 | 200 | 759.646073ms | 127.0.0.1 | POST "/api/chat"

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.1.32

GiteaMirror added the bug label 2026-04-12 12:50:32 -05:00

@danbeibei commented on GitHub (Apr 30, 2024):

Hi,
I run the server in a separate terminal. If you are using something like tmux, you can run the server in a separate tmux window or pane.
You can also run the server as a service by following this doc: https://github.com/ollama/ollama/blob/main/docs/linux.md
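The separate-terminal setup described above can be sketched as follows (a minimal sketch: the log filename and tmux session name are arbitrary choices, not ollama conventions):

```shell
# Terminal/pane 1: run the server here, optionally redirecting its
# logs (stdout and stderr) to a file instead of the terminal
ollama serve > ollama-server.log 2>&1

# Or keep the server in a detached tmux session instead:
tmux new-session -d -s ollama 'ollama serve'

# Terminal/pane 2: the client talks to the server over HTTP, so only
# your prompts and the model's replies appear here
ollama run llama3
```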

<!-- gh-comment-id:2085474641 -->

@JialeLiLab commented on GitHub (Apr 30, 2024):

> ### What is the issue?
>
> Hi all,
>
> I'm running ollama that mixes its server logs with my outputs directly in the terminal, making it hard to interact with. Does anyone else experience this? Any advice on how to separate these so I can just see my inputs and outputs without the clutter of continuous server logs?
>
> Thanks for any suggestions!
>
> For example:
>
> (base) root@gpumall-ins-542069835358213:/gm-data# ./ollama run llama3
> [GIN] 2024/04/30 - 21:35:49 | 200 | 147.959µs | 127.0.0.1 | HEAD "/"
> [GIN] 2024/04/30 - 21:35:49 | 200 | 4.389429ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2024/04/30 - 21:35:49 | 200 | 761.46µs | 127.0.0.1 | POST "/api/show"
> […the rest of the quoted server log is identical to the issue body above…]
>
> ### OS
>
> Linux
>
> ### GPU
>
> Nvidia
>
> ### CPU
>
> AMD
>
> ### Ollama version
>
> 0.1.32

<!-- gh-comment-id:2085548978 -->
Reference: github-starred/ollama#2516