How to get content from content extraction #1816

Closed
opened 2025-11-11 14:53:50 -06:00 by GiteaMirror · 0 comments

Originally created by @gaussiangit on GitHub (Aug 19, 2024).

I am trying to build a RAG pipeline, but it is effectively unusable with the latest Ollama + Open WebUI.
I am assuming the content extraction is not accurate. I am providing one of my research papers as input and asking the model to summarize it.

![image](https://github.com/user-attachments/assets/4d36fc3c-1e3f-4597-8242-0ec32faac12a)
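
One way to narrow this down is to check what plain-text extraction actually recovers from the PDF before it ever reaches the model. A minimal sketch using the pypdf library (this is not necessarily what Open WebUI uses internally, and `paper.pdf` is a placeholder file name):

```python
# Sketch: dump what plain-text extraction recovers from the paper.
# Assumes `pip install pypdf`; "paper.pdf" is a placeholder file name.
from pypdf import PdfReader

reader = PdfReader("paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

print(f"{len(reader.pages)} pages, {len(text)} characters extracted")
print(text[:2000])  # eyeball the first couple of thousand characters
```

If the dumped text is garbled or empty, extraction is the problem; if it looks fine, the issue is more likely on the retrieval or context side. The server log from the run is below: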

```
time=2024-08-19T14:14:12.689Z level=INFO source=server.go:618 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla V100-PCIE-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4155.99 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="140448995848192" timestamp=1724076855
time=2024-08-19T14:14:15.448Z level=INFO source=server.go:623 msg="llama runner started in 3.01 seconds"
[GIN] 2024/08/19 - 14:14:15 | 200 | 3.169720163s | 173.10.0.2 | POST "/api/embeddings"
[GIN] 2024/08/19 - 14:14:15 | 200 | 23.80363ms | 173.10.0.2 | POST "/api/embeddings"
INFO [update_slots] input truncated | n_ctx=2048 n_erase=3479 n_keep=4 n_left=2044 n_shift=1022 tid="140448995848192" timestamp=1724076855
[GIN] 2024/08/19 - 14:14:17 | 200 | 1.859629956s | 173.10.0.2 | POST "/api/chat"
[GIN] 2024/08/19 - 14:14:17 | 200 | 156.630161ms | 173.10.0.2 | POST "/v1/chat/completions"
[GIN] 2024/08/19 - 14:19:11 | 200 | 32.461µs | 173.10.0.2 | GET "/api/version"
```
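
Note the `INFO [update_slots] input truncated | n_ctx=2048 n_erase=3479 ...` line: the prompt built from the document overflowed the 2048-token per-slot context (the 8192-token context is presumably divided across parallel slots), so most of the paper was cut off before the model saw it. That alone would explain poor summaries regardless of extraction quality. One possible mitigation is requesting a larger context window through Ollama's per-request options; a sketch using the documented `/api/chat` endpoint (the model name, prompt, and `num_ctx` value below are illustrative, not a verified fix):

```python
# Sketch: ask Ollama for a larger context window on a single request.
# Assumes a local Ollama server on the default port 11434; the model
# name and num_ctx value are placeholders, not a confirmed remedy.
import json
import urllib.request

payload = {
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Summarize the following paper: ..."}],
    "options": {"num_ctx": 8192},  # the log above shows only 2048 per slot
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["message"]["content"])
```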
