[GH-ISSUE #5816] orian ollama webui #50137

Closed
opened 2026-04-28 14:19:20 -05:00 by GiteaMirror · 32 comments

Originally created by @werruww on GitHub (Jul 20, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5816

Failed to post request http://Localhost:11434

(b) C:\Users\m\Desktop\1>ollama serve
Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

Ollama does not work with the Orian Ollama WebUI add-on on Edge.

GiteaMirror added the feature request label 2026-04-28 14:19:21 -05:00

@rick-github commented on GitHub (Jul 20, 2024):

Either there's already an ollama server running, or something else is using the port. If `ollama list` fails, then it's likely a different process. You can use `netstat -aon | findstr :11434` to find the id of the process that has bound to the port, and then find the name of the program with `tasklist /FI "PID eq xxxx"`, where `xxxx` is the number at the end of the line from the `netstat` command.
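
Putting those two steps together, a minimal diagnostic sequence in a Windows command prompt would look roughly like this (the PID is only a placeholder; use whatever number `netstat` prints in the last column):

```
REM list sockets bound to the Ollama port and note the owning PID (last column)
netstat -aon | findstr :11434

REM resolve that PID to a process name (replace 1234 with the PID netstat reported)
tasklist /FI "PID eq 1234"
```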


@werruww commented on GitHub (Jul 20, 2024):

(b) C:\Users\m\Desktop\1>netstat -aon | findstr :11434
TCP 127.0.0.1:11434 0.0.0.0:0 LISTENING 4184
TCP 127.0.0.1:11434 127.0.0.1:58211 ESTABLISHED 4184
TCP 127.0.0.1:58211 127.0.0.1:11434 ESTABLISHED 2628


@rick-github commented on GitHub (Jul 20, 2024):

Now do `tasklist /FI "PID eq 4184"`


@werruww commented on GitHub (Jul 21, 2024):

(b) C:\Users\m\Desktop\e>netstat -aon | findstr :11434
TCP 127.0.0.1:11434 0.0.0.0:0 LISTENING 8812
TCP 127.0.0.1:52102 127.0.0.1:11434 TIME_WAIT 0

(b) C:\Users\m\Desktop\e>tasklist /FI "PID eq 4184"
INFO: No tasks are running which match the specified criteria.


@werruww commented on GitHub (Jul 21, 2024):

![Untitled](https://github.com/user-attachments/assets/f881f0af-fe07-40fe-8cd0-e71235264aba)


@rick-github commented on GitHub (Jul 21, 2024):

Now do `tasklist /FI "PID eq 8812"`


@werruww commented on GitHub (Jul 21, 2024):

![U2ntitled](https://github.com/user-attachments/assets/10f4320a-b170-490a-a3ba-490da3302456)


@werruww commented on GitHub (Jul 21, 2024):

(b) C:\Users\m\Desktop\e>tasklist /FI "PID eq 8812"

Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
ollama.exe 8812 Console 1 19,752 K

(b) C:\Users\m\Desktop\e>


@rick-github commented on GitHub (Jul 21, 2024):

From the screenshot, orian was able to retrieve the model list, so it can communicate with the ollama server. What do the ollama server logs show when you ask a question and it fails?


@werruww commented on GitHub (Jul 21, 2024):

2024/07/21 15:39:59 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\m\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T15:39:59.249-07:00 level=INFO source=images.go:730 msg="total blobs: 4"
time=2024-07-21T15:39:59.250-07:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-21T15:39:59.250-07:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.46)"
time=2024-07-21T15:39:59.251-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-07-21T15:39:59.274-07:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="12.2 GiB"
[GIN] 2024/07/21 - 15:39:59 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/21 - 15:39:59 | 200 | 4.745ms | 127.0.0.1 | GET "/api/tags"


@werruww commented on GitHub (Jul 21, 2024):

2024/07/21 15:40:57 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\m\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T15:40:57.045-07:00 level=INFO source=images.go:778 msg="total blobs: 4"
time=2024-07-21T15:40:57.046-07:00 level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-21T15:40:57.047-07:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-21T15:40:57.048-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v6.1 cpu cpu_avx cpu_avx2 cuda_v11.3]"
time=2024-07-21T15:40:57.048-07:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T15:40:57.068-07:00 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-07-21T15:40:57.068-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="11.6 GiB"
[GIN] 2024/07/21 - 15:40:57 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/21 - 15:40:57 | 200 | 5.025ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:41:01 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/21 - 15:41:16 | 200 | 2.0518ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:41:17 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:41:46 | 403 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:41:52 | 403 | 0s | 127.0.0.1 | OPTIONS "/api/generate"
[GIN] 2024/07/21 - 15:42:39 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/21 - 15:42:39 | 200 | 23.9883ms | 127.0.0.1 | POST "/api/show"
time=2024-07-21T15:42:39.983-07:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[12.4 GiB]" memory.required.full="5.2 GiB" memory.required.partial="0 B" memory.required.kv="512.0 MiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-21T15:42:39.993-07:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe --model C:\Users\m\.ollama\models\blobs\sha256-ab9e4eec7e80892fd78f74d9a15d0299f1e22121cea44efd68a7a02a3fe9a1da --ctx-size 4096 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 51080"
time=2024-07-21T15:42:40.037-07:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T15:42:40.037-07:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T15:42:40.037-07:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="5964" timestamp=1721601760
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="5964" timestamp=1721601760 total_threads=4
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="6" port="51080" tid="5964" timestamp=1721601760
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from C:\Users\m\.ollama\models\blobs\sha256-ab9e4eec7e80892fd78f74d9a15d0299f1e22121cea44efd68a7a02a3fe9a1da (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct-imatrix
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
time=2024-07-21T15:42:40.290-07:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct-imatrix
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 4685.30 MiB
[GIN] 2024/07/21 - 15:42:52 | 403 | 0s | 127.0.0.1 | GET "/api/tags"
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 512.00 MiB
llama_new_context_with_model: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 296.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="5964" timestamp=1721601842
time=2024-07-21T15:44:02.055-07:00 level=INFO source=server.go:617 msg="llama runner started in 82.02 seconds"
[GIN] 2024/07/21 - 15:44:02 | 200 | 1m22s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/07/21 - 15:44:07 | 403 | 0s | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:44:39 | 200 | 997.9µs | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:44:41 | 200 | 22.664353s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/07/21 - 15:44:43 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:50:27 | 200 | 1.5728ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:50:28 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:50:48 | 200 | 1.4457ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:50:51 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:52:10 | 200 | 6.7769ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:52:13 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:52:40 | 200 | 6.2826ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:52:51 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:53:00 | 200 | 2.1904ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:53:02 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 15:53:36 | 200 | 1.0021ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 15:54:15 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
time=2024-07-21T16:05:17.322-07:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[12.0 GiB]" memory.required.full="5.2 GiB" memory.required.partial="0 B" memory.required.kv="512.0 MiB" memory.required.allocations="[5.2 GiB]" memory.weights.total="4.4 GiB" memory.weights.repeating="4.0 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="296.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-21T16:05:17.327-07:00 level=INFO source=server.go:383 msg="starting llama server" cmd="C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners\cpu_avx2\ollama_llama_server.exe --model C:\Users\m\.ollama\models\blobs\sha256-ab9e4eec7e80892fd78f74d9a15d0299f1e22121cea44efd68a7a02a3fe9a1da --ctx-size 4096 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 51486"
time=2024-07-21T16:05:17.331-07:00 level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-21T16:05:17.331-07:00 level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-21T16:05:17.331-07:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3337 commit="a8db2a9c" tid="2840" timestamp=1721603117
INFO [wmain] system info | n_threads=4 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="2840" timestamp=1721603117 total_threads=4
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="6" port="51486" tid="2840" timestamp=1721603117
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from C:\Users\m\.ollama\models\blobs\sha256-ab9e4eec7e80892fd78f74d9a15d0299f1e22121cea44efd68a7a02a3fe9a1da (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct-imatrix
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 15
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
time=2024-07-21T16:05:17.586-07:00 level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 19: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct-imatrix
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 4685.30 MiB
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 512.00 MiB
llama_new_context_with_model: KV self size = 512.00 MiB, K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
llama_new_context_with_model: CPU compute buffer size = 296.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [wmain] model loaded | tid="2840" timestamp=1721603122
time=2024-07-21T16:05:22.777-07:00 level=INFO source=server.go:617 msg="llama runner started in 5.45 seconds"
[GIN] 2024/07/21 - 16:05:36 | 200 | 19.6263103s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/07/21 - 16:09:01 | 200 | 1.0746ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:09:02 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:09:34 | 200 | 10.0524ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:09:35 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:16:46 | 200 | 1.3193ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:16:48 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:17:01 | 200 | 3.3784ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:17:06 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:19:40 | 200 | 2.1921ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:19:41 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:22:18 | 403 | 0s | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/07/21 - 16:42:30 | 200 | 1.619ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/07/21 - 16:42:32 | 403 | 0s | 127.0.0.1 | POST "/api/generate"


@werruww commented on GitHub (Jul 21, 2024):

I used several add-ons for Ollama and they worked fine, except for this one.
![Un3titled](https://github.com/user-attachments/assets/afc0b36c-2b25-4fec-9b20-51513c731260)


@werruww commented on GitHub (Jul 22, 2024):

![Untit4led](https://github.com/user-attachments/assets/7a5e4800-1c6f-47a3-b7b1-85f2757b219c)


@werruww commented on GitHub (Jul 22, 2024):

It does not work.


@rick-github commented on GitHub (Jul 22, 2024):

This is a guess, but I think the capital "L" in Localhost is the problem. ollama has a list of allowed origins (see `OLLAMA_ORIGINS` in the logs) and it contains `localhost` but not `Localhost`. There are two ways to test this: either change the connection string in orian to `localhost:11434`, or add `Localhost` to the environment variable `OLLAMA_ORIGINS` and restart ollama: `OLLAMA_ORIGINS=Localhost`.
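
One way to check that hypothesis from the command line, assuming curl is available (it ships with recent Windows 10/11 builds), is to replay a request with two different Origin headers and compare the status codes; if the allow list is matched case-sensitively, the first request should come back 403 and the second 200:

```
REM origin with a capital "L" - expected to be rejected (403) if the guess is right
curl -i -H "Origin: http://Localhost:11434" http://127.0.0.1:11434/api/tags

REM lowercase origin, covered by the default OLLAMA_ORIGINS wildcards - expected 200
curl -i -H "Origin: http://localhost:11434" http://127.0.0.1:11434/api/tags
```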


@werruww commented on GitHub (Jul 22, 2024):

(b) C:\Users\m\Desktop\e>set OLLAMA_ORIGINS=http://localhost:11434

(b) C:\Users\m\Desktop\e>


@werruww commented on GitHub (Jul 22, 2024):

(b) C:\Users\m\Desktop\e>set OLLAMA_ORIGINS=http://localhost:11434

(b) C:\Users\m\Desktop\e>ollama start
2024/07/21 17:05:35 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\m\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost:11434 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T17:05:35.244-07:00 level=INFO source=images.go:778 msg="total blobs: 4"
time=2024-07-21T17:05:35.245-07:00 level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-21T17:05:35.247-07:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-21T17:05:35.250-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1]"
time=2024-07-21T17:05:35.250-07:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T17:05:35.329-07:00 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-07-21T17:05:35.329-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="9.3 GiB"

(b) C:\Users\m\Desktop\e>

(b) C:\Users\m\Desktop\e>ollama serve
2024/07/21 17:05:53 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\m\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost:11434 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T17:05:53.378-07:00 level=INFO source=images.go:778 msg="total blobs: 4"
time=2024-07-21T17:05:53.379-07:00 level=INFO source=images.go:785 msg="total unused blobs removed: 0"
time=2024-07-21T17:05:53.383-07:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.2.7)"
time=2024-07-21T17:05:53.385-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v6.1 cpu]"
time=2024-07-21T17:05:53.388-07:00 level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
time=2024-07-21T17:05:53.410-07:00 level=INFO source=gpu.go:346 msg="no compatible GPUs were discovered"
time=2024-07-21T17:05:53.410-07:00 level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="9.4 GiB"

(b) C:\Users\m\Desktop\e>ollama list
NAME ID SIZE MODIFIED
Llama-3.gguf:latest 99059b18f23c 4.9 GB 3 hours ago

(b) C:\Users\m\Desktop\e>


@werruww commented on GitHub (Jul 22, 2024):

2024/07/21 15:39:59 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\Users\m\.ollama\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\Users\m\AppData\Local\Programs\Ollama\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-21T15:39:59.249-07:00 level=INFO source=images.go:730 msg="total blobs: 4"
time=2024-07-21T15:39:59.250-07:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-21T15:39:59.250-07:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.46)"
time=2024-07-21T15:39:59.251-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-07-21T15:39:59.274-07:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="12.2 GiB"
[GIN] 2024/07/21 - 15:39:59 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/07/21 - 15:39:59 | 200 | 4.745ms | 127.0.0.1 | GET "/api/tags"

<!-- gh-comment-id:2241827229 --> @werruww commented on GitHub (Jul 22, 2024): 2024/07/21 15:39:59 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:C:\\Users\\m\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\m\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]" time=2024-07-21T15:39:59.249-07:00 level=INFO source=images.go:730 msg="total blobs: 4" time=2024-07-21T15:39:59.250-07:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0" time=2024-07-21T15:39:59.250-07:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.46)" time=2024-07-21T15:39:59.251-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]" time=2024-07-21T15:39:59.274-07:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="16.0 GiB" available="12.2 GiB" [GIN] 2024/07/21 - 15:39:59 | 200 | 0s | 127.0.0.1 | HEAD "/" [GIN] 2024/07/21 - 15:39:59 | 200 | 4.745ms | 127.0.0.1 | GET "/api/tags"
Author
Owner

@rick-github commented on GitHub (Jul 22, 2024):

OLLAMA_ORIGINS=Localhost

No http prefix.
No port suffix.
Capital "L".
Set it in the process context of the server. I don't know how you do that on Windows, but since the logs don't show a change in OLLAMA_ORIGINS, whatever you are doing is not taking effect.

<!-- gh-comment-id:2241832377 --> @rick-github commented on GitHub (Jul 22, 2024): `OLLAMA_ORIGINS=Localhost` No http prefix. No port suffix. Capital "L". Set it in the process context of the server. I don't know how you do that in windows, but since the logs don't show a change in `OLLAMA_ORIGINS`, whatever you are doing is not the right way.
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

(base) C:\Windows\system32>cd C:\Users\m\Desktop\e\ollama-webui

(base) C:\Users\m\Desktop\e\ollama-webui>conda activate b

(b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS=Localhost
'OLLAMA_ORIGINS' is not recognized as an internal or external command,
operable program or batch file.

(b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS=Localhost:11434
'OLLAMA_ORIGINS' is not recognized as an internal or external command,
operable program or batch file.

(b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS
'OLLAMA_ORIGINS' is not recognized as an internal or external command,
operable program or batch file.

(b) C:\Users\m\Desktop\e\ollama-webui>

<!-- gh-comment-id:2241834940 --> @werruww commented on GitHub (Jul 22, 2024): (base) C:\Windows\system32>cd C:\Users\m\Desktop\e\ollama-webui (base) C:\Users\m\Desktop\e\ollama-webui>conda activate b (b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS=Localhost 'OLLAMA_ORIGINS' is not recognized as an internal or external command, operable program or batch file. (b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS=Localhost:11434 'OLLAMA_ORIGINS' is not recognized as an internal or external command, operable program or batch file. (b) C:\Users\m\Desktop\e\ollama-webui>OLLAMA_ORIGINS 'OLLAMA_ORIGINS' is not recognized as an internal or external command, operable program or batch file. (b) C:\Users\m\Desktop\e\ollama-webui>
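The errors above come from cmd.exe itself: VAR=value on its own is Unix-shell syntax, so cmd tries to run a program called OLLAMA_ORIGINS=Localhost. On Windows the variable has to be set with the built-in set command, in the same window that then launches the server, so that ollama serve inherits it. A minimal sketch of the sequence rick-github is describing (the prompt is just the one from the session above, the value Localhost is his suggested workaround, and the Ollama tray app has to be quit first so port 11434 is free):

(b) C:\Users\m\Desktop\e\ollama-webui>set OLLAMA_ORIGINS=Localhost

(b) C:\Users\m\Desktop\e\ollama-webui>ollama serve

If it takes effect, Localhost appears in the OLLAMA_ORIGINS list printed in the server config line of the startup log.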
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

Is there a browser add-on that works with Ollama and has chat with books (documents)?

<!-- gh-comment-id:2241837465 --> @werruww commented on GitHub (Jul 22, 2024): Is there an add-on for the browser that works as a mother and has a chat with books?
Author
Owner

@rick-github commented on GitHub (Jul 22, 2024):

I don't have specific knowledge of add-ons, but a few are listed in the extensions and plugins section.

<!-- gh-comment-id:2241840355 --> @rick-github commented on GitHub (Jul 22, 2024): I don't have specific knowledge of add-ons, but a few are listed in the [extensions and plugins](https://github.com/ollama/ollama#extensions--plugins) section.
Author
Owner

@rick-github commented on GitHub (Jul 22, 2024):

This is a bug in the CORS package that ollama uses.

<!-- gh-comment-id:2241917165 --> @rick-github commented on GitHub (Jul 22, 2024): This is a bug in the CORS package that ollama uses.
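To spell out what rick-github describes here and in the following comments: the origin check is an exact, case-sensitive string comparison, so a host written as Localhost never matches the lowercase localhost entries that ollama puts in OLLAMA_ORIGINS by default, which is also why the suggested workaround was to add Localhost, with a capital L, to that list. The snippet below is only a rough Python illustration of that kind of check, not the actual Go code in gin-contrib/cors:

# Rough illustration of a case-sensitive allow-list check; not the real gin-contrib/cors code.
ALLOWED = ["http://localhost", "https://localhost", "http://127.0.0.1"]

def origin_allowed(origin, case_sensitive=True):
    # With an exact match, "http://Localhost" never equals the lowercase entries.
    if case_sensitive:
        return origin in ALLOWED
    return origin.lower() in [o.lower() for o in ALLOWED]

print(origin_allowed("http://Localhost"))                        # False: capital "L" fails
print(origin_allowed("http://Localhost", case_sensitive=False))  # True: compared lowercased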
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

Ollama's add-ons work for plain chat, but the add-ons that have chat with books (documents) are the ones that do not work.

<!-- gh-comment-id:2241922791 --> @werruww commented on GitHub (Jul 22, 2024): Olama's add-ons work with chat, but the add-ons that have chat with books are the ones that do not work
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

What is the solution?

<!-- gh-comment-id:2241927090 --> @werruww commented on GitHub (Jul 22, 2024): What is the solution؟
Author
Owner

@rick-github commented on GitHub (Jul 22, 2024):

Two possible solutions: 1) file a bug with the developer of orian and ask them to lowercase the hostname, or 2) file a bug with the developer of github.com/gin-contrib/cors and ask them to do case-insensitive checking of the hostname. I couldn't find a GitHub repo for orian, so number 2 is probably easier, but that may take a while; it would be faster for the developer of orian to fix their code.

<!-- gh-comment-id:2241929995 --> @rick-github commented on GitHub (Jul 22, 2024): Two possible solutions. 1) file a bug with the developer of orian and ask them to lowercase the hostname. 2) file a bug with the developer of `github.com/gin-contrib/cors` and ask them to do case-insensitive checking of the hostname. I couldn't find a github repo for orian, so number 2 is probably easier. But that may take a while, it would be faster for the developer of orian to fix their code.
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

I want a Python script to chat with a book using Ollama in the terminal.

<!-- gh-comment-id:2241932559 --> @werruww commented on GitHub (Jul 22, 2024): I want a Python script to create a chat with a book using Obama in the terminal
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

or an add-on for Edge

<!-- gh-comment-id:2241932827 --> @werruww commented on GitHub (Jul 22, 2024): or add for edge
Author
Owner

@werruww commented on GitHub (Jul 22, 2024):

and thank you

<!-- gh-comment-id:2241933040 --> @werruww commented on GitHub (Jul 22, 2024): and thank you
Author
Owner

@rick-github commented on GitHub (Jul 22, 2024):

There are plenty of terminal demos that you can modify, e.g. https://github.com/steinbring/python-langchain-rag-demo or https://docs.llamaindex.ai/en/stable/getting_started/starter_tools/rag_cli/. If you are looking for something more complete that runs in the browser instead of the terminal, look at https://github.com/datvodinh/rag-chatbot.git or https://github.com/infiniflow/ragflow.

There are more listed on the ollama integrations page, https://github.com/ollama/ollama?tab=readme-ov-file#community-integrations

<!-- gh-comment-id:2241944831 --> @rick-github commented on GitHub (Jul 22, 2024): there are plenty of terminal demos that you can modify, eg https://github.com/steinbring/python-langchain-rag-demo or https://docs.llamaindex.ai/en/stable/getting_started/starter_tools/rag_cli/. If you are looking for something more complete but runs in the browser instead of terminal, look at https://github.com/datvodinh/rag-chatbot.git or https://github.com/infiniflow/ragflow. There are more listed on the ollama integrations page, https://github.com/ollama/ollama?tab=readme-ov-file#community-integrations
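Since the request above was specifically a Python script to chat with a book through Ollama in the terminal, here is a minimal sketch. It assumes the local server from this thread is reachable at http://localhost:11434, that the model name is the Llama-3.gguf entry shown by ollama list earlier, that the book has been saved as a plain-text file (book.txt is a made-up name), and that the requests package is installed. Retrieval is a deliberately naive keyword-overlap scorer rather than a real embedding index, so treat it as a starting point, not a finished RAG pipeline:

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # local Ollama generate endpoint
MODEL = "Llama-3.gguf"   # assumption: the model name shown by ollama list above
BOOK_PATH = "book.txt"   # assumption: the book saved as plain UTF-8 text
CHUNK_SIZE = 1000        # characters per chunk

def load_chunks(path, size=CHUNK_SIZE):
    # Read the whole book and split it into fixed-size character chunks.
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question, chunks, k=3):
    # Naive retrieval: rank chunks by how many of the question's words they contain.
    words = {w.lower() for w in question.split() if len(w) > 3}
    scored = sorted(chunks, key=lambda c: sum(w in c.lower() for w in words), reverse=True)
    return scored[:k]

def ask(question, chunks):
    # Send the question plus the retrieved excerpts to the local Ollama server.
    context = "\n---\n".join(top_chunks(question, chunks))
    prompt = ("Answer the question using only the following excerpts from the book.\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    resp = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    chunks = load_chunks(BOOK_PATH)
    print(f"Loaded {len(chunks)} chunks from {BOOK_PATH}. Press Ctrl+C to quit.")
    while True:
        question = input("\nYou: ").strip()
        if question:
            print("\nModel:", ask(question, chunks))

Saved as, say, chat_with_book.py and run with python chat_with_book.py, this gives a simple question-and-answer loop in the terminal; swapping the keyword scoring for embeddings (for example via the demos linked above) would be the obvious next step.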
Author
Owner

@KarthikeyaKollu commented on GitHub (Jul 30, 2024):

Hey there, I'm the founder of that extension. Reach out to me at https://www.linkedin.com/in/karthikeyakollu/

<!-- gh-comment-id:2257480072 --> @KarthikeyaKollu commented on GitHub (Jul 30, 2024): Hey there, im the founder of that Extension Reachout to me at https://www.linkedin.com/in/karthikeyakollu/
Author
Owner

@pdevine commented on GitHub (Sep 12, 2024):

I'm going to go ahead and close this.

<!-- gh-comment-id:2347286723 --> @pdevine commented on GitHub (Sep 12, 2024): I'm going to go ahead an close this.