[GH-ISSUE #4124] /api/embeddings responds with 500 before Ollama is initialized - handle max queued requests failure better #28320

Closed
opened 2026-04-22 06:23:18 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @maximiliangugler on GitHub (May 3, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/4124

Originally assigned to: @dhiltgen on GitHub.

What is the issue?

Hello,
please forgive the ambiguity of this report.

The issue I am encountering is the following:

Before updating to 0.1.33, I was running version 0.1.32. I was running the server with embedding models to generate embeddings, and I was using the langchain OllamaEmbeddings class for it.

I wrote a custom wrapper for asynchronous embeddings to reduce the time it takes to embed documents:
https://github.com/maxggl/rag-experiment/blob/main/get_embedding_function.py

import asyncio
import aiohttp
from langchain_community.embeddings.ollama import OllamaEmbeddings

class AsyncOllamaEmbedder:
    def __init__(self, model='avr/sfr-embedding-mistral:q8_0', base_url='http://localhost:11434'):
        self.sync_embeddings = OllamaEmbeddings(model=model)
        self.base_url = f"{base_url}/api/embeddings"
        self.session = None

    async def init_session(self):
        if self.session is None or self.session.closed:
            # ClientSession() is constructed synchronously; there is no
            # need to drive the async context manager by hand.
            self.session = aiohttp.ClientSession()

    async def close_session(self):
        if self.session and not self.session.closed:
            await self.session.close()

    def embed_documents(self, texts):
        # Synchronous wrapper for asynchronous embedding
        return self.sync_call(self.async_embed_documents, texts)

    def embed_query(self, query):
        # Single query synchronous wrapper for asynchronous embedding
        return self.sync_call(self.async_embed_documents, [query])[0]

    async def async_embed_documents(self, texts):
        # Initialize session right before use
        await self.init_session()
        tasks = [self.send_embedding_request(text) for text in texts]
        results = await asyncio.gather(*tasks)
        await self.close_session()
        return results

    async def send_embedding_request(self, text):
        await self.init_session()  # Ensure session is available
        async with self.session.post(self.base_url, json={'model': self.sync_embeddings.model, 'prompt': text}) as response:
            if response.status == 200:
                data = await response.json()
                return data.get('embedding')
            else:
                return None  # Handle errors as needed

    def sync_call(self, async_func, *args):
        # Bridge from synchronous code into a coroutine.
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No event loop running in this thread: asyncio.run creates
            # one, runs the coroutine, and closes the loop for us.
            return asyncio.run(async_func(*args))
        # A loop is already running in this thread (e.g. in Jupyter);
        # run_until_complete would raise here, so run the coroutine on a
        # fresh loop in a worker thread instead.
        import concurrent.futures
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(asyncio.run, async_func(*args)).result()

def get_embedding_function():
    return AsyncOllamaEmbedder()
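The fan-out pattern the wrapper relies on — one task per document, collected in input order with `asyncio.gather` — can be shown in isolation. A stdlib-only sketch, with a stand-in coroutine in place of the HTTP call (`fake_embed` and its dummy return value are illustrative only):

```python
import asyncio

async def fake_embed(text):
    # Stand-in for send_embedding_request: simulate an I/O-bound call.
    await asyncio.sleep(0)
    return [float(len(text))]  # dummy one-dimensional "embedding"

async def embed_all(texts):
    # Schedule one task per text; gather preserves input order.
    return await asyncio.gather(*(fake_embed(t) for t in texts))

# asyncio.run drives the event loop from synchronous code.
result = asyncio.run(embed_all(["ab", "cdef"]))
```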

With v0.1.32 everything was working fine, and all requests returned 200 once the model had loaded:


time=2024-05-03T13:46:20.391+02:00 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-05-03T13:46:20.405+02:00 level=INFO source=server.go:127 msg="offload to gpu" reallayers=13 layers=13 required="691.1 MiB" used="691.1 MiB" available="9073.0 MiB" kv="6.0 MiB" fulloffload="12.0 MiB" partialoffload="12.0 MiB"
time=2024-05-03T13:46:20.405+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T13:46:20.409+02:00 level=INFO source=server.go:264 msg="starting llama server" cmd="C:\\Users\\MAXIMI~1\\AppData\\Local\\Temp\\ollama1382626371\\runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Maximilian\\.ollama\\models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --port 53479"
time=2024-05-03T13:46:20.430+02:00 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"17128","timestamp":1714736780}
{"build":2679,"commit":"7593639","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"17128","timestamp":1714736780}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"17128","timestamp":1714736780,"total_threads":24}
llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from C:\Users\Maximilian\.ollama\models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = nomic-bert
llama_model_loader: - kv   1:                               general.name str              = nomic-embed-text-v1.5
llama_model_loader: - kv   2:                     nomic-bert.block_count u32              = 12
llama_model_loader: - kv   3:                  nomic-bert.context_length u32              = 2048
llama_model_loader: - kv   4:                nomic-bert.embedding_length u32              = 768
llama_model_loader: - kv   5:             nomic-bert.feed_forward_length u32              = 3072
llama_model_loader: - kv   6:            nomic-bert.attention.head_count u32              = 12
llama_model_loader: - kv   7:    nomic-bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv   8:                          general.file_type u32              = 1
llama_model_loader: - kv   9:                nomic-bert.attention.causal bool             = false
llama_model_loader: - kv  10:                    nomic-bert.pooling_type u32              = 1
llama_model_loader: - kv  11:                  nomic-bert.rope.freq_base f32              = 1000.000000
llama_model_loader: - kv  12:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  13:                tokenizer.ggml.bos_token_id u32              = 101
llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 102
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,30522]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  20:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  22:                tokenizer.ggml.cls_token_id u32              = 101
llama_model_loader: - kv  23:               tokenizer.ggml.mask_token_id u32              = 103
llama_model_loader: - type  f32:   51 tensors
llama_model_loader: - type  f16:   61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = nomic-bert
llm_load_print_meta: vocab type       = WPM
llm_load_print_meta: n_vocab          = 30522
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 768
llm_load_print_meta: n_head           = 12
llm_load_print_meta: n_head_kv        = 12
llm_load_print_meta: n_layer          = 12
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 768
llm_load_print_meta: n_embd_v_gqa     = 768
llm_load_print_meta: f_norm_eps       = 1.0e-12
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 3072
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 0
llm_load_print_meta: pooling type     = 1
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 137M
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 136.73 M
llm_load_print_meta: model size       = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name     = nomic-embed-text-v1.5
llm_load_print_meta: BOS token        = 101 '[CLS]'
llm_load_print_meta: EOS token        = 102 '[SEP]'
llm_load_print_meta: UNK token        = 100 '[UNK]'
llm_load_print_meta: SEP token        = 102 '[SEP]'
llm_load_print_meta: PAD token        = 0 '[PAD]'
llm_load_print_meta: CLS token        = 101 '[CLS]'
llm_load_print_meta: MASK token       = 103 '[MASK]'
llm_load_print_meta: LF token         = 0 '[PAD]'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.09 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors:        CPU buffer size =    44.72 MiB
llm_load_tensors:      CUDA0 buffer size =   216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =    72.00 MiB
llama_new_context_with_model: KV self size  =   72.00 MiB, K (f16):   36.00 MiB, V (f16):   36.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.00 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =    23.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     3.50 MiB
llama_new_context_with_model: graph nodes  = 453
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"17128","timestamp":1714736781}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"17128","timestamp":1714736781}
{"function":"wmain","level":"INFO","line":3064,"msg":"model loaded","tid":"17128","timestamp":1714736781}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3267,"msg":"HTTP server listening","n_threads_http":"23","port":"53479","tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1578,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"17128","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53484,"status":200,"tid":"26436","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"17128","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53483,"status":200,"tid":"25184","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"17128","timestamp":1714736782}{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53481,"status":200,"tid":"18948","timestamp":1714736782}

{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53486,"status":200,"tid":"10400","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53485,"status":200,"tid":"5640","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1648,"msg":"slot released","n_cache_tokens":183,"n_ctx":2048,"n_past":183,"n_system_tokens":0,"slot_id":0,"task_id":7,"tid":"17128","timestamp":1714736782,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2741,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
[GIN] 2024/05/03 - 13:46:22 | 200 |    2.2024646s |       127.0.0.1 | POST     "/api/embeddings"
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"process_single_task","level":"INFO","line":1510,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":11,"tid":"17128","timestamp":1714736782}
{"function":"log_server_request","level":"INFO","line":2741,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53498,"status":200,"tid":"20792","timestamp":1714736782}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1840,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782}
{"function":"update_slots","level":"INFO","line":1648,"msg":"slot released","n_cache_tokens":22,"n_ctx":2048,"n_past":22,"n_system_tokens":0,"slot_id":0,"task_id":12,"tid":"17128","timestamp":1714736782,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2741,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53549,"status":200,"tid":"22736","timestamp":1714736782}

But after updating to 0.1.33, the requests were immediately answered with 500 because the model was not yet loaded; instead of queueing them, the server rejected them outright. At least in the log it appears this way:

time=2024-05-03T13:51:30.047+02:00 level=INFO source=gpu.go:96 msg="Detecting GPUs"
[GIN] 2024/05/03 - 13:51:30 | 500 |     11.5625ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     11.5625ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     11.5625ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     12.6713ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     12.7613ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     13.2678ms |       127.0.0.1 | POST     "/api/embeddings"
time=2024-05-03T13:51:30.057+02:00 level=INFO source=gpu.go:101 msg="detected GPUs" library=C:\Users\Maximilian\AppData\Local\Programs\Ollama\cudart64_110.dll count=1
time=2024-05-03T13:51:30.057+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
[GIN] 2024/05/03 - 13:51:30 | 500 |     12.8669ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      13.386ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     14.3949ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     13.8924ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     14.3949ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     14.9223ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.0123ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.0123ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.1662ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.6853ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.6853ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.6853ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.6853ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.1662ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      15.672ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.1662ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.1662ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     16.4901ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     15.6853ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      15.672ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     17.0477ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     16.6936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     16.6936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     16.4901ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     17.0477ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     17.0477ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     16.6936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     18.0693ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     18.1391ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.329ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     18.1391ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      8.2953ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.476ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.5942ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     18.1391ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.6967ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      18.073ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      5.9696ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.476ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |     17.0415ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      5.3425ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.069ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      8.9295ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.675ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |       6.682ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      5.4222ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      5.9277ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.7458ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.2516ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.5524ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.5524ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.5524ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.5524ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.4886ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.9948ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.4886ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.4886ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      6.9948ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      8.8933ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      7.2587ms |       127.0.0.1 | POST     "/api/embeddings"
....
[GIN] 2024/05/03 - 13:51:30 | 500 |      2.6255ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      2.1017ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:30 | 500 |      1.5844ms |       127.0.0.1 | POST     "/api/embeddings"
time=2024-05-03T13:51:30.281+02:00 level=WARN source=memory.go:18 msg="requested context length is greater than model max context length" requested=8192 model=2048
time=2024-05-03T13:51:30.281+02:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=13 memory.available="9073.0 MiB" memory.required.full="735.9 MiB" memory.required.partial="735.9 MiB" memory.required.kv="6.0 MiB" memory.weights.total="260.9 MiB" memory.weights.repeating="216.1 MiB" memory.weights.nonrepeating="44.7 MiB" memory.graph.full="12.0 MiB" memory.graph.partial="12.0 MiB"
time=2024-05-03T13:51:30.281+02:00 level=WARN source=server.go:76 msg="requested context length is greater than the model's training context window size" requested=8192 "training size"=2048
time=2024-05-03T13:51:30.282+02:00 level=INFO source=memory.go:152 msg="offload to gpu" layers.real=-1 layers.estimate=13 memory.available="9073.0 MiB" memory.required.full="789.9 MiB" memory.required.partial="789.9 MiB" memory.required.kv="24.0 MiB" memory.weights.total="260.9 MiB" memory.weights.repeating="216.1 MiB" memory.weights.nonrepeating="44.7 MiB" memory.graph.full="48.0 MiB" memory.graph.partial="48.0 MiB"
time=2024-05-03T13:51:30.282+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-05-03T13:51:30.286+02:00 level=INFO source=server.go:289 msg="starting llama server" cmd="C:\\Users\\Maximilian\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Maximilian\\.ollama\\models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --parallel 1 --port 53915"
time=2024-05-03T13:51:30.305+02:00 level=INFO source=sched.go:340 msg="loaded runners" count=1
time=2024-05-03T13:51:30.305+02:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"9888","timestamp":1714737090}
{"build":2770,"commit":"952d03d","function":"wmain","level":"INFO","line":2823,"msg":"build info","tid":"9888","timestamp":1714737090}
{"function":"wmain","level":"INFO","line":2830,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"9888","timestamp":1714737090,"total_threads":24}
llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from C:\Users\Maximilian\.ollama\models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = nomic-bert
llama_model_loader: - kv   1:                               general.name str              = nomic-embed-text-v1.5
llama_model_loader: - kv   2:                     nomic-bert.block_count u32              = 12
llama_model_loader: - kv   3:                  nomic-bert.context_length u32              = 2048
llama_model_loader: - kv   4:                nomic-bert.embedding_length u32              = 768
llama_model_loader: - kv   5:             nomic-bert.feed_forward_length u32              = 3072
llama_model_loader: - kv   6:            nomic-bert.attention.head_count u32              = 12
llama_model_loader: - kv   7:    nomic-bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv   8:                          general.file_type u32              = 1
llama_model_loader: - kv   9:                nomic-bert.attention.causal bool             = false
llama_model_loader: - kv  10:                    nomic-bert.pooling_type u32              = 1
llama_model_loader: - kv  11:                  nomic-bert.rope.freq_base f32              = 1000.000000
llama_model_loader: - kv  12:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  13:                tokenizer.ggml.bos_token_id u32              = 101
llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 102
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,30522]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  20:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  22:                tokenizer.ggml.cls_token_id u32              = 101
llama_model_loader: - kv  23:               tokenizer.ggml.mask_token_id u32              = 103
llama_model_loader: - type  f32:   51 tensors
llama_model_loader: - type  f16:   61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = nomic-bert
llm_load_print_meta: vocab type       = WPM
llm_load_print_meta: n_vocab          = 30522
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 2048
llm_load_print_meta: n_embd           = 768
llm_load_print_meta: n_head           = 12
llm_load_print_meta: n_head_kv        = 12
llm_load_print_meta: n_layer          = 12
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 768
llm_load_print_meta: n_embd_v_gqa     = 768
llm_load_print_meta: f_norm_eps       = 1.0e-12
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 3072
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 0
llm_load_print_meta: pooling type     = 1
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 2048
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 137M
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 136.73 M
llm_load_print_meta: model size       = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name     = nomic-embed-text-v1.5
llm_load_print_meta: BOS token        = 101 '[CLS]'
llm_load_print_meta: EOS token        = 102 '[SEP]'
llm_load_print_meta: UNK token        = 100 '[UNK]'
llm_load_print_meta: SEP token        = 102 '[SEP]'
llm_load_print_meta: PAD token        = 0 '[PAD]'
llm_load_print_meta: CLS token        = 101 '[CLS]'
llm_load_print_meta: MASK token       = 103 '[MASK]'
llm_load_print_meta: LF token         = 0 '[PAD]'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors:        CPU buffer size =    44.72 MiB
llm_load_tensors:      CUDA0 buffer size =   216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   288.00 MiB
llama_new_context_with_model: KV self size  =  288.00 MiB, K (f16):  144.00 MiB, V (f16):  144.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.00 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =    23.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =     3.50 MiB
llama_new_context_with_model: graph nodes  = 453
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"9888","timestamp":1714737092}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":8192,"slot_id":0,"tid":"9888","timestamp":1714737092}
{"function":"wmain","level":"INFO","line":3067,"msg":"model loaded","tid":"9888","timestamp":1714737092}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"23","port":"53915","tid":"9888","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":2,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53928,"status":200,"tid":"8144","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":0,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53930,"status":200,"tid":"9528","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53926,"status":200,"tid":"5772","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":1,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":4,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53927,"status":200,"tid":"9092","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53929,"status":200,"tid":"20912","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":5,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":6,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":7,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":8,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.7901ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.8137ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3891ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3891ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3891ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3891ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3888ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.8936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.8936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.3888ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.8936ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.2997ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.9595ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.2997ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.9595ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      5.2997ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      6.4547ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      2.3236ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      3.1593ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      3.1593ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      3.6841ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |        3.65ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2082ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |        3.65ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.7594ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2544ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      3.6973ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 |      4.2012ms |       127.0.0.1 | POST     "/api/embeddings"
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":12,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":201,"n_ctx":8192,"n_past":201,"n_system_tokens":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":14,"tid":"9888","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 200 |    2.2254284s |       127.0.0.1 | POST     "/api/embeddings"
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":15,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":16,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":17,"tid":"9888","timestamp":1714737092}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":20,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":183,"n_ctx":8192,"n_past":183,"n_system_tokens":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":22,"tid":"9888","timestamp":1714737092}
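
As a client-side workaround until the server stops returning 500 while the model is loading, requests can be retried with exponential backoff on 5xx responses. This is only a sketch with hypothetical helper names (`post_embeddings`, `embed_with_retry`), using just the standard library instead of aiohttp; the `post` and `sleep` parameters are injectable so the retry logic can be exercised without a live server:

```python
import time
import json
import urllib.request
import urllib.error


def post_embeddings(url, model, prompt):
    # Plain urllib POST to /api/embeddings; returns (status, parsed body or None).
    req = urllib.request.Request(
        url,
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, json.load(resp)
    except urllib.error.HTTPError as e:
        return e.code, None


def embed_with_retry(url, model, prompt, max_retries=5, base_delay=0.5,
                     post=post_embeddings, sleep=time.sleep):
    # Retry on 5xx (server likely still loading the model) with
    # exponential backoff; fail fast on any other non-200 status.
    for attempt in range(max_retries):
        status, body = post(url, model, prompt)
        if status == 200 and body is not None:
            return body.get("embedding")
        if 500 <= status < 600:
            sleep(base_delay * (2 ** attempt))
            continue
        raise RuntimeError(f"unexpected status {status}")
    raise RuntimeError(f"still failing after {max_retries} retries")
```

The same retry loop could be dropped into the `send_embedding_request` coroutine above (with `asyncio.sleep` instead of `time.sleep`) rather than returning None on error.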

Thank you for your help!

OS

No response

GPU

No response

CPU

No response

Ollama version

0.1.33

memory.graph.full="48.0 MiB" memory.graph.partial="48.0 MiB" time=2024-05-03T13:51:30.282+02:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-05-03T13:51:30.286+02:00 level=INFO source=server.go:289 msg="starting llama server" cmd="C:\\Users\\Maximilian\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Maximilian\\.ollama\\models\\blobs\\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --parallel 1 --port 53915" time=2024-05-03T13:51:30.305+02:00 level=INFO source=sched.go:340 msg="loaded runners" count=1 time=2024-05-03T13:51:30.305+02:00 level=INFO source=server.go:432 msg="waiting for llama runner to start responding" {"function":"server_params_parse","level":"INFO","line":2606,"msg":"logging to file is disabled.","tid":"9888","timestamp":1714737090} {"build":2770,"commit":"952d03d","function":"wmain","level":"INFO","line":2823,"msg":"build info","tid":"9888","timestamp":1714737090} {"function":"wmain","level":"INFO","line":2830,"msg":"system info","n_threads":12,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | ","tid":"9888","timestamp":1714737090,"total_threads":24} llama_model_loader: loaded meta data with 24 key-value pairs and 112 tensors from C:\Users\Maximilian\.ollama\models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 
llama_model_loader: - kv   0: general.architecture str = nomic-bert
llama_model_loader: - kv   1: general.name str = nomic-embed-text-v1.5
llama_model_loader: - kv   2: nomic-bert.block_count u32 = 12
llama_model_loader: - kv   3: nomic-bert.context_length u32 = 2048
llama_model_loader: - kv   4: nomic-bert.embedding_length u32 = 768
llama_model_loader: - kv   5: nomic-bert.feed_forward_length u32 = 3072
llama_model_loader: - kv   6: nomic-bert.attention.head_count u32 = 12
llama_model_loader: - kv   7: nomic-bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv   8: general.file_type u32 = 1
llama_model_loader: - kv   9: nomic-bert.attention.causal bool = false
llama_model_loader: - kv  10: nomic-bert.pooling_type u32 = 1
llama_model_loader: - kv  11: nomic-bert.rope.freq_base f32 = 1000.000000
llama_model_loader: - kv  12: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv  13: tokenizer.ggml.bos_token_id u32 = 101
llama_model_loader: - kv  14: tokenizer.ggml.eos_token_id u32 = 102
llama_model_loader: - kv  15: tokenizer.ggml.model str = bert
llama_model_loader: - kv  16: tokenizer.ggml.tokens arr[str,30522] = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  17: tokenizer.ggml.scores arr[f32,30522] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18: tokenizer.ggml.token_type arr[i32,30522] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv  20: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv  21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv  22: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv  23: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - type f32: 51 tensors
llama_model_loader: - type f16: 61 tensors
llm_load_vocab: mismatch in special tokens definition ( 7104/30522 vs 5/30522 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = nomic-bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 30522
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 768
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 12
llm_load_print_meta: n_layer = 12
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 768
llm_load_print_meta: n_embd_v_gqa = 768
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 3072
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 137M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 136.73 M
llm_load_print_meta: model size = 260.86 MiB (16.00 BPW)
llm_load_print_meta: general.name = nomic-embed-text-v1.5
llm_load_print_meta: BOS token = 101 '[CLS]'
llm_load_print_meta: EOS token = 102 '[SEP]'
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: offloading 12 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 13/13 layers to GPU
llm_load_tensors: CPU buffer size = 44.72 MiB
llm_load_tensors: CUDA0 buffer size = 216.15 MiB
.......................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 288.00 MiB
llama_new_context_with_model: KV self size = 288.00 MiB, K (f16): 144.00 MiB, V (f16): 144.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 23.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 3.50 MiB
llama_new_context_with_model: graph nodes = 453
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":448,"msg":"initializing slots","n_slots":1,"tid":"9888","timestamp":1714737092}
{"function":"initialize","level":"INFO","line":460,"msg":"new slot","n_ctx_slot":8192,"slot_id":0,"tid":"9888","timestamp":1714737092}
{"function":"wmain","level":"INFO","line":3067,"msg":"model loaded","tid":"9888","timestamp":1714737092}
{"function":"wmain","hostname":"127.0.0.1","level":"INFO","line":3270,"msg":"HTTP server listening","n_threads_http":"23","port":"53915","tid":"9888","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1581,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":3,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53928,"status":200,"tid":"8144","timestamp":1714737092}
(repeated "slot data" records for task_id 0-8 and GET /health 200 checks omitted)
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":10,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.7901ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.8137ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 5.3891ms | 127.0.0.1 | POST "/api/embeddings"
(dozens of near-identical 500 responses to POST "/api/embeddings" repeated here)
[GIN] 2024/05/03 - 13:51:32 | 500 | 3.6973ms | 127.0.0.1 | POST "/api/embeddings"
[GIN] 2024/05/03 - 13:51:32 | 500 | 4.2012ms | 127.0.0.1 | POST "/api/embeddings"
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":12,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":201,"n_ctx":8192,"n_past":201,"n_system_tokens":0,"slot_id":0,"task_id":9,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53964,"status":200,"tid":"1948","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":14,"tid":"9888","timestamp":1714737092}
[GIN] 2024/05/03 - 13:51:32 | 200 | 2.2254284s | 127.0.0.1 | POST "/api/embeddings"
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":15,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":16,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":17,"tid":"9888","timestamp":1714737092}
{"function":"launch_slot_with_data","level":"INFO","line":833,"msg":"slot is processing task","slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1843,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":0,"n_processing_slots":1,"task_id":20,"tid":"9888","timestamp":1714737092}
{"function":"log_server_request","level":"INFO","line":2744,"method":"POST","msg":"request","params":{},"path":"/embedding","remote_addr":"127.0.0.1","remote_port":53965,"status":200,"tid":"18032","timestamp":1714737092}
{"function":"update_slots","level":"INFO","line":1651,"msg":"slot released","n_cache_tokens":183,"n_ctx":8192,"n_past":183,"n_system_tokens":0,"slot_id":0,"task_id":18,"tid":"9888","timestamp":1714737092,"truncated":false}
{"function":"log_server_request","level":"INFO","line":2744,"method":"GET","msg":"request","params":{},"path":"/health","remote_addr":"127.0.0.1","remote_port":53988,"status":200,"tid":"11896","timestamp":1714737092}
{"function":"process_single_task","level":"INFO","line":1513,"msg":"slot data","n_idle_slots":1,"n_processing_slots":0,"task_id":22,"tid":"9888","timestamp":1714737092}
```

Thank you for your help!

### OS

_No response_

### GPU

_No response_

### CPU

_No response_

### Ollama version

0.1.33
GiteaMirror added the bug, windows labels 2026-04-22 06:23:18 -05:00
@maximiliangugler commented on GitHub (May 3, 2024):

OS: Windows
GPU: NVIDIA / CUDA
CPU: Intel

@dhiltgen commented on GitHub (May 3, 2024):

@maxggl did you by any chance get a response body in the 500?

I'm trying to repro but haven't quite found the combination of factors leading to this failure mode. Can you share any more insight into the sequence of API calls and bodies that results in the failure? My attempts against `/api/embeddings` always block until the model is loaded.

@maximiliangugler commented on GitHub (May 3, 2024):

@dhiltgen Sure,
this is an example of the first of many (300 in this case) asynchronous API calls made to `http://localhost:11434/api/embeddings`.

Request body:

```
{'model': 'nomic-embed-text', 'prompt': 'operating or financial flexibility could all be negatively affected, especially if lenders were to impose new operating or financial covenants. \nOur operations may also be impacted by regulations affecting capital flows, financial markets or financial institutions, which can limit \nour ability to manage and deploy our liquidity or increase our funding costs. Any such events could have a material adverse effect on our \nbusiness and financial condition.'}
```

Crucially, the responses of the failed requests with status code 500 have this body:

```
{'error': 'server busy, please try again. maximum pending requests exceeded'}
```

I can also confirm that if fewer asynchronous API calls are made to `/api/embeddings` at once (say 10 or 15), the route blocks as intended and only returns successful (200) responses after the model has finished loading (as far as the logs show).

So does this mean that version 0.1.33 introduced a limit on how many pending requests Ollama can handle?

Thank you for the effort!
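(As a client-side workaround until the queue depth is configurable, you can cap the number of in-flight requests and retry on the "server busy" body. This is a minimal sketch; the helper names `bounded_gather`, `with_retry`, and `ServerBusyError` are illustrative, not part of Ollama or langchain.)

```python
import asyncio
import random

class ServerBusyError(Exception):
    """Illustrative: raised when a response carries the
    'maximum pending requests exceeded' error body."""

async def bounded_gather(coro_fns, limit=8):
    """Run coroutine factories with at most `limit` requests in flight,
    so the server's pending queue is never overflowed."""
    sem = asyncio.Semaphore(limit)

    async def run(fn):
        async with sem:
            return await fn()

    return await asyncio.gather(*(run(fn) for fn in coro_fns))

async def with_retry(fn, attempts=5, base_delay=0.2):
    """Retry a request on ServerBusyError with exponential backoff and jitter."""
    for i in range(attempts):
        try:
            return await fn()
        except ServerBusyError:
            if i == attempts - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** i) * (1 + random.random()))
```

With this, each embedding call would be wrapped as `with_retry(lambda: embed(text))` and the whole batch dispatched through `bounded_gather`, keeping the number of queued requests on the server below the limit.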

@maximiliangugler commented on GitHub (May 3, 2024):

Taking a look at the changes made from 0.1.32 to 0.1.33, I discovered this line in the newly added [sched.go](https://github.com/ollama/ollama/blob/main/server/sched.go), line 48:

```go
var maxQueuedRequests = 10 // TODO configurable
```

This probably has something to do with it, right?
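(For illustration, the behavior implied by that line — reject immediately instead of blocking once more than `maxQueuedRequests` requests are pending — can be sketched with a non-blocking bounded queue. This is a Python analogue, not the actual Go implementation in sched.go.)

```python
import queue

MAX_QUEUED_REQUESTS = 10  # analogous to maxQueuedRequests in sched.go

class ServerBusy(Exception):
    """Illustrative: maps to the 500 'server busy' response."""

pending = queue.Queue(maxsize=MAX_QUEUED_REQUESTS)

def enqueue(req):
    """Admit a request, or reject immediately when the queue is full
    rather than blocking the caller."""
    try:
        pending.put_nowait(req)
    except queue.Full:
        raise ServerBusy(
            "server busy, please try again. maximum pending requests exceeded"
        ) from None
```

This matches what the logs show: the eleventh and later concurrent calls fail within milliseconds instead of waiting for the model to load.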

@dhiltgen commented on GitHub (May 3, 2024):

Yes, this looks like the explanation. I'll get a PR up to refine this to both return a better HTTP status code, and also make the queue depth adjustable.
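(A plausible shape for that "better HTTP status code" is 503 Service Unavailable with a `Retry-After` hint; this sketch is an assumption about the fix, not the actual PR, and `busy_response` is a hypothetical helper.)

```python
from http import HTTPStatus

def busy_response(retry_after_s=1):
    """Hypothetical: what the embeddings route could return when the
    pending-request queue is full, instead of a generic 500."""
    status = HTTPStatus.SERVICE_UNAVAILABLE  # 503 signals a temporary overload
    headers = {"Retry-After": str(retry_after_s)}  # hint for well-behaved clients
    body = {"error": "server busy, please try again. maximum pending requests exceeded"}
    return status, headers, body
```

A 503 lets clients distinguish "try again shortly" from a genuine server error, which a blanket 500 cannot convey.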

Reference: github-starred/ollama#28320