[GH-ISSUE #1462] --test ignores --model #62823

Closed
opened 2026-05-03 10:25:45 -05:00 by GiteaMirror · 4 comments

Originally created by @kfsone on GitHub (Dec 11, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1462

Running `litellm --model starling-lm --test` fails: the test request is sent with a hardcoded `gpt-3.5-turbo` instead of the model given on the command line:

```
(venv) root@afa266a7b553:/workspace# litellm --model starling-lm --test
/workspace/venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_list" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(

LiteLLM: Making a test ChatCompletions request to your proxy
Traceback (most recent call last):
  File "/workspace/venv/bin/litellm", line 8, in <module>
    sys.exit(run_server())
  File "/workspace/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/workspace/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/workspace/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/litellm/proxy/proxy_cli.py", line 198, in run_server
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
  File "/workspace/venv/lib/python3.10/site-packages/openai/_utils/_utils.py", line 303, in wrapper
    return func(*args, **kwargs)
  File "/workspace/venv/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 598, in create
    return self._post(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1086, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 846, in request
    return self._request(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 884, in _request
    return self._retry_request(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 956, in _retry_request
    return self._request(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 884, in _request
    return self._retry_request(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 956, in _retry_request
    return self._request(
  File "/workspace/venv/lib/python3.10/site-packages/openai/_base_client.py", line 898, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'detail': 'OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable\n\nTraceback (most recent call last):\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/llms/openai.py", line 266, in acompletion\n    openai_aclient = AsyncOpenAI(api_key=api_key, base_url=api_base, http_client=litellm.aclient_session, timeout=timeout, max_retries=max_retries)\n  File "/workspace/venv/lib/python3.10/site-packages/openai/_client.py", line 303, in __init__\n    raise OpenAIError(\nopenai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/main.py", line 187, in acompletion\n    response = await init_response\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/llms/openai.py", line 278, in acompletion\n    raise OpenAIError(status_code=500, message=f"{str(e)}")\nlitellm.llms.openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/proxy/proxy_server.py", line 1033, in chat_completion\n    response = await litellm.acompletion(**data)\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/utils.py", line 1682, in wrapper_async\n    raise e\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/utils.py", line 1626, in wrapper_async\n    result = await original_function(*args, **kwargs)\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/main.py", line 197, in acompletion\n    raise exception_type(\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/utils.py", line 4973, in exception_type\n    raise e\n  File "/workspace/venv/lib/python3.10/site-packages/litellm/utils.py", line 4115, in exception_type\n    raise APIError(\nlitellm.exceptions.APIError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable\n'}

```

(same for `litellm --test --model ...`)
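
The traceback shows why: `run_server()` in `proxy_cli.py` (line 198) builds the test request with a literal `model="gpt-3.5-turbo"`, so the `--model` flag never reaches the test call. A minimal sketch of the kind of change that would fix it (the function and parameter names here are illustrative assumptions, not the actual litellm source):

```
# Hypothetical sketch -- `run_test_request`, `model`, and `api_base` are
# illustrative names, not litellm's actual internals. The point is that the
# test call should use the model passed via --model instead of a literal
# "gpt-3.5-turbo".
from openai import OpenAI

def run_test_request(model: str, api_base: str = "http://0.0.0.0:8000") -> None:
    # The proxy in this repro has no auth configured, so a placeholder key is fine.
    client = OpenAI(base_url=api_base, api_key="placeholder")
    response = client.chat.completions.create(
        # Fall back to the old default only when --model was not given.
        model=model or "gpt-3.5-turbo",
        messages=[{"role": "user", "content": "what llm are you"}],
    )
    print(response)
```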

/workspace/config.yaml:

```
model_list:
  - model_name: starling-lm
    litellm_params:
      model: ollama/starling-lm
      api_base: http://192.168.86.26:11434
      api_key: "none"
      rpm: 100
  - model_name: vicuna:7b-16k
    litellm_params:
      model: ollama/vicuna:7b-16k
      api_base: http://192.168.86.26:11434
      api_key: "none"
      rpm: 100

litellm_settings: # module level litellm settings - https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py
  drop_params: True
  set_verbose: True
```
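
As an aside, the aliases in this model_list route correctly when exercised through litellm's Router directly (a minimal sketch, not part of the original repro; it assumes the Ollama host above is reachable):

```
# Minimal sketch using litellm.Router directly -- not from the original
# report. Mirrors one entry of the config above.
from litellm import Router

router = Router(model_list=[
    {
        "model_name": "starling-lm",
        "litellm_params": {
            "model": "ollama/starling-lm",
            "api_base": "http://192.168.86.26:11434",
            "rpm": 100,
        },
    },
])

# Routing happens by the configured alias ("starling-lm"), which is exactly
# what the proxy's --test path skips by hardcoding "gpt-3.5-turbo".
response = router.completion(
    model="starling-lm",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)
```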

litellm command line:

```
litellm --config /workspace/config.yaml
```

Output from litellm starting up:

```
(venv) root@afa266a7b553:/workspace# litellm --config /workspace/config.yaml
/workspace/venv/lib/python3.10/site-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_list" has conflict with protected namespace "model_".

You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
  warnings.warn(
INFO:     Started server process [130]
INFO:     Waiting for application startup.

#------------------------------------------------------------#
#                                                            #
#              'I don't like how this works...'               #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                            #
#------------------------------------------------------------#

 Thank you for using LiteLLM! - Krrish & Ishaan



Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new


LiteLLM: Proxy initialized with Config, Set models:
    starling-lm
    vicuna:7b-16k
LiteLLM.Router:
 Initialized Model List [{'model_name': 'starling-lm', 'litellm_params': {'model': 'ollama/starling-lm-ModelID-ollama/starling-lmhttp://192.168.86.26:11434100', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}, 'model_info': {'id': '26263866-4b46-471d-a4eb-41826662724c'}}, {'model_name': 'vicuna:7b-16k', 'litellm_params': {'model': 'ollama/vicuna:7b-16k', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}}]
LiteLLM.Router:
 Initialized Model List [{'model_name': 'starling-lm', 'litellm_params': {'model': 'ollama/starling-lm-ModelID-ollama/starling-lmhttp://192.168.86.26:11434100', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}, 'model_info': {'id': '26263866-4b46-471d-a4eb-41826662724c'}}, {'model_name': 'vicuna:7b-16k', 'litellm_params': {'model': 'ollama/vicuna:7b-16k-ModelID-ollama/vicuna:7b-16khttp://192.168.86.26:11434100', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}, 'model_info': {'id': '37bc152d-f977-4d30-b021-d55fbc5a828a'}}]
LiteLLM.Router: Intialized router with Routing strategy: simple-shuffle


LiteLLM: Test your local proxy with: "litellm --test" This runs an openai.ChatCompletion request to your proxy [In a new terminal tab]

LiteLLM: Curl Command Test for your local proxy

    curl --location 'http://0.0.0.0:8000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data ' {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
        "role": "user",
        "content": "what llm are you"
        }
    ]
    }'




Docs: https://docs.litellm.ai/docs/simple_proxy

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

Successful query forwarded to ollama:
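
For reference, a curl like the following (reconstructed from the logged request body, so the exact flags are an assumption) triggers this successful path by using the configured alias instead of the hardcoded `gpt-3.5-turbo`:

```
curl --location 'http://0.0.0.0:8000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
    "model": "starling-lm",
    "messages": [
        {
        "role": "user",
        "content": "what llm are you"
        }
    ]
}'
```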

```
LiteLLM.Router: Inside async function with retries: args - (); kwargs - {'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.81.0', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '142'}, 'body': {'model': 'starling-lm', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}, 'user': None, 'metadata': {'user_api_key': None, 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.81.0', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '142'}, 'model_group': 'starling-lm'}, 'request_timeout': 600, 'model': 'starling-lm', 'messages': [{'role': 'user', 'content': 'what llm are you'}], 'original_function': <bound method Router._acompletion of <litellm.router.Router object at 0x7f04918cf2e0>>, 'num_retries': 3}
LiteLLM.Router: async function w/ retries: original_function - <bound method Router._acompletion of <litellm.router.Router object at 0x7f04918cf2e0>>
LiteLLM.Router: Inside _acompletion()- model: starling-lm; kwargs: {'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.81.0', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '142'}, 'body': {'model': 'starling-lm', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}, 'user': None, 'metadata': {'user_api_key': None, 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.81.0', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '142'}, 'model_group': 'starling-lm'}, 'request_timeout': 600}
LiteLLM.Router: initial list of deployments: [{'model_name': 'starling-lm', 'litellm_params': {'model': 'ollama/starling-lm-ModelID-ollama/starling-lmhttp://192.168.86.26:11434100', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}, 'model_info': {'id': '26263866-4b46-471d-a4eb-41826662724c'}}]
get cache: cache key: 07-47:cooldown_models
get cache: cache result: None
LiteLLM.Router: retrieve cooldown models: []
LiteLLM.Router: cooldown deployments: []
LiteLLM.Router: healthy deployments: length 1 [{'model_name': 'starling-lm', 'litellm_params': {'model': 'ollama/starling-lm-ModelID-ollama/starling-lmhttp://192.168.86.26:11434100', 'api_base': 'http://192.168.86.26:11434', 'api_key': 'none', 'rpm': 100}, 'model_info': {'id': '26263866-4b46-471d-a4eb-41826662724c'}}]
LiteLLM.Router:
rpms [100]
LiteLLM.Router:
 weights [1.0]
LiteLLM.Router:
 selected index, 0
callback: <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x7f04918cf2e0>>
callback: <bound method Router.deployment_callback of <litellm.router.Router object at 0x7f04918cf2e0>>
litellm.cache: None
kwargs[caching]: False; litellm.cache: None
kwargs[caching]: False; litellm.cache: None

LiteLLM completion() model= starling-lm; provider = ollama

LiteLLM: Params passed to completion() {'functions': [], 'function_call': '', 'temperature': None, 'top_p': None, 'stream': None, 'max_tokens': None, 'presence_penalty': None, 'frequency_penalty': None, 'logit_bias': None, 'user': None, 'response_format': None, 'seed': None, 'tools': None, 'tool_choice': None, 'max_retries': 0, 'custom_llm_provider': 'ollama', 'model': 'starling-lm', 'n': None, 'stop': None}

LiteLLM: Non-Default params passed to completion() {'max_retries': 0}
self.optional_params: {}
PRE-API-CALL ADDITIONAL ARGS: {'api_base': 'http://192.168.86.26:11434/api/generate', 'complete_input_dict': {'model': 'starling-lm', 'prompt': 'what llm are you'}}


POST Request Sent from LiteLLM:
curl -X POST \
http://192.168.86.26:11434/api/generate \
-d '{'model': 'starling-lm', 'prompt': 'what llm are you'}'


Async Wrapper: Completed Call, calling async_success_handler
Logging Details LiteLLM-Success Call
success callbacks: [<bound method Router.deployment_callback of <litellm.router.Router object at 0x7f04918cf2e0>>]
LiteLLM.Router: Async Response: ModelResponse(id='chatcmpl-3ce1d54e-7acf-416b-9c22-85eeb6189572', choices=[Choices(finish_reason='stop', index=0, message=Message(content=' I am an AI language model known as OpenAI GPT-4, designed to assist users with various tasks, including answering questions and providing information.', role='assistant'))], created=1702280848, model='ollama/starling-lm', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=5, completion_tokens=30, total_tokens=35))
success callbacks: Running Custom Callback Function
final response: ModelResponse(id='chatcmpl-3ce1d54e-7acf-416b-9c22-85eeb6189572', choices=[Choices(finish_reason='stop', index=0, message=Message(content=' I am an AI language model known as OpenAI GPT-4, designed to assist users with various tasks, including answering questions and providing information.', role='assistant'))], created=1702280848, model='ollama/starling-lm', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=5, completion_tokens=30, total_tokens=35))
get cache: cache key: ollama/starling-lm:tpm:07-47
get cache: cache result: None
INFO:     127.0.0.1:44184 - "POST /chat/completions HTTP/1.1" 200 OK
set cache: key: ollama/starling-lm:tpm:07-47; value: 35
get cache: cache key: ollama/starling-lm:rpm:07-47
get cache: cache result: None
set cache: key: ollama/starling-lm:rpm:07-47; value: 1
Custom Logger - final response object: {'id': 'chatcmpl-3ce1d54e-7acf-416b-9c22-85eeb6189572', 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': ' I am an AI language model known as OpenAI GPT-4, designed to assist users with various tasks, including answering questions and providing information.', 'role': 'assistant'}}], 'created': 1702280848, 'model': 'ollama/starling-lm', 'object': 'chat.completion', 'system_fingerprint': None, 'usage': {'prompt_tokens': 5, 'completion_tokens': 30, 'total_tokens': 35}}
Async success callbacks: []
```

@igorschlum commented on GitHub (Dec 11, 2023):

Hi @kfsone

Could you introduce the issue with an explanation for a human? I would like to help, but I don't understand the issue :-)


@mxyng commented on GitHub (Dec 11, 2023):

This looks like an issue for [litellm](https://github.com/BerriAI/litellm).


@kfsone commented on GitHub (Dec 11, 2023):

What the heck -- @mxyng is correct. I'm not sure how I goofed that up, sorry.


@kfsone commented on GitHub (Dec 11, 2023):

Checking my browser history, it shows I originated the issue post at litellm?
![image](https://github.com/jmorganca/ollama/assets/323009/9d9d629a-751a-4b2f-a753-96187cc3a4f9)

Reference: github-starred/ollama#62823