[GH-ISSUE #11782] ollama leaks chat history between api and cli #69870

Open
opened 2026-05-04 19:39:22 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @mohammedgomaa on GitHub (Aug 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11782

What is the issue?

After testing with Cline, Roo, and Kilo Code, I can find traces of my testing prompts and the coding agents' system prompts.

[Two screenshots showing the leaked prompts: https://github.com/user-attachments/assets/829734b7-bef3-42ad-867b-4637178490c1 and https://github.com/user-attachments/assets/c05bbc97-461f-49c6-b62f-192f50306095]

Qwen3-4B-Thinking-2507-GGUF:IQ4_NL-ctx-230k
Today at 1:15 PM

Thought for 2 minutes
Okay, the user just said "hi" after a series of interactions about setting up a Python project structure and searching for LLM documentation. Let me recap the context.

First, they wanted to create a directory structure for a Python project. I provided a structure and then they asked to "cleat it" (probably a typo for "clean it") and then "ok creat it". I used the execute_command tool to create the directories.

Then they asked to "lis all mcps" (list all MCPs), and I responded with the connected servers: context7, memcord, searxng.

Next, they wanted to "search the web and find how to build llm inferance proxy" (probably a typo for "inference proxy"). I used the searxng_web_search tool for that.

Then they asked to "find docs for litllm" (another typo for "litellm"). I used the resolve-library-id tool with "litellm" as the library name.

Now, they just said "hi". Since the previous interactions were about setting up a Python project and looking for LLM documentation, I should respond with a friendly greeting and offer help with their next steps.

I need to be concise but helpful. Since they might be continuing with the LLM setup, I can ask how I can assist them with the next steps. But the user just said "hi", so maybe they're checking in.

I should follow the rules: don't start with "Great", "Certainly", etc. Be direct and technical. The last response I sent was about the MCP servers, so maybe they want to continue with the LLM setup.

Wait, the user's last message is "hi". So I should respond with a greeting and offer assistance. Let me check the rules again: "You are STRICTLY FORBIDDEN from starting your messages with 'Great', 'Certainly', 'Okay', 'Sure'." So I can't use those. Just a friendly, professional response.

Possible response: "Hi! How can I assist you today with your Python project or LLM setup?" But need to be concise.

Wait, in the previous steps, they were looking for litellm docs. Maybe they want to proceed with creating an inference proxy using litellm.

But they just said "hi", so I should respond with a greeting and ask how I can help.

I'll go with: "Hi! How can I assist you with your Python project or LLM setup today?"

But the rules say to avoid being too conversational. Wait, the user's custom instructions say to speak in English, but the previous messages were technical. Let me check the example responses.

In the initial setup, when they said "hi", I responded with "Hi there! 😊 How can I help you today?".

But the user has been using more technical terms, so maybe a slightly more technical greeting.

Wait, the user's last message is "hi", so the assistant should respond with a greeting and offer help.

Let me structure it as: "Hi! How can I help you today with your Python project or LLM setup?"

But need to follow the rules. The previous response when they said "hi" was "Hi there! 😊 How can I help you today?"

But the user might be expecting a more specific response. Let me check the conversation history.

The user has been working on creating a Python project structure and searching for LLM documentation. They asked to "search the web and find how to build llm inferance proxy" and then "find docs for litllm".

So the next step might be to get the docs for litellm.

Wait, the user just said "hi" after that. So I need to respond appropriately.

I'll respond with a greeting and a question to move forward.

But the user might be testing if I'm still there. So a simple "Hi! How can I assist you today?" would be good.

But the rules say: "Do not end attempt_completion result with a question or request to engage in further conversation!" So I shouldn't end with a question.

Wait, the user said "hi", so I need to respond with a greeting and maybe offer help.

The previous response when user said "hi" was "Hi there! 😊 How can I help you today?"

But the user might want to proceed with the LLM setup.

Wait, the user's last action was "find docs for litllm", which I tried to resolve with the context7 server.

So maybe the user wants to get the docs now.

But they just said "hi", so I should respond with a greeting and ask how I can help.

I think the best response is: "Hi! How can I assist you with your Python project or LLM setup today?"

But need to avoid the forbidden phrases.

Let me check the example responses in the system message. The initial response to "hi" was "Hello! 😊 How can I assist you today?"

So I'll go with a similar response.

Wait, the user has been in architect mode, so maybe they want to proceed with the project setup.

But the user's message is just "hi", so a simple greeting and offer of help.

I'll write: "Hi! How can I help you today with your Python project or LLM setup?"

But the rules say to be direct and not too conversational. Let me make it a bit more technical.

Alternatively, since they were looking for litellm docs, maybe they want to get the docs.

Wait, the user said "hi" after I used the resolve-library-id tool for litellm. So they might be checking in.

I think the best response is a greeting and a question about how to proceed.

So, the response should be: "Hi! How can I assist you with your Python project or LLM setup today?"

But the user might not want to ask a question. Let me check the rules again.

The rules say: "NEVER end attempt_completion result with a question or request to engage in further conversation!"

So I need to end with a statement, not a question.

Wait, the user said "hi", so the assistant should respond with a greeting and then a statement of what they can do.

Wait, the previous examples show that when the user says "hi", the assistant responds with "Hi there! 😊 How can I help you today?"

But the user's custom instructions say to avoid starting with "Great", "Certainly", etc.

So the response should be: "Hi! How can I help you today?"

But the user might want to proceed with the LLM setup.

Alternatively, since the user was looking for litellm docs, maybe the next step is to get the docs.

But they said "hi", so a greeting.

Let me structure it as: "Hi! I'm ready to help with your Python project or LLM setup. How can I assist you today?"

But the rule says not to end with a question.

Hmm, the problem is that the user just said "hi", so the assistant needs to respond with a greeting and then offer help without a question.

Wait, the system message says: "NEVER end attempt_completion result with a question or request to engage in further conversation!"

So the response should be a statement, not a question.

So, I'll write: "Hi! I'm ready to help with your Python project or LLM setup."

But the user might have a specific need.

Alternatively, the user might want to proceed with creating the project structure.

Wait, the user's last action was to ask for the docs for litellm, so they might be expecting the docs.

But they said "hi", so it's a greeting.

I think the safest response is to greet them and state that I'm ready to help with the next steps.

So: "Hi! I'm ready to assist with your Python project or LLM setup. How can I help?"

Wait, but the rule says no questions. So maybe: "Hi! I'm ready to assist with your Python project or LLM setup."

But the user might need to specify what they want.

Alternatively, the user might have a typo and want to ask for something else.

Given the context, I'll respond with a greeting and a statement of readiness.

Final response: "Hi! I'm ready to help with your Python project or LLM setup."

But the user might want more specific help.

Wait, in the previous messages, the user asked to "creat it" (create the directory structure), so after I created the directories, they asked to "lis all mcps" (list all MCPs), then searched the web for LLM proxy, then asked for litellm docs.

Now they said "hi", so maybe they're checking in.

I think the best response is a greeting and a statement that I'm here to help.

So: "Hi! I'm here to help with your Python project or LLM setup."

But the system message says to be direct and technical.

Let me check the previous responses. When the user said "hi" earlier, the assistant responded with "Hi there! 😊 How can I help you today?"

But the user might want a more concise response now.

Given the instructions, I'll go with: "Hi! How can I assist you today?"

But the rule says no questions. Wait, the user's last message is "hi", so the assistant's response should be a greeting without a question.

Wait, the user's previous message was "find docs for litllm", and the assistant used the resolve-library-id tool. Then the user said "hi".

So the user might be confirming they're here.

I think the correct response is to greet them and state that I can help with the next steps.

But to comply with the rule of not ending with a question, I'll avoid any question.

So: "Hi! I'm ready to assist with your Python project or LLM setup."

Yes, that's a statement, not a question.

Let me check the example responses in the system message. The initial response to "hi" was "Hello! 😊 How can I assist you today?" which is a question. But the rule says to avoid ending with a question.

Wait, the rule says: "NEVER end attempt_completion result with a question or request to engage in further conversation!"

So the response should not end with a question.

In the user's first message, they said "hi", and the assistant responded with "Hello! 😊 How can I assist you today?" which ends with a question. But the rule says not to do that.

Wait, the user's custom instructions say: "You are STRICTLY FORBIDDEN from starting your messages with 'Great', 'Certainly', 'Okay', 'Sure'."

But the example response starts with "Hello!".

Hmm, maybe the rule is that the final response shouldn't have a question, but the example does.

Wait, the user's instruction says: "NEVER end attempt_completion result with a question or request to engage in further conversation!"

So the response should be a statement, not a question.

So I'll write: "Hi! I'm here to help with your Python project or LLM setup."

But the user might want to proceed with the next step.

Alternatively, since they were looking for litellm docs, maybe the next step is to get the documentation.

(Note: Qwen3-4B-Thinking-2507-GGUF:IQ4_NL-ctx-230k is hf.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF:IQ4_NL with context set to 230k.)

But they said "hi", so it's a greeting.

I think the best response is: "Hi! Ready to help with your Python project or LLM setup."

But the user might need to specify what they want.

Wait, the user's last action was asking for litellm docs, so after using the resolve-library-id tool, they might want the docs.

But they said "hi", so maybe they're just checking in.

Given the time, I'll go with a simple greeting and a statement of readiness.

So: "Hi! I'm ready to help with your Python project or LLM setup."

But the user might have a typo.

Wait, the user's message is "hi", so the response should be a greeting.

I think that's it.

Hi! I'm ready to help with your Python project or LLM setup.
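To make a leak like the one above easier to confirm, here is a minimal canary-based repro sketch (my own suggestion, not part of the original report): plant a distinctive marker string via the HTTP API, then open a fresh CLI or API session and ask the model what was discussed earlier; if the marker comes back, history crossed sessions. The `/api/chat` endpoint shape matches Ollama's documented API, but the helper names (`seed_payload`, `probe_payload`, `leaked`) and the canary value are hypothetical illustrations.

```python
import json

# Hypothetical canary: a string the model is extremely unlikely to emit by chance.
CANARY = "CANARY-7f3a9c"

def seed_payload(model: str) -> str:
    """Build the /api/chat request body that plants the canary via the HTTP API."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": f"Remember this token: {CANARY}"}],
        "stream": False,
    })

def probe_payload(model: str) -> str:
    """Build a fresh-session probe; a clean session should know nothing about the canary."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "What did we talk about earlier?"}],
        "stream": False,
    })

def leaked(reply_text: str) -> bool:
    """True if the probe reply contains the canary, i.e. history crossed sessions."""
    return CANARY in reply_text
```

Send the seed payload with `curl http://localhost:11434/api/chat -d @-`, then send the probe from a separate client (or `ollama run`) and pass the reply text to `leaked()`.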

Relevant settings

[Service]

#Environment="CUDA_VISIBLE_DEVICES=0"
#Environment="CUDA_VISIBLE_DEVICES=1"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="CUDA_MEMORY_FRACTION=0.95"
Environment="OLLAMA_GPU_MEMORY_FRACTION=0.95"
Environment="OLLAMA_KEEP_ALIVE=5m"
Environment="OLLAMA_NUM_THREAD=6"
Environment="OLLAMA_KV_CACHE_TYPE=q4_0"
Environment="OLLAMA_CONTEXT_LENGTH=8000"
#Environment="OLLAMA_GPU_ALLOW_GROWTH=true"
Environment="OLLAMA_NEW_ENGINE=true"
Environment="OLLAMA_FLASH_ATTENTION=true"
Environment="OLLAMA_NOHISTORY=true"
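For anyone reproducing this setup: on a systemd-based install, `Environment=` lines like the ones above typically live in a drop-in override rather than the packaged unit file, so they survive upgrades. A sketch, assuming the service is named `ollama` (the two variables shown are examples taken from the block above):

```shell
# /etc/systemd/system/ollama.service.d/override.conf
# Create or edit this drop-in with: sudo systemctl edit ollama
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_NOHISTORY=true"

# Apply the changes:
#   sudo systemctl daemon-reload
#   sudo systemctl restart ollama
```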

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.11.3

GiteaMirror added the bug label 2026-05-04 19:39:22 -05:00