[GH-ISSUE #10950] OLLAMA_DEBUG=1 Not Showing Prompts in Logs #53724

Closed
opened 2026-04-29 04:35:53 -05:00 by GiteaMirror · 2 comments

Originally created by @Manon-56 on GitHub (Jun 2, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10950

What is the issue?

As mentioned in issue #2449 and other related issues, setting OLLAMA_DEBUG=1 as an environment variable should add the prompts and responses to the logs. However, neither is present in my logs. I have searched the issues but couldn't find a similar problem. Here are the steps I've taken (a Docker sketch follows the list):

  1. Set OLLAMA_DEBUG=1 as an environment variable.
  2. Restarted Ollama (docker down and then docker up).
  3. Ran ollama serve and sent a prompt.
  4. Displayed the logs. The word "DEBUG" is present, but I could not find the prompt, or even part of the prompt (possibly tokenized), using grep.
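
For reference, a minimal sketch of how the variable is typically passed to the container; the image, container name, and compose layout below are illustrative assumptions, not taken from the report:

```shell
# docker run: pass the variable with -e (container name "ollama" is illustrative)
docker run -d -e OLLAMA_DEBUG=1 -p 11434:11434 --name ollama ollama/ollama

# docker compose: the equivalent goes under the service's environment block, e.g.
#   environment:
#     - OLLAMA_DEBUG=1
# then recreate the container so the new value is picked up
docker compose down && docker compose up -d
```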

Ollama version is 0.9.0, running in Docker.
Models tested: deepseek-r1 and llama3.1.

There's certainly something I am missing. Thanks in advance for your help!

Relevant log output

level=INFO source=routes.go:1234 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434/ OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:4 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:2 OLLAMA_ORIGINS:[http://localhost/ https://localhost/ http://localhost:* https://localhost:* http://127.0.0.1/ https://127.0.0.1/ http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0/ https://0.0.0.0/ http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
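
The OLLAMA_DEBUG:DEBUG entry in the config line above shows that debug-level logging did take effect. A quick way to double-check the raw variable inside the running container (sketch only; the container name "ollama" is an assumption):

```shell
# inspect the environment of the running container
docker exec ollama env | grep OLLAMA_DEBUG
```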

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

0.9.0

GiteaMirror added the bug label 2026-04-29 04:35:53 -05:00

@rick-github commented on GitHub (Jun 2, 2025):

OLLAMA_DEBUG=2

#10650
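
In other words, prompt/response logging requires the more verbose level 2 rather than 1 (see #10650). A hedged sketch of applying it in Docker and then searching the logs; the container name and grep pattern are illustrative:

```shell
# recreate the container with the more verbose debug level
docker run -d -e OLLAMA_DEBUG=2 -p 11434:11434 --name ollama ollama/ollama

# send a request, then search the container logs for the prompt text
docker logs ollama 2>&1 | grep -i "prompt"
```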


@Manon-56 commented on GitHub (Jun 3, 2025):

This works, thanks a lot!

Reference: github-starred/ollama#53724