[GH-ISSUE #2941] Global Configuration Variables for Ollama #48317

Closed
opened 2026-04-28 07:42:22 -05:00 by GiteaMirror · 13 comments

Originally created by @bkawakami on GitHub (Mar 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/2941

I am currently using Ollama for running LLMs locally and am greatly appreciative of the functionality it offers. However, I've come across a point of confusion regarding the global configuration of the Ollama environment, especially when it comes to setting it up for different use cases.

Could you provide more detailed information or documentation on the following aspects:

  1. What are all the global configuration variables available for Ollama, and where can I find a comprehensive list?
  2. Is there a way to set these configurations globally via a YAML file or a similar approach, rather than setting individual environment variables?
  3. If YAML or similar file-based configurations are possible, could you provide an example of how to structure this file for different scenarios (e.g., different models, host configurations)?
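
For context, Ollama reads its configuration from environment variables only, so the closest thing to a file-based setup is an env file sourced before launching the server. A rough sketch follows; the file name and values are illustrative, not an Ollama convention:

```bash
# Sketch: collect settings in an env file and export them before starting
# the server. "ollama.env" and the values below are placeholders.
cat > ollama.env <<'EOF'
OLLAMA_HOST=0.0.0.0:11434
OLLAMA_MODELS=/srv/ollama/models
OLLAMA_KEEP_ALIVE=10m
EOF

set -a            # export every variable defined while sourcing
source ollama.env
set +a
ollama serve
```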

@bmizerany commented on GitHub (Mar 6, 2024):

Thank you for the feedback.

The environment variables accepted for configuration of Ollama may be found throughout https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server. I do see now it is hard to get a quick glance at them all, and so I'll open an issue to add environment docs to `ollama -h` and possibly the wiki.

Ollama currently has no plans to support a YAML config file.

I hope this helps. Please reopen if I missed something.
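
For reference, the linked FAQ describes setting these variables through the operating system's service manager. A condensed sketch of that approach (the values are illustrative):

```bash
# Linux (systemd install): add an override, then restart the service.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# macOS (app install): set the variable for launchd, then restart the app.
launchctl setenv OLLAMA_HOST "0.0.0.0"
```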


@ridvan70 commented on GitHub (May 29, 2024):

You can see a list of environment variables at

- [https://github.com/ollama/ollama/blob/main/envconfig/config.go](https://github.com/ollama/ollama/blob/main/envconfig/config.go)

until the docs have been enhanced on this topic.


@nikhil-swamix commented on GitHub (Aug 31, 2024):

### No help anywhere!

Due to some urgent requirements for automated coding, I compiled this from the source code via RAG; hope a wandering soul discovers it.

| Variable | Default Value | Description + Effect + Scenario |
|----------|---------------|----------------------------------|
| OLLAMA_HOST | "http://127.0.0.1:11434" | Configures the host and scheme for the Ollama server. Effect: Determines the URL used for connecting to the Ollama server. Scenario: Useful when deploying Ollama in a distributed environment or when you need to expose the service on a specific network interface. |
| OLLAMA_ORIGINS | [localhost, 127.0.0.1, 0.0.0.0] + app://, file://, tauri:// | Configures allowed origins for CORS. Effect: Controls which origins are allowed to make requests to the Ollama server. Scenario: Critical when integrating Ollama with web applications to prevent unauthorized access from different domains. |
| OLLAMA_MODELS | $HOME/.ollama/models | Sets the path to the models directory. Effect: Determines where model files are stored and loaded from. Scenario: Useful for managing disk space on different drives or setting up shared model repositories in multi-user environments. |
| OLLAMA_KEEP_ALIVE | 5 minutes | Sets how long models stay loaded in memory. Effect: Controls the duration models remain in memory after use. Scenario: Longer durations improve response times for frequent queries but increase memory usage. Shorter durations free up resources but may increase initial response times. |
| OLLAMA_DEBUG | false | Enables additional debug information. Effect: Increases verbosity of logging and debugging output. Scenario: Invaluable for troubleshooting issues or understanding the system's behavior during development or deployment. |
| OLLAMA_FLASH_ATTENTION | false | Enables experimental flash attention feature. Effect: Activates an experimental optimization for attention mechanisms. Scenario: Can potentially improve performance on compatible hardware but may introduce instability. |
| OLLAMA_NOHISTORY | false | Disables readline history. Effect: Prevents command history from being saved. Scenario: Useful in security-sensitive environments where command history should not be persisted. |
| OLLAMA_NOPRUNE | false | Disables pruning of model blobs on startup. Effect: Keeps all model blobs, potentially increasing disk usage. Scenario: Helpful when you need to maintain all model versions for compatibility or rollback purposes. |
| OLLAMA_SCHED_SPREAD | false | Allows scheduling models across all GPUs. Effect: Enables multi-GPU usage for model inference. Scenario: Beneficial in high-performance computing environments with multiple GPUs to maximize hardware utilization. |
| OLLAMA_INTEL_GPU | false | Enables experimental Intel GPU detection. Effect: Allows usage of Intel GPUs for model inference. Scenario: Useful for organizations leveraging Intel GPU hardware for AI workloads. |
| OLLAMA_LLM_LIBRARY | "" (auto-detect) | Sets the LLM library to use. Effect: Overrides automatic detection of LLM library. Scenario: Useful when you need to force a specific library version or implementation for compatibility or performance reasons. |
| OLLAMA_TMPDIR | System default temp directory | Sets the location for temporary files. Effect: Determines where temporary files are stored. Scenario: Important for managing I/O performance or when system temp directory has limited space. |
| CUDA_VISIBLE_DEVICES | All available | Sets which NVIDIA devices are visible. Effect: Controls which NVIDIA GPUs can be used. Scenario: Critical for managing GPU allocation in multi-user or multi-process environments. |
| HIP_VISIBLE_DEVICES | All available | Sets which AMD devices are visible. Effect: Controls which AMD GPUs can be used. Scenario: Similar to CUDA_VISIBLE_DEVICES but for AMD hardware. |
| OLLAMA_RUNNERS_DIR | System-dependent | Sets the location for runners. Effect: Determines where runner executables are located. Scenario: Important for custom deployments or when runners need to be isolated from the main application. |
| OLLAMA_NUM_PARALLEL | 0 (unlimited) | Sets the number of parallel model requests. Effect: Controls concurrency of model inference. Scenario: Critical for managing system load and ensuring responsiveness in high-traffic environments. |
| OLLAMA_MAX_LOADED_MODELS | 0 (unlimited) | Sets the maximum number of loaded models. Effect: Limits the number of models that can be simultaneously loaded. Scenario: Helps manage memory usage in environments with limited resources or many different models. |
| OLLAMA_MAX_QUEUE | 512 | Sets the maximum number of queued requests. Effect: Limits the size of the request queue. Scenario: Prevents system overload during traffic spikes and ensures timely processing of requests. |
| OLLAMA_MAX_VRAM | 0 (unlimited) | Sets a maximum VRAM override in bytes. Effect: Limits the amount of VRAM that can be used. Scenario: Useful in shared GPU environments to prevent a single process from monopolizing GPU memory. |
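
A quick sketch of how a few of these combine in practice; the host, path, and limits below are placeholders, not recommendations:

```bash
# Expose the server on the LAN, move the model store to a larger disk,
# keep models loaded longer, and cap the number of loaded models.
export OLLAMA_HOST=0.0.0.0:11434
export OLLAMA_MODELS=/mnt/bigdisk/ollama/models
export OLLAMA_KEEP_ALIVE=30m
export OLLAMA_MAX_LOADED_MODELS=2
ollama serve
```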

@yumemio commented on GitHub (Nov 20, 2024):

Not sure if this helps, but in version 0.4.2 `ollama help <subcommand>` lists available envvar configurations (not sure if the list is exhaustive). Note that the list is shown only when `help` is invoked with a subcommand.

The above table by @nikhil-swamix is more detailed, though!

```
$ ollama help serve
Start ollama

Usage:
  ollama serve [flags]

Aliases:
  serve, start

Flags:
  -h, --help   help for serve

Environment Variables:
      OLLAMA_DEBUG               Show additional debug information (e.g. OLLAMA_DEBUG=1)
      OLLAMA_HOST                IP Address for the ollama server (default 127.0.0.1:11434)
      OLLAMA_KEEP_ALIVE          The duration that models stay loaded in memory (default "5m")
      OLLAMA_MAX_LOADED_MODELS   Maximum number of loaded models per GPU
      OLLAMA_MAX_QUEUE           Maximum number of queued requests
      OLLAMA_MODELS              The path to the models directory
      OLLAMA_NUM_PARALLEL        Maximum number of parallel requests
      OLLAMA_NOPRUNE             Do not prune model blobs on startup
      OLLAMA_ORIGINS             A comma separated list of allowed origins
      OLLAMA_SCHED_SPREAD        Always schedule model across all GPUs
      OLLAMA_TMPDIR              Location for temporary files
      OLLAMA_FLASH_ATTENTION     Enabled flash attention
      OLLAMA_LLM_LIBRARY         Set LLM library to bypass autodetection
      OLLAMA_GPU_OVERHEAD        Reserve a portion of VRAM per GPU (bytes)
      OLLAMA_LOAD_TIMEOUT        How long to allow model loads to stall before giving up (default "5m")
```
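
If you only want the variable list, it can be filtered out of the help text (assuming a POSIX shell):

```bash
# Print just the environment-variable lines from the help output.
ollama help serve | grep 'OLLAMA_'
```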

@nikhil-swamix commented on GitHub (Nov 20, 2024):

As I understand from @yumemio's comment, 0.4.2 brings a good improvement to the variables, and the original Go env config file seems to have been changed, or rather improved...

What would be better is if a docs page were provided... at least something like
https://docs.continue.dev/getting-started/install

At this point I've scanned most of Ollama's source, so this may not apply to me, but new users would definitely benefit from docs, and thousands more by the day with the rise of AI!

If any maintainers are looking, please let me know whether a docs page is wanted. I have an experimental agent that generates doc sites and am hoping to make a meaningful contribution to Ollama.


@Vyerni commented on GitHub (Jan 22, 2025):

Thanks for the list.

Would be lovely to be able to list all needed models in environment variables, so it will download automatically on creation if missing.
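
That isn't built in, but a small startup wrapper can approximate it. A sketch, where OLLAMA_PRELOAD_MODELS is a made-up variable read only by the script, not by Ollama itself:

```bash
#!/usr/bin/env bash
# Pull any missing models named in a (made-up) OLLAMA_PRELOAD_MODELS
# variable, then keep the server in the foreground.
set -euo pipefail

ollama serve &            # start the server in the background
sleep 2                   # crude wait for the API to come up

for model in ${OLLAMA_PRELOAD_MODELS:-}; do
  ollama list | grep -q "^${model}" || ollama pull "${model}"
done

wait
```

Run it as, e.g., `OLLAMA_PRELOAD_MODELS="llama3 mistral" ./start-ollama.sh` (the script name is hypothetical).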


@melroy89 commented on GitHub (Jan 30, 2025):

> [quoting the environment-variable table from @nikhil-swamix's comment above]

Can this be documented somewhere in the FAQ? https://github.com/ollama/ollama/blob/main/docs/faq.md


@alpha-ulrich commented on GitHub (Mar 10, 2025):

@nikhil-swamix I would suggest adding that formally to the Ollama documentation! I was looking for exactly this information.


@SuperUserNameMan commented on GitHub (Mar 10, 2025):

btw, is there any env var to tell ollama how many CPU threads it is allowed to use?


@encryptic12 commented on GitHub (Aug 23, 2025):

This is seriously a **gaping hole** in the documentation. I am trying to use the new engine and memory estimation for testing, and I'm just assuming that a value of 1 for these enables them... an authoritative list of all environment variables, their default values, and their possible values is, I would have thought, imperative for complete documentation. How hard is it to provide?


@spygi commented on GitHub (Oct 8, 2025):

> [quoting the environment-variable table from @nikhil-swamix's comment above]

thank you for this!

small correction for others: the default for OLLAMA_NUM_PARALLEL is 1, per https://github.com/ollama/ollama/blob/f2e9c9aff5f59b21a5d9a9668408732b3de01e20/envconfig/config.go#L222
(also validated with version 0.12.3 on macOS)
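
For anyone who wants to check a default against the source themselves, a quick sketch (pin the tag of your installed version instead of cloning main if the exact value matters):

```bash
# Look up the default straight from the source tree.
git clone --depth 1 https://github.com/ollama/ollama.git
grep -n "OLLAMA_NUM_PARALLEL" ollama/envconfig/config.go
```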


@CL415 commented on GitHub (Jan 19, 2026):

Can this be reopened until the documentation does list them?


@zyberwoof commented on GitHub (Apr 11, 2026):

For anyone who is curious, you can find the valid environment variables on the fly by running `ollama serve --help`.

I agree that it would be best to have these values documented somewhere clearly. But I thought this tip might help others.
