[GH-ISSUE #12353] Feature Request: Auto-size num_ctx to a user VRAM budget (and recalc on model switch) #70265

Open
opened 2026-05-04 20:49:05 -05:00 by GiteaMirror · 0 comments

Originally created by @taggedzi on GitHub (Sep 20, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/12353

Feature Request: Auto-size num_ctx to a user VRAM budget (and recalc on model switch)

Pitch:

Right now, users must guess a safe num_ctx. If it’s too high, VRAM overflows and Ollama may slow down or fall back to CPU, with no clear feedback. A --fit-vram option would auto-size context length to stay within GPU VRAM (with an optional --max-vram budget), recalculating on model switch. This prevents silent CPU fallback, makes usage smoother on smaller/shared GPUs, and removes trial-and-error for users.

TL;DR

Add a --fit-vram option so Ollama automatically chooses the largest safe context length that fits in GPU memory (with an optional --max-vram budget). The algorithm should also detect and respect each model’s maximum context limit (from GGUF metadata or n_ctx_train), recalculating whenever models are switched. This prevents silent CPU fallback, avoids exceeding model caps, and makes running on smaller or shared GPUs much smoother.


Description

Today, users must guess a num_ctx that won’t overflow GPU VRAM. If the chosen context length is too large for available VRAM, users report severe slowdowns and, in some cases, work shifting to system RAM and/or the CPU. There’s no built-in way to say:

“Keep me on GPU—use the largest safe context that fits this VRAM budget, without exceeding the model’s maximum allowable context.”

This makes it tricky to run Ollama smoothly on smaller GPUs (e.g., 6–8 GB cards) or in shared GPU environments, and can cause errors on models with unusually low maximum context sizes.


Requested behavior

  • New flag (e.g., --fit-vram) that auto-sizes num_ctx at model load to the largest value that fits available GPU VRAM.

  • Budget cap (e.g., --max-vram=6GB) so users on shared GPUs can leave headroom.

  • Respect model max context (model_max_ctx):

    • Detect from GGUF metadata (llama.context_length) or fall back to n_ctx_train from llama.cpp.
    • Clamp chosen context to this maximum.
    • Warn clearly if user-requested context exceeds model’s cap.
  • Recalculate the max safe num_ctx whenever the model/quant/format changes (hot-swap or reload).

  • Fail fast / warn (optionally --gpu-only) if even the minimal context would exceed the budget, rather than silently running on CPU.


Why this helps

  • Prevents confusing performance cliffs from CPU fallback or RAM spill.
  • Avoids crashes or invalid settings when models have low maximum contexts.
  • Makes Ollama friendlier on 6–8 GB GPUs (e.g., RTX 3070 Ti) and in shared environments.
  • Reduces trial-and-error around num_ctx, KV cache size, and offload settings.

Prior art / context

  • num_ctx exists today as an option in Ollama’s API/Modelfile, but there’s no auto-sizing to a VRAM budget or respect for a model’s max cap.
  • When VRAM is insufficient, users have observed Ollama running much slower or shifting work to CPU/RAM, consistent with the need for a protective auto-sizer.
  • VRAM usage increases with context via the KV cache (from llama.cpp), which is why long contexts often push memory beyond GPU limits.
  • Some models ship with very low max contexts (e.g., 512–2048), and ignoring these caps leads to unstable or misleading behavior.
  • Real-world usage shows that setting very large contexts can push inference off the GPU; dialing context down restores GPU usage and speed—further motivating an automatic fit-to-VRAM mode.

Technical notes (to aid implementation)

Heuristic + backoff algorithm

  • Probe free VRAM at load.
  • Estimate footprint = params + KV cache(num_ctx) + workspace buffers + safety headroom.
  • Start from requested/default num_ctx, then back off until estimate ≤ budget.
  • Headroom default (15–20%) to avoid fragmentation; overridable with --vram-headroom.
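
A minimal sketch of this estimate-and-back-off loop, in Go for illustration only; the footprint model, byte counts, and 512-token step size are assumptions, not existing Ollama code.

```go
// Illustrative only: a rough footprint model and back-off loop for --fit-vram.
// None of these names exist in Ollama today; real numbers would come from the
// GGUF header and llama.cpp's buffer estimates.
package main

import "fmt"

type footprintModel struct {
	paramBytes     uint64  // weights resident on the GPU
	kvBytesPerTok  uint64  // KV cache cost per context token
	workspaceBytes uint64  // scratch / compute buffers
	headroomFrac   float64 // safety headroom, e.g. 0.15 to 0.20
}

func (m footprintModel) estimate(numCtx int) uint64 {
	raw := m.paramBytes + m.kvBytesPerTok*uint64(numCtx) + m.workspaceBytes
	return raw + uint64(float64(raw)*m.headroomFrac)
}

// fitCtx starts from the requested context and backs off in fixed steps until
// the estimate fits the budget, never going below minCtx.
func fitCtx(m footprintModel, requested, minCtx int, budget uint64) (int, bool) {
	for ctx := requested; ctx >= minCtx; ctx -= 512 {
		if m.estimate(ctx) <= budget {
			return ctx, true
		}
	}
	return 0, false // even the minimal context does not fit
}

func main() {
	m := footprintModel{
		paramBytes:     3_400_000_000, // made-up 7B-quant weight size
		kvBytesPerTok:  524_288,       // made-up per-token KV cost
		workspaceBytes: 300_000_000,
		headroomFrac:   0.15,
	}
	ctx, ok := fitCtx(m, 8192, 512, 6<<30) // 6 GiB budget
	fmt.Println(ctx, ok)
}
```

With these made-up numbers the loop settles on 3584, i.e. the largest 512-token multiple whose estimate stays under the budget; a real implementation would plug in the actual weight, KV, and buffer sizes.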

Model max context detection

  • Read llama.context_length from GGUF metadata if available.
  • Otherwise parse n_ctx_train reported by llama.cpp.
  • If neither is available, use a conservative default (e.g., 2048) and log a warning.
  • Clamp final context to:
    effective_ctx = min(requested_ctx, model_max_ctx, vram_fit_ctx)
  • Warn if user-requested context exceeds model_max_ctx.
  • Allow advanced override flags: --assume-max-ctx=<N> and --ignore-model-max-ctx.
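
Continuing the illustrative Go sketch, detection and clamping could look roughly like this. The zero-means-unknown convention and function names are assumptions; the GGUF key, the 2048 default, and the min() clamp are the ones listed above.

```go
// resolveMaxCtx picks the model's maximum context. ggufContextLength and
// nCtxTrain stand in for values read from GGUF metadata (llama.context_length)
// and reported by llama.cpp (n_ctx_train); 0 means "unknown".
func resolveMaxCtx(ggufContextLength, nCtxTrain, assumeMaxCtx int) (maxCtx int, assumed bool) {
	switch {
	case assumeMaxCtx > 0: // --assume-max-ctx override
		return assumeMaxCtx, false
	case ggufContextLength > 0:
		return ggufContextLength, false
	case nCtxTrain > 0:
		return nCtxTrain, false
	default:
		return 2048, true // conservative default; caller logs a warning
	}
}

// effectiveCtx applies the clamp from above:
// effective_ctx = min(requested_ctx, model_max_ctx, vram_fit_ctx)
func effectiveCtx(requested, modelMax, vramFit int) int {
	eff := requested
	if modelMax < eff {
		eff = modelMax
	}
	if vramFit < eff {
		eff = vramFit
	}
	return eff
}
```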

Cross-platform GPU APIs

  • CUDA: cudaMemGetInfo.
  • ROCm: hipMemGetInfo.
  • Metal: MTLDevice.recommendedMaxWorkingSetSize + allocation tracking.
  • On Apple silicon, respect unified memory.
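
A thin abstraction could hide the per-platform calls; the interface and method names below are hypothetical, and only the underlying APIs (cudaMemGetInfo, hipMemGetInfo, recommendedMaxWorkingSetSize) are the real ones listed above.

```go
// vramProber abstracts the per-platform "how much GPU memory can I use" query.
// Concrete implementations would wrap cudaMemGetInfo (CUDA), hipMemGetInfo
// (ROCm), or MTLDevice.recommendedMaxWorkingSetSize plus allocation tracking
// (Metal) via cgo; this interface is illustrative, not an existing Ollama type.
type vramProber interface {
	// FreeBytes returns currently allocatable GPU memory in bytes.
	FreeBytes() (uint64, error)
	// Unified reports whether the device shares memory with the CPU
	// (Apple silicon), in which case the budget must also leave room
	// for the OS and other processes.
	Unified() bool
}
```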

Shared GPU / budget cap

  • If --max-vram is set, cap to min(free_vram, budget).
  • Optionally --reserve-vram=<MB> to leave space for other apps (see the sketch below).
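
A small sketch of combining the probe result with both caps; treating 0 as "no cap" is an assumed convention.

```go
// budgetBytes combines free VRAM with the user caps:
// usable = min(free - reserve, max_vram); maxVRAM == 0 means no --max-vram cap.
func budgetBytes(freeVRAM, maxVRAM, reserveVRAM uint64) uint64 {
	if reserveVRAM >= freeVRAM {
		return 0 // nothing left once the reservation is honoured
	}
	usable := freeVRAM - reserveVRAM
	if maxVRAM > 0 && maxVRAM < usable {
		usable = maxVRAM
	}
	return usable
}
```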

Dry-run & observability

  • --fit-vram=check: print computed max num_ctx, model_max_ctx, and breakdown.
  • On load, log chosen num_ctx and breakdown at INFO level.
  • API endpoint (GET /v1/compute/max_context) could return the calculation for a given (model, quant, gpu).
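
For the proposed endpoint, the response could simply mirror the dry-run breakdown. The struct and field names below are one possible shape, not an existing API.

```go
// maxContextResponse is one possible payload for the proposed
// GET /v1/compute/max_context endpoint (field names illustrative).
type maxContextResponse struct {
	Model        string `json:"model"`
	Quant        string `json:"quant"`
	GPU          string `json:"gpu"`
	ModelMaxCtx  int    `json:"model_max_ctx"`
	VRAMFitCtx   int    `json:"vram_fit_ctx"`
	EffectiveCtx int    `json:"effective_ctx"`
	// Byte-level breakdown, mirroring the dry-run / INFO log lines.
	ParamBytes     uint64 `json:"param_bytes"`
	KVCacheBytes   uint64 `json:"kv_cache_bytes"`
	WorkspaceBytes uint64 `json:"workspace_bytes"`
	HeadroomBytes  uint64 `json:"headroom_bytes"`
}
```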

Recalculation triggers

  • Recompute on model switch, quant change, GPU change, or offload settings change.
  • Cache results per (model_id, quant, gpu_uuid) for fast reuse; invalidate on driver change or memory pressure.
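
The cache keyed on (model_id, quant, gpu_uuid) might look like this; types and names are hypothetical, and the snippet assumes the standard sync package is imported.

```go
// fitKey identifies one cached --fit-vram computation; invalidate the whole
// cache on driver change or under memory pressure.
type fitKey struct {
	ModelID string
	Quant   string
	GPUUUID string
}

type fitCache struct {
	mu      sync.Mutex
	entries map[fitKey]int // key -> computed vram_fit_ctx
}

func (c *fitCache) get(k fitKey) (int, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	ctx, ok := c.entries[k]
	return ctx, ok
}

func (c *fitCache) put(k fitKey, ctx int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.entries == nil {
		c.entries = make(map[fitKey]int)
	}
	c.entries[k] = ctx
}
```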

Failure policy

  • If minimal context cannot fit under budget:

    • With --gpu-only, fail fast with a clear error.
    • Otherwise fall back with explicit warning: “Falling back to CPU; requested budget insufficient for GPU with any context.”
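
As a sketch of that policy (function name and wiring are assumptions; the warning string is the one quoted above; assumes errors and log are imported):

```go
// onNoFit decides what happens when even the minimal context exceeds the budget.
func onNoFit(gpuOnly bool) error {
	if gpuOnly {
		return errors.New("minimal safe context exceeds the VRAM budget; aborting (gpu-only mode)")
	}
	log.Println("[warn] Falling back to CPU; requested budget insufficient for GPU with any context.")
	return nil
}
```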

Config & UX

  • Flags: --fit-vram[=on|off|check], --max-vram=6GB, --vram-headroom=15%, --gpu-only.
  • Env vars: OLLAMA_FIT_VRAM, OLLAMA_MAX_VRAM_MB, etc.
  • Return effective num_ctx and model_max_ctx in model load responses so clients can adapt.
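
A possible env-var mapping: only OLLAMA_FIT_VRAM and OLLAMA_MAX_VRAM_MB are named above; the struct and parsing are illustrative and assume os and strconv are imported.

```go
// fitConfig gathers the proposed knobs from the environment; CLI flags would
// override these values.
type fitConfig struct {
	FitVRAM   string // "on", "off", or "check"
	MaxVRAMMB uint64 // 0 = uncapped
}

func fitConfigFromEnv() fitConfig {
	cfg := fitConfig{FitVRAM: "off"}
	if v := os.Getenv("OLLAMA_FIT_VRAM"); v != "" {
		cfg.FitVRAM = v
	}
	if v := os.Getenv("OLLAMA_MAX_VRAM_MB"); v != "" {
		if mb, err := strconv.ParseUint(v, 10, 64); err == nil {
			cfg.MaxVRAMMB = mb
		}
	}
	return cfg
}
```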

Docs snippet

  • Include a small illustrative table: VRAM vs. typical max context for common GPUs (8/12/24 GB) and models (3B/7B/13B quants).

Future-proofing

  • Integrate with paged KV cache or sliding window attention once available.
  • Consider a token-rate guard: warn if throughput would crater even if it technically fits.

Sample UX (CLI)

Requested context > model cap:

$ ollama run mistral:7b --fit-vram --max-vram=8GB --num_ctx=8192
[info] Model reports max context: 4096 tokens
[warn] Requested num_ctx=8192 exceeds model cap → clamping to 4096
[info] GPU memory free: 7750 MB; budget: 8192 MB
[info] Params: 3.2 GB; KV (4096): 1.7 GB; workspace+headroom: 0.5 GB
[info] Effective context length: 4096 (bounded by model cap)
> Model loaded on GPU with context 4096

VRAM budget < model cap:

$ ollama run llama3:8b --fit-vram --max-vram=5GB --num_ctx=8192
[info] Model max context: 8192
[info] Budget 5.0 GB cannot fit 8192 → testing lower contexts
[info] Effective context length: 3584 (fits model cap and budget)
> Model loaded on GPU with context 3584

Unknown model cap:

$ ollama run some-older-gguf --fit-vram
[warn] Model max context unknown (no GGUF key; n_ctx_train not reported)
[info] Using conservative default cap: 2048 (override with --assume-max-ctx)
[info] Effective context length: 2048

Over-budget + gpu-only:

$ ollama run llama2:13b --fit-vram --max-vram=6GB --gpu-only
[error] Minimal safe context exceeds 6144 MB budget.
[error] Cannot run on GPU with any context under this limit.
Aborting (gpu-only mode).

Edited: Added detection of the model’s maximum context and fallback behavior.

GiteaMirror added the feature request label 2026-05-04 20:49:05 -05:00
Reference: github-starred/ollama#70265