[GH-ISSUE #11659] Allow users to set context length like we used to instead of using terrible GUI presets every single launch #69766

Closed
opened 2026-05-04 19:07:53 -05:00 by GiteaMirror · 5 comments
Owner

Originally created by @FatheredPuma81 on GitHub (Aug 5, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11659

OLLAMA_CONTEXT_LENGTH no longer functions after 0.9.6. The GUI presets are not a good alternative when you need something between 32k and 64k context length, and they're far more inconvenient than double-clicking a script that sets the value to exactly what you want.
If this was intentional, my feature request is to bring the OLLAMA_CONTEXT_LENGTH variable back and have it override whatever the GUI setting is.
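
For reference, a minimal sketch of the script-based workflow this describes, assuming the `OLLAMA_CONTEXT_LENGTH=<tokens> ollama serve` form documented in the Ollama FAQ; 49152 is just an illustrative value between 32k and 64k:

```
# Illustrative only: start the server with an exact context length set via the environment.
OLLAMA_CONTEXT_LENGTH=49152 ollama serve

# Rough Windows batch equivalent (the "double-clickable script" the report mentions):
#   set OLLAMA_CONTEXT_LENGTH=49152
#   ollama serve
```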

GiteaMirror added the bug and feature request labels 2026-05-04 19:07:53 -05:00
Author
Owner

@FatheredPuma81 commented on GitHub (Aug 5, 2025):

Well, in the meantime I guess I'll be upgrading to version 0.9.6.

Author
Owner

@FatheredPuma81 commented on GitHub (Aug 5, 2025):

@jmorganca If it's a bug, then I plead "the AI told me it was intentionally deprecated, and I decided to update halfway through what I was doing and spent an hour trying to figure out wth was going on".
I fixed the OP.

Author
Owner

@av commented on GitHub (Aug 5, 2025):

Also stopped working for me in Docker, no app in sight
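
For a plain `docker run` setup like this (no desktop app involved), the variable would normally be passed with `-e`; a sketch assuming the standard `ollama/ollama` image, default port, and a named volume:

```
# Illustrative: pass the context length into the container at start-up.
docker run -d --name ollama \
  -e OLLAMA_CONTEXT_LENGTH=32768 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```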

Author
Owner

@blackfeather9 commented on GitHub (Aug 11, 2025):

I recently started using the `OLLAMA_CONTEXT_LENGTH` variable and it wasn't working at all (running ollama with docker compose). I set it to 128k using the following format in `ollama.env`:

```
OLLAMA_CONTEXT_LENGTH value="128000 ollama serve"
```

Perhaps I was reading this documentation wrong: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size

When I ran the container, it gave me the following error:

```
time=2025-08-11T11:29:42.338Z level=WARN source=config.go:210 msg="invalid environment variable, using default" key=OLLAMA_CONTEXT_LENGTH value="128000 ollama serve" default=4096
```

The solution this morning was to change my env variable to simply `OLLAMA_CONTEXT_LENGTH value=128000` and then run `docker compose down` and `docker compose up -d --force-recreate`, which loaded the new values. Loading a model into memory then shows it following whatever context I defined, i.e. `llama_context: n_ctx = 128000`.

Using the ollama image from 8/7, which I believe is running 0.11.4. Maybe this will help you.
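
For comparison, the env-file format Docker actually parses is plain `KEY=VALUE` (the `value=` wording above just mirrors the log line); a sketch of the working sequence described in this comment, assuming the compose service is named `ollama`:

```
# Illustrative: overwrite ollama.env with only the key/value pair (no "ollama serve" in the value).
cat > ollama.env <<'EOF'
OLLAMA_CONTEXT_LENGTH=128000
EOF

# Recreate the container so the new value is picked up.
docker compose down
docker compose up -d --force-recreate

# The startup log should no longer show the "invalid environment variable" warning.
docker compose logs ollama | grep -i OLLAMA_CONTEXT_LENGTH
```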

Author
Owner

@fictiontoreality commented on GitHub (Jan 17, 2026):

`OLLAMA_CONTEXT_LENGTH` works fine for me on ollama 0.14.2 running in a Docker container.

```
$ docker compose exec ollama ollama ps
NAME           ID              SIZE     PROCESSOR    CONTEXT    UNTIL
gpt-oss:20b    17052f91a42e    17 GB    100% GPU     128000     2 minutes from now
```

My `compose.yaml`:

```
services:
  # Easy setup of local LLM.
  ollama:
    # Requires Nvidia Container Toolkit for Nvidia GPU support.
    # Also supports AMD GPUs via a different approach.
    # For details: https://hub.docker.com/r/ollama/ollama
    image: ollama/ollama:0.14.2
    container_name: ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - /home/user/data/ollama:/root/.ollama
    environment:
      OLLAMA_CONTEXT_LENGTH: 128000
    ports:
      - 11434:11434
    # privileged: true
    restart: unless-stopped
```
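
If the value doesn't seem to apply, a quick way to check that it actually reaches the container (service name taken from the compose file above) is:

```
# Illustrative: confirm the variable is set inside the container...
docker compose exec ollama env | grep OLLAMA_CONTEXT_LENGTH
# Expected: OLLAMA_CONTEXT_LENGTH=128000

# ...and that a loaded model picks it up (CONTEXT column).
docker compose exec ollama ollama ps
```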
Reference: github-starred/ollama#69766