[GH-ISSUE #6389] OLLAMA_ORIGINS environment variables appends instead of sets #4014

Open
opened 2026-04-12 14:53:07 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @saddy001 on GitHub (Aug 16, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6389

What is the issue?

Setting "OLLAMA_ORIGINS=*://localhost,*://127.0.0.1" will result in these entries being added to the allowed origins. Is this intended? I thought it should be overridden.

Before:
OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*]

After:
OLLAMA_ORIGINS:[*://localhost *://127.0.0.1 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*]

Looking into https://github.com/ollama/ollama/blob/d29cd4c2ed104a1f6fba16a264c3cc7785a7d82f/envconfig/config.go#L66 it is also clear why: "0.0.0.0" is iterated every time.
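A minimal Go sketch (not the actual envconfig code) of how this append-instead-of-override behavior arises: the user's comma-separated entries are prepended, and the built-in defaults are then appended unconditionally for every host, reproducing the "After" list above. The function name `allowedOrigins` is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedOrigins mimics building the CORS allow-list: user-supplied entries
// come first, but the defaults are still appended for every host on every
// call, which is why "0.0.0.0" (and the rest) reappear after the user's
// entries instead of being replaced.
func allowedOrigins(env string) []string {
	var origins []string
	if env != "" {
		origins = strings.Split(env, ",")
	}
	for _, host := range []string{"localhost", "127.0.0.1", "0.0.0.0"} {
		origins = append(origins,
			"http://"+host, "https://"+host,
			"http://"+host+":*", "https://"+host+":*",
		)
	}
	return append(origins, "app://*", "file://*", "tauri://*")
}

func main() {
	// Prints the user entries followed by all defaults, matching the report.
	fmt.Println(allowedOrigins("*://localhost,*://127.0.0.1"))
}
```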

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.3.6

GiteaMirror added the bug label 2026-04-12 14:53:07 -05:00
Author
Owner

@Logeshplayboy001 commented on GitHub (Jan 22, 2026):

Key Updates
Service Layer (services/aiService.ts): Replaced the Google Gemini client with a custom fetch implementation that connects to the Ollama API (/api/chat, /api/tags, /api/generate). It handles NDJSON streaming responses from Ollama.
Settings Panel (components/SettingsPanel.tsx): Now allows you to configure the Base URL and dynamically fetches the list of models installed on your Ollama instance. It also includes helpful error messages for connection issues (e.g., CORS).
Configuration (types.ts, App.tsx): Updated the data structures to support dynamic model lists and base URL configuration.
Important Note on CORS
For the browser to connect to your local Ollama instance, you must start Ollama with the OLLAMA_ORIGINS environment variable set to allow cross-origin requests.
Mac/Linux:

```bash
OLLAMA_ORIGINS="*" ollama serve
```

Windows (PowerShell):

```powershell
$env:OLLAMA_ORIGINS="*"; ollama serve
```
Here are the code changes:
types.ts
services/aiService.ts
components/SettingsPanel.tsx
App.tsx
I will enhance the application to better visualize the conversation in two key ways:
Reasoning Visualization: Support for <think> tags (common in reasoning models like DeepSeek-R1 via Ollama). I will parse these tags to display the model's internal thought process in a collapsible, distinct UI element, separating it from the final response.
Performance Metrics: I will capture and visualize the generation statistics provided by Ollama (tokens per second, generation time, token count) at the bottom of each response, giving you visibility into the local model's performance.
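The two ideas above can be sketched in Go (a language-neutral illustration; the app itself is TypeScript): splitting out <think>…</think> reasoning text, and deriving tokens/sec from the eval_count and eval_duration fields in Ollama's final stream object, where eval_duration is in nanoseconds. The function names are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// splitThinking returns the reasoning inside <think>…</think> (if any)
// and the remaining answer text with the tags removed.
func splitThinking(s string) (thinking, answer string) {
	start := strings.Index(s, "<think>")
	end := strings.Index(s, "</think>")
	if start == -1 || end == -1 || end < start {
		return "", s // no reasoning block present
	}
	thinking = s[start+len("<think>") : end]
	answer = s[:start] + s[end+len("</think>"):]
	return strings.TrimSpace(thinking), strings.TrimSpace(answer)
}

// tokensPerSecond converts Ollama's eval stats (token count plus duration
// in nanoseconds) into a generation rate.
func tokensPerSecond(evalCount int, evalDurationNS int64) float64 {
	if evalDurationNS == 0 {
		return 0
	}
	return float64(evalCount) / (float64(evalDurationNS) / 1e9)
}

func main() {
	th, ans := splitThinking("<think>reason here</think>The answer.")
	fmt.Println(th, "|", ans)
	fmt.Println(tokensPerSecond(120, 2_000_000_000)) // 120 tokens over 2s
}
```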
Here are the updates:


Reference: github-starred/ollama#4014