Ollama responds 403 to OPTIONS requests on /api/tags #5712

Closed
opened 2025-11-12 13:07:42 -06:00 by GiteaMirror · 6 comments

Originally created by @gfmaster on GitHub (Feb 3, 2025).

What is the issue?

I'm trying to use Lobe Chat. However, when I try to set it up with Ollama, Lobe Chat is unable to connect to Ollama.

I started ollama serve with the following command:
OLLAMA_HOST=0.0.0.0 ollama serve

The log shows 403 responses to OPTIONS requests on /api/tags, while GET requests are served properly.

[GIN] 2025/02/03 - 21:25:38 | 200 | 2.090666ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/02/03 - 21:25:54 | 403 | 422.708µs | 192.168.1.40 | OPTIONS "/api/tags"
[GIN] 2025/02/03 - 21:25:59 | 403 | 5.875µs | 192.168.1.40 | OPTIONS "/api/tags"

This looks similar to the following issue:
https://github.com/ollama/ollama/issues/3746

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

0.5.7

GiteaMirror added the bug label 2025-11-12 13:07:42 -06:00
@rick-github commented on GitHub (Feb 3, 2025): https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-allow-additional-web-origins-to-access-ollama
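
For reference, the linked FAQ comes down to setting OLLAMA_ORIGINS before starting the server. A minimal sketch, assuming the macOS app and using the ds920.local origin that appears in the log below as an example value:

```
# macOS app: set the variable for launchd, then restart Ollama
launchctl setenv OLLAMA_ORIGINS "http://ds920.local:*"

# Or, when running the server by hand, pass it inline
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS="http://ds920.local:*" ollama serve
```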

@gfmaster commented on GitHub (Feb 3, 2025):

@rick-github Thank you for pointing that out. I was using the local domain name instead of the local IP to connect Lobe Chat. Specifying the local domain origin in OLLAMA_ORIGINS solved the problem.

I still have a question, though.

When I tried using the locally deployed Lobe Chat via the local IP, Ollama still rejected the OPTIONS request with 403.
This is the first line of the logs:
2025/02/03 22:00:48 routes.go:1187: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/keith/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://ds920.local:* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"

Why does it reject the OPTIONS request from the web app even though http://0.0.0.0:* is included in OLLAMA_ORIGINS?
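
A likely explanation (my reading, not confirmed against the Ollama source): the browser sends the Origin header of the page you actually loaded, e.g. http://192.168.1.40:3210, and that string is matched against the OLLAMA_ORIGINS patterns. http://0.0.0.0:* only matches if the page itself was served from 0.0.0.0. A rough sketch of this kind of wildcard matching, with a hypothetical pattern list:

```python
from fnmatch import fnmatch

# Hypothetical patterns, mirroring part of the OLLAMA_ORIGINS list in the log above.
allowed = ["http://ds920.local:*", "http://localhost:*", "http://0.0.0.0:*"]

def origin_allowed(origin: str) -> bool:
    """Return True if the browser's Origin header matches any allowed pattern."""
    return any(fnmatch(origin, pattern) for pattern in allowed)

# The browser sends the page's real origin, not the address Ollama binds to:
print(origin_allowed("http://0.0.0.0:3210"))       # True: matches http://0.0.0.0:*
print(origin_allowed("http://192.168.1.40:3210"))  # False: no pattern covers it -> 403
```

So a page reached via its LAN IP needs that IP (or a matching wildcard) listed in OLLAMA_ORIGINS.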


@rick-github commented on GitHub (Feb 3, 2025):

Does Lobe Chat have logs? They may show the rejection reason. If not, you can try running tcpdump on port 11434 and watch the interaction.
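
A concrete invocation, assuming a Linux host and root access (-A prints packet payloads as ASCII, so the HTTP headers, including Origin, are visible):

```
sudo tcpdump -A -i any 'tcp port 11434'
```

The preflight request's Origin header then shows exactly what string Ollama is matching against OLLAMA_ORIGINS.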


@yutianlong commented on GitHub (Feb 6, 2025):

I ran into the same problem when connecting hollama to ollama.
This is my service:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="GIN_MODE=release"

[Install]
WantedBy=default.target
When a request is sent, it returns:
No connection could be made because the target machine actively refused it. 10.110.111.2:14434
(There is no further response in the HTTP exchange.)
Here are my logs:
2月 06 10:58:20 tm-C660-G3 systemd[1]: Started Ollama Service.
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: 2025/02/06 10:58:20 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION>
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.665+08:00 level=INFO source=images.go:432 msg="total blobs: 5"
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.666+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: - using env: export GIN_MODE=release
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: - using code: gin.SetMode(gin.ReleaseMode)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.666+08:00 level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7)"
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.668+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx c>
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.669+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.699+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
2月 06 10:58:20 tm-C660-G3 ollama[1343580]: time=2025-02-06T10:58:20.699+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" t>
2月 06 11:06:29 tm-C660-G3 ollama[1343580]: [GIN] 2025/02/06 - 11:06:29 | 403 | 105.289µs | 10.11.7.2 | GET "/api/tags"
2月 06 11:07:31 tm-C660-G3 ollama[1343580]: [GIN] 2025/02/06 - 11:07:31 | 403 | 56.156µs | 10.11.7.2 | GET "/api/tags"


@rick-github commented on GitHub (Feb 6, 2025):

10.110.111.2:14434 is the wrong port number.
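
To spell it out: the startup log above says "Listening on [::]:11434", while the client error mentions 10.110.111.2:14434, a one-digit typo. A quick way to double-check, assuming shell access to both machines:

```
# on the server: confirm ollama is listening on 11434
ss -ltn | grep 11434

# from the client: use the correct port
curl http://10.110.111.2:11434/api/tags
```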


@LeisureLinux commented on GitHub (Feb 6, 2025):

> 10.110.111.2:14434 is the wrong port number.

He doesn't have good eyesight. :-)

Reference: github-starred/ollama-ollama#5712