[GH-ISSUE #13917] remote ollama in vscode #9106

Open
opened 2026-04-12 21:57:32 -05:00 by GiteaMirror · 2 comments

Originally created by @skwde on GitHub (Jan 26, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/13917

Based on the docs https://docs.ollama.com/integrations/vscode I am trying to add a remote Ollama instance to VSCode Copilot Chat. However, the models are not listed when I click `Add Models...` and then `Ollama`; see the attached screenshot.

<img width="1262" height="257" alt="Image" src="https://github.com/user-attachments/assets/9d6b0810-dbb0-4711-9473-19c2dc400099" />

Also the VSCode setting

```json
{
  "github.copilot.chat.byok.ollamaEndpoint": "http://<IP>:11434"
}
```

does not resolve the issue.

Selecting Ollama to add a model does nothing.
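As a sanity check independent of the editor, the remote instance can be queried over the same HTTP endpoint that `ollama list` uses, `GET /api/tags`. A minimal Python sketch (the `<IP>` placeholder is the issue's own; the sample body only illustrates the documented response shape):

```python
import json

# For a live check, fetch the body with urllib
# (same endpoint `ollama list` queries; replace <IP> as in the issue):
#   from urllib.request import urlopen
#   body = urlopen("http://<IP>:11434/api/tags").read().decode()

def model_names(tags_body: str) -> list[str]:
    """Parse the JSON body of Ollama's GET /api/tags and return model names."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

# Sample body matching the documented /api/tags response shape:
sample = '{"models": [{"name": "glm-4.7-flash:latest"}]}'
print(model_names(sample))  # ['glm-4.7-flash:latest']
```

If this returns the model list from the remote host, the server side is fine and the problem is in how the extension reads the `ollamaEndpoint` setting.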

Note, I can do

```sh
$ OLLAMA_HOST=<IP>:11434 ollama list
NAME                    ID              SIZE     MODIFIED
glm-4.7-flash:latest    d1a8a26252f1    19 GB    8 hours ago
```

in the VSCode terminal and get a full list of the models as I would expect.

How should I add models to VSCode?


@amirsoroush commented on GitHub (Feb 4, 2026):

Same here.

I was trying to connect to another machine on my local network that serves models with Ollama. I made sure Ollama listens on 0.0.0.0, and I can use the Ollama client directly from my own machine.
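For reference, one common way to make a Linux server installation listen on all interfaces is a systemd override (a sketch per Ollama's FAQ; adjust if Ollama was installed differently):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (e.g. created via `sudo systemctl edit ollama.service`)
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After editing, apply with `sudo systemctl daemon-reload && sudo systemctl restart ollama`.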

But neither Zed nor VSCode worked. The VSCode behavior was exactly as the OP described. Zed could find the machine (it listed the models), but when I asked something it hung forever at 100% CPU usage.


@howsTricks commented on GitHub (Apr 13, 2026):

Same issue here

Reference: github-starred/ollama#9106