[GH-ISSUE #418] Use with Continue.dev plugin in VSCodium seems broken (Linux) #191

Closed
opened 2026-04-12 09:43:05 -05:00 by GiteaMirror · 5 comments

Originally created by @matbgn on GitHub (Aug 26, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/418

I cannot get the ollama server communicating with the Continue plugin on VSCodium.

Continue still uses the ChatGPT API instead of the local one. Here is some context:

`~/.continue/config.py`

"""
This is the Continue configuration file.

If you aren't getting strong typing on these imports,
be sure to select the Python interpreter in ~/.continue/server/env.
"""

import subprocess

from continuedev.src.continuedev.core.main import Step
from continuedev.src.continuedev.core.sdk import ContinueSDK
from continuedev.src.continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider
from continuedev.src.continuedev.plugins.context_providers.google import GoogleContextProvider
from continuedev.src.continuedev.libs.llm.ollama import Ollama
# Models is needed for the ContinueConfig(models=...) call below;
# import path assumed from continuedev example configs of this era.
from continuedev.src.continuedev.core.models import Models

class CommitMessageStep(Step):
    """
    This is a Step, the building block of Continue.
    It can be used below as a slash command, so that
    run will be called when you type '/commit'.
    """
    async def run(self, sdk: ContinueSDK):

        # Get the root directory of the workspace
        dir = sdk.ide.workspace_directory

        # Run git diff in that directory
        diff = subprocess.check_output(
            ["git", "diff"], cwd=dir).decode("utf-8")

        # Ask gpt-3.5-16k to write a commit message,
        # and set it as the description of this step
        self.description = await sdk.models.gpt3516k.complete(
            f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:")


config = ContinueConfig(

    # If set to False, we will not collect any usage data
    # See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry
    allow_anonymous_telemetry=False,

    # GPT-4 is recommended for best results
    # See options here: https://continue.dev/docs/customization#change-the-default-llm
    models=Models(
        default=Ollama(model="codellama")
    ),

    # Set a system message with information that the LLM should always keep in mind
    # E.g. "Please give concise answers. Always respond in Spanish."
    system_message=None,

    # Set temperature to any value between 0 and 1. Higher values will make the LLM
    # more creative, while lower values will make it more predictable.
    temperature=0.5,

    # Custom commands let you map a prompt to a shortened slash command
    # They are like slash commands, but more easily defined - write just a prompt instead of a Step class
    # Their output will always be in chat form
    custom_commands=[CustomCommand(
        name="test",
        description="This is an example custom command. Use /config to edit it and create more",
        prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
    )],

    # Slash commands let you run a Step from a slash command
    slash_commands=[
        # SlashCommand(
        #     name="commit",
        #     description="This is an example slash command. Use /config to edit it and create more",
        #     step=CommitMessageStep,
        # )
    ],

    # Context providers let you quickly select context by typing '@'
    # Uncomment the following to
    # - quickly reference GitHub issues
    # - show Google search results to the LLM
    context_providers=[
        # GitHubIssuesContextProvider(
        #     repo_name="<your github username or organization>/<your repo name>",
        #     auth_token="<your github auth token>"
        # ),
        # GoogleContextProvider(
        #     serper_api_key="<your serper.dev api key>"
        # )
    ]
)
```
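Before blaming the plugin, it is worth confirming that the `codellama` model named in the config is actually pulled locally. A minimal sketch, assuming the default `127.0.0.1:11434` address and Ollama's standard `/api/tags` response shape:

```python
# Sketch: list local models via the GET /api/tags route and check that
# the "codellama" model referenced in the config above is present.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    tags = json.load(resp)

names = [m.get("name", "") for m in tags.get("models", [])]
print("codellama available:", any(n.startswith("codellama") for n in names))
```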

Ollama starts correctly and just waits indefinitely for instructions:

```
./ollama serve
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /                         --> github.com/jmorganca/ollama/server.Serve.func1 (4 handlers)
[GIN-debug] HEAD   /                         --> github.com/jmorganca/ollama/server.Serve.func2 (4 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/jmorganca/ollama/server.PullModelHandler (4 handlers)
[GIN-debug] POST   /api/generate             --> github.com/jmorganca/ollama/server.GenerateHandler (4 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/jmorganca/ollama/server.EmbeddingHandler (4 handlers)
[GIN-debug] POST   /api/create               --> github.com/jmorganca/ollama/server.CreateModelHandler (4 handlers)
[GIN-debug] POST   /api/push                 --> github.com/jmorganca/ollama/server.PushModelHandler (4 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/jmorganca/ollama/server.CopyModelHandler (4 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/jmorganca/ollama/server.ListModelsHandler (4 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/jmorganca/ollama/server.DeleteModelHandler (4 handlers)
2023/08/26 14:17:00 routes.go:452: Listening on 127.0.0.1:11434
```
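Since the route table above lists `POST /api/generate`, the server can also be exercised directly, bypassing Continue entirely. A minimal sketch, assuming the streaming newline-delimited JSON format Ollama used at the time:

```python
# Sketch: send a prompt straight to the local Ollama server and stream
# the reply. Each response line is a JSON object with a "response" field.
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps({"model": "codellama", "prompt": "Say hello"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for line in resp:
        print(json.loads(line).get("response", ""), end="", flush=True)
print()
```

If this prints a completion, the server side is healthy and the problem lies with the extension, which is what the comments below confirm.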

Continue's responses are clearly still coming from OpenAI:

![image](https://github.com/jmorganca/ollama/assets/13169819/fe7a39b4-aef5-48f9-b815-b51675cd4112)

GiteaMirror added the bug label 2026-04-12 09:43:05 -05:00

@sestinj commented on GitHub (Aug 26, 2023):

Hi @matbgn 👋 I'm an author of Continue. It looks like you might have an out-of-date version of the extension. I'd recommend first upgrading to the latest version (v0.0.332 or higher). There have been several updates to the extension, especially with respect to Ollama support. If this doesn't work, let me know and I'll look into the error right away!

I recognize there's also some chance that you actually have upgraded but perhaps some error on our end caused the UI not to update—if that's the case, then we've got a whole other bug on our hands : ) Let me know!


@matbgn commented on GitHub (Aug 26, 2023):

Thanks for your quick answer. Unfortunately I just installed Continue today, so it is the latest available version of the plugin (I know of no way to force a newer one), at least on the Open VSX Registry, the extension marketplace for VSCodium: https://open-vsx.org/extension/Continue/continue


@sestinj commented on GitHub (Aug 27, 2023):

@matbgn That would explain it! I haven't been continuously updating to the Open VSX Registry. I'll add this to our CI pipeline so I don't have the chance to forget. In the meantime, you can download the latest version from the bottom of the page [here](https://github.com/continuedev/continue/actions/runs/5992454295) (vsix-artifact) and then install it manually.
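(For reference: a downloaded `.vsix` can be installed into VSCodium from the command line with `codium --install-extension <path-to-vsix>`, or via the Extensions view's "Install from VSIX..." action.)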


@sestinj commented on GitHub (Aug 28, 2023):

@matbgn Newest version is now available on the Open VSX Registry, and will be updated along with the VS Code Extension Marketplace from now on: https://open-vsx.org/extension/Continue/continue


@mchiang0610 commented on GitHub (Aug 30, 2023):

@matbgn @sestinj it looks like this should be fixed? Thank you!!

I'll close this issue for now, but if anything arises, please feel free to reopen.


Reference: github-starred/ollama#191