[GH-ISSUE #11719] Ollama has issues with the GGUF model downloaded from ModelScope #69816

Closed
opened 2026-05-04 19:28:00 -05:00 by GiteaMirror · 0 comments
Owner

Originally created by @williamlzw on GitHub (Aug 6, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/11719

What is the issue?

https://github.com/microsoft/semantic-kernel/issues/12821#issuecomment-3155229077

ollama pull modelscope.cn/Qwen/Qwen3-Embedding-4B-GGUF:Qwen3-Embedding-4B-Q4_K_M.gguf

This model pulled successfully. When my C# code (using semantic-kernel) loads both qwen3:4b and Qwen3-Embedding-4B-GGUF:Qwen3-Embedding-4B-Q4_K_M.gguf, qwen3:4b is offloaded and then reloaded on every dialog turn.
However, when I pair qwen3:4b with an embedding model from the official Ollama library, such as granite-embedding, qwen3:4b is not offloaded between conversations.
I don't know whether the problem is in the Go code of the Ollama server or in the C# semantic-kernel library.
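To pin down where the eviction happens, one option is to watch which models Ollama keeps resident between turns: the server exposes a GET /api/ps endpoint (CLI equivalent: `ollama ps`) that lists currently loaded models. A minimal sketch of the check, using a hypothetical sample response rather than real server output:

```shell
# In practice, after each chat turn you would run:
#   curl -s http://localhost:11434/api/ps
# Here we use an illustrative (hypothetical) sample response to show the check.
response='{"models":[{"name":"qwen3:4b","size_vram":3400000000}]}'

# If this check starts failing right after an embedding request, the
# server evicted qwen3:4b to make room for the embedding model.
if echo "$response" | grep -q '"name":"qwen3:4b"'; then
    echo "qwen3:4b still loaded"
else
    echo "qwen3:4b was offloaded"
fi
```

If qwen3:4b drops out of the list only when the ModelScope embedding model is in play, that points at the server's scheduling rather than the client.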

.NET 9.0
Microsoft.SemanticKernel 1.61.0

```csharp
#pragma warning disable SKEXP0001

using Microsoft.Extensions.AI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.VectorData;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.InMemory;
using Microsoft.SemanticKernel.Data;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

class Program1
{
    public static void Test()
    {
        Program2 program2 = new Program2();
        program2.Test2().GetAwaiter().GetResult();
    }

    public static void Main()
    {
        Test();
    }
}

class Program2
{
    public async Task Test2()
    {
        var builder = Kernel.CreateBuilder();
        var modelId = "qwen3:4b";
        //var embeddingModelId = "granite-embedding:278m"; // official library model: qwen3:4b stays loaded
        var embeddingModelId = "modelscope.cn/Qwen/Qwen3-Embedding-4B-GGUF:Qwen3-Embedding-4B-Q4_K_M.gguf"; // ModelScope model: qwen3:4b is offloaded each turn
        var endpoint = new Uri("http://localhost:11434");
        builder.Services
            .AddOllamaChatCompletion(modelId, endpoint)
            .AddOllamaEmbeddingGenerator(embeddingModelId, endpoint);
        var kernel = builder.Build();
        var chatService = kernel.GetRequiredService<IChatCompletionService>();
        var embeddingService = kernel.GetRequiredService<IEmbeddingGenerator<string, Embedding<float>>>();

        var vectorStore = new InMemoryVectorStore(new() { EmbeddingGenerator = embeddingService });
        var collection = vectorStore.GetCollection<string, InformationItem>("ExampleCollection");
        await collection.EnsureCollectionExistsAsync();

        var collectionName = "ExampleCollection";
        foreach (var factTextFile in Directory.GetFiles("Facts", "*.txt"))
        {
            var factContent = File.ReadAllText(factTextFile);
            await collection.UpsertAsync(new InformationItem()
            {
                Id = Guid.NewGuid().ToString(),
                Text = factContent
            });
        }

        var vectorStoreTextSearch = new VectorStoreTextSearch<InformationItem>(collection);
        kernel.Plugins.Add(vectorStoreTextSearch.CreateWithSearch("SearchPlugin"));

        while (true)
        {
            Console.ForegroundColor = ConsoleColor.White;
            Console.Write("User > ");
            var question = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(question))
            {
                DisposeServices(kernel);
                return;
            }

            var response = kernel.InvokePromptStreamingAsync(
                promptTemplate: @"Question: {{input}}
            Answer the question using the memory content:
            {{#with (SearchPlugin-Search input)}}
              {{#each this}}
                {{this}}
                -----------------
              {{/each}}
            {{/with}}",
                templateFormat: HandlebarsPromptTemplateFactory.HandlebarsTemplateFormat,
                promptTemplateFactory: new HandlebarsPromptTemplateFactory(),
                arguments: new KernelArguments()
                {
                    { "input", question },
                    { "collection", collectionName }
                });

            Console.Write("\nAssistant > ");
            await foreach (var message in response)
            {
                Console.Write(message);
            }
            Console.WriteLine();
        }
    }

    static void DisposeServices(Kernel kernel)
    {
        foreach (var target in kernel
            .GetAllServices<IChatCompletionService>()
            .OfType<IDisposable>())
        {
            target.Dispose();
        }
    }
}

/// <summary>
/// Information item representing the embedding data stored in memory
/// </summary>
internal sealed class InformationItem
{
    [VectorStoreKey]
    [TextSearchResultName]
    public string Id { get; set; } = string.Empty;

    [VectorStoreData]
    [TextSearchResultValue]
    public string Text { get; set; } = string.Empty;

    [VectorStoreVector(Dimensions: 1536)]
    public string Embedding => this.Text;
}
```
GiteaMirror added the bug label 2026-05-04 19:28:00 -05:00
Reference: github-starred/ollama#69816