[GH-ISSUE #7955] Inconsistency between Ollama REST API and CLI Model List causing model accessibility issues #51604

Closed
opened 2026-04-28 20:37:34 -05:00 by GiteaMirror · 5 comments

Originally created by @NasonZ on GitHub (Dec 5, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7955

Summary

There's a critical inconsistency between the models listed by Ollama's REST API endpoint (curl http://localhost:11434/api/tags) and by the CLI command (ollama list). This leads to models being accessible through
only one interface (either the CLI or the Python client) despite being physically present on the system (see the Logs section below for the discrepancy).

Description

Previously, Ollama had a consistent behaviour: all models were downloaded to .ollama\models\blobs, and both the CLI (ollama list) and REST API showed the same models. However, something has changed in how Ollama manages model storage, leading to a split in model accessibility between the CLI and REST API interfaces.

Timeline of Issue Discovery

  1. Initial State (Before Issue):

    • All models were stored in .ollama\models\blobs
    • Models were accessible via both CLI and Python client
    • ollama list and REST API showed identical model lists
  2. Issue Detection:

    • Noticed phi3:14b-medium-128k-instruct-q4_K_M was suddenly the only model visible in ollama list
    • Started redownloading needed models (hermes3 and qwen)
    • Later discovered via curl http://localhost:11434/api/tags that original models were still listed
    • Confirmed original model files still present in .ollama\models\blobs
    • However, newly downloaded models' files were not appearing in .ollama\models\blobs

Current Behaviour

  1. Split Storage Behaviour:

    • Original models:

      • Still physically present in .ollama\models\blobs
      • Visible via REST API (curl http://localhost:11434/api/tags)
      • Accessible through Python client
      • Invisible to CLI (ollama list)
    • Newly downloaded models:

      • Stored in unknown location (not in .ollama\models\blobs)
      • Visible via CLI
      • Not accessible through Python client
  2. Example of the Inconsistency:

    # Using curl to query Ollama's REST API
    curl http://localhost:11434/api/tags
    {
        "models": [
            {
                "name": "qwen2.5:14b-instruct-q4_K_M",
                "model": "qwen2.5:14b-instruct-q4_K_M",
                "size": 8988124069,
                "digest": "7cdf5a0187d5..."
            }
        ]
    }
    
    # Using Ollama CLI
    ollama list
    NAME                    ID              SIZE      MODIFIED
    qwen2.5:14b            7cdf5a0187d5    9.0 GB    23 hours ago
    

    Note that despite showing different model names, they reference the same model (same ID: 7cdf5a0187d5). The same happened with hermes3:latest.
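Judging from the logs, the CLI's ID column is simply the first 12 hex characters of the sha256 digest that the API reports, which is how entries can be matched across the two interfaces even when the names differ. A minimal sketch (the helper name is hypothetical):

```python
# Sketch: match a CLI "ID" against an API "digest" by prefix.
# Assumption (inferred from the logs in this issue): the CLI ID is the
# first 12 hex characters of the API's sha256 digest.
def same_model(cli_id: str, api_digest: str) -> bool:
    digest_hex = api_digest.split(":")[-1]  # tolerate a "sha256:" prefix
    return digest_hex.startswith(cli_id)

# Values taken from the qwen2.5 entries above:
print(same_model(
    "7cdf5a0187d5",
    "7cdf5a0187d5c58cc5d369b255592f7841d1c4696d45a8c8a9489440385b22f6",
))  # True
```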

  3. Model accessibility is split between interfaces:

    • Python client (which uses the REST API):
      from ollama import Client
      client = Client(host='http://localhost:11434')
      
      messages = [{"role": "user", "content": "What is the capital of France?"}]
      # Works - model name from REST API
      client.chat(model="qwen2.5:14b-instruct-q4_K_M", messages=messages, format="json") # Success
      
      # Fails - model name from CLI list
      response = client.chat(model="qwen2.5:14b", messages=messages, format="json") # Fails with "model not found"
      
    • CLI:
      # Works - model name from CLI list
      ollama run qwen2.5:14b  # Success
      
      # Fails - model name from REST API
      ollama run qwen2.5:14b-instruct-q4_K_M  # "Redownload" completed in under a second, which is obviously impossible for a 9 GB model; since it reported the same ID as the existing blob, it must have just reused the qwen2.5:14b file. By contrast, qwen2.5:14b itself redownloaded in full despite qwen2.5:14b-instruct-q4_K_M already being present in the blobs directory.
      
  4. Redownloading models:

    • Models that maintain the same name in both interfaces (e.g., hermes3:latest) work across both interfaces after redownload
    • Models with different names (e.g., Qwen variants) maintain the split behaviour

I think the .ollama\models\blobs directory is no longer recognised by the CLI, although it is still utilised by the Python client. This discrepancy explains why the Python client can access the original models, while the CLI cannot.
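One way to pin down the split is to diff the two name lists directly. A minimal sketch using a subset of the names from the logs below (fetching them live would mean parsing the ollama list output and the /api/tags JSON instead of hard-coding sets):

```python
# Sketch: which model names appear in only one interface?
# The sample names are a subset taken from the logs in this issue.
cli_models = {
    "qwen2.5:14b",
    "hermes3:latest",
    "phi3:14b-medium-128k-instruct-q4_K_M",
}
api_models = {
    "qwen2.5:14b-instruct-q4_K_M",
    "hermes3:latest",
    "phi3:14b-medium-128k-instruct-q4_K_M",
    "gemma2:9b",
}

only_cli = sorted(cli_models - api_models)
only_api = sorted(api_models - cli_models)
print("CLI only:", only_cli)  # ['qwen2.5:14b']
print("API only:", only_api)  # ['gemma2:9b', 'qwen2.5:14b-instruct-q4_K_M']
```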

Here's what happened in sequence:

  1. I noticed my models were missing from the CLI, so I redownloaded models like hermes3 and qwen2.5:14b
  2. Later, I discovered that my original model files were actually still present in the blobs directory
  3. I then tried to run qwen2.5:14b-instruct-q4_K_M (the name of the model in the original blob file); instead of loading the original version from the blobs directory, Ollama pointed to the newly downloaded qwen2.5:14b (hence why both share the same ID)

So it seems that Ollama is now storing models in a different directory.

Expected Behaviour

  • Ollama should maintain a single, consistent model storage location (traditionally .ollama\models\blobs)
  • Both the REST API (curl http://localhost:11434/api/tags) and the CLI (ollama list) should show the same list of models
  • Models should be accessible through both the CLI and the Python client
  • Ollama should store all models in the .ollama\models\blobs directory

System Information

  • OS: Windows 10
  • Original Models Location: C:\Users\Nason\.ollama\models\blobs
  • New Models Location: Unknown (not in traditional blob storage location)

Additional Notes

  • This appears to be a regression in Ollama's model storage management
  • The split between CLI and REST API suggests a fundamental change in how Ollama manages model storage
  • Despite the storage/accessibility issues, models function correctly when successfully accessed
  • The presence of original model files in blobs directory but invisibility to CLI suggests a broken link in Ollama's model registry system

Any advice or help on fixing this issue and getting back to the usual behaviour would be greatly appreciated.

Logs

Z:\nas\projects\deep_thinker> ollama list
NAME                                                          ID              SIZE      MODIFIED
hf.co/bartowski/deepthought-8b-llama-v0.01-alpha-GGUF:Q6_K    94820e8abf2f    6.6 GB    41 minutes ago
qwen2.5:14b                                                   7cdf5a0187d5    9.0 GB    22 hours ago
hermes3:latest                                                b5c6c7cb379d    4.7 GB    23 hours ago
phi3:14b-medium-128k-instruct-q4_K_M                          1372226c9b0b    8.6 GB    2 weeks ago

Z:\nas\projects\deep_thinker>curl http: //localhost:11434/api/tags
{
    "models": [
        {
            "name": "phi3:14b-medium-128k-instruct-q4_K_M",
            "model": "phi3:14b-medium-128k-instruct-q4_K_M",
            "modified_at": "2024-11-21T02:29:23.846191Z",
            "size": 8566823460,
            "digest": "1372226c9b0b61f3c693ef9dc54ec646d4cd75347eca849a76a04de22e350517",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "phi3",
                "families": [
                    "phi3"
                ],
                "parameter_size": "14.0B",
                "quantization_level": "Q4_K_M"
            }
        },
        {
            "name": "qwen2.5:14b-instruct-q4_K_M",
            "model": "qwen2.5:14b-instruct-q4_K_M",
            "modified_at": "2024-10-22T06:01:07.5828927+01:00",
            "size": 8988124069,
            "digest": "7cdf5a0187d5c58cc5d369b255592f7841d1c4696d45a8c8a9489440385b22f6",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "qwen2",
                "families": [
                    "qwen2"
                ],
                "parameter_size": "14.8B",
                "quantization_level": "Q4_K_M"
            }
        },
        {
            "name": "hermes3:latest",
            "model": "hermes3:latest",
            "modified_at": "2024-10-12T04:44:51.4633447+01:00",
            "size": 4661226630,
            "digest": "b5c6c7cb379dae220c8525ae71c4f025f4f85157070d594318b1764c882b217d",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "llama",
                "families": [
                    "llama"
                ],
                "parameter_size": "8.0B",
                "quantization_level": "Q4_0"
            }
        },
        ...
        {
            "name": "gemma2:9b",
            "model": "gemma2:9b",
            "modified_at": "2024-07-19T17:45:43.0850279+01:00",
            "size": 5443152417,
            "digest": "ff02c3702f322b9e075e9568332d96c0a7028002f1a5a056e0a6784320a4db0b",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "gemma2",
                "families": [
                    "gemma2"
                ],
                "parameter_size": "9.2B",
                "quantization_level": "Q4_0"
            }
        }
    ]
}
  • Note the different timestamps for hermes3:latest.
  • Also note that phi3's timestamp is from 2 weeks ago; it was the only model to survive whatever this issue is.

Testing models present in ollama list

Z:\nas\projects\deep_thinker>ollama run qwen2.5: 14b
>>> hi, tell me about yourself
Hello! I'm Qwen, an AI assistant developed by Alibaba Cloud. My purpose is to assist users like you in generating text on
various topics, answering questions, providing explanations, and helping with creative writing tasks. Whether you need
information, want to have a conversation, or are looking for assistance with something specific, feel free to ask me anything!

>>> /bye

Z:\nas\projects\deep_thinker>ollama run hf.co/bartowski/deepthought-8b-llama-v0.01-alpha-GGUF:Q6_K
>>> hi, tell me about yourself
The user has requested information about myself. This is a personal introduction task that requires crafting a concise       
yet informative response about my background, interests, and personality."
  },
  {
    "step": 2,
    "type": "data_gathering",
    "thought": "I need to collect relevant information about myself. As an AI language model, I don't have personal
experiences or emotions, but I can provide information about my capabilities and the nature of my existence."
  },
  {
    "step": 3,
    "type": "synthesis",
    "thought": "Based on the gathered information, I'll create a comprehensive yet concise description of myself: 'I am      
an AI language model designed to assist and communicate with humans. My purpose is to provide information, answer
questions, and engage in conversation based on my training data.'"
  },
  ...

So models from ollama list can be used with the CLI.

Testing a model only present in the api list

Z:\nas\projects\deep_thinker>ollama run gemma2: 9b 
pulling manifest
pulling ff1d1fc78170...   0% ▕                                                                   ▏ 8.9 MB/5.4 GB  4.3 MB/s  21m12^ 
C

Z:\nas\projects\deep_thinker>
Z:\nas\projects\deep_thinker>ollama run mistral:v0.3
pulling manifest
pulling ff82381e2bea...   0% ▕                                                                                                                                           ▏ 2.3 MB/4.1 GB                  ^C

Models from the API list can NOT be used with the CLI.

Testing client

Gemma works fine through the python client

from ollama import Client

client = Client(host='http://localhost:11434')
response = client.chat(
    model="gemma2:9b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

response
{'model': 'gemma2:9b',
 'created_at': '2024-12-05T18:41:41.508362Z',
 'message': {'role': 'assistant',
  'content': '{\n\n  "capitalOfFrance": "Paris" \n\n}'},
 ...}

The new qwen2.5 does not work through the python client

 response = client.chat(
    model="qwen2.5:14b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

response
File c:\Users\Nason\anaconda3\envs\agent_studio\lib\site-packages\ollama\_client.py:74, in Client._request(self, method, url, **kwargs)
     72   response.raise_for_status()
     73 except httpx.HTTPStatusError as e:
---> 74   raise ResponseError(e.response.text, e.response.status_code) from None
     76 return response

ResponseError: model "qwen2.5:14b" not found, try pulling it first

The new deepthought-8b does not work through the python client

client = Client(host='http://localhost:11434')
response = client.chat(
    model="hf.co/bartowski/deepthought-8b-llama-v0.01-alpha-GGUF:Q6_K",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

response
File c:\Users\Nason\anaconda3\envs\agent_studio\lib\site-packages\ollama\_client.py:74, in Client._request(self, method, url, **kwargs)
     72   response.raise_for_status()
     73 except httpx.HTTPStatusError as e:
---> 74   raise ResponseError(e.response.text, e.response.status_code) from None
     76 return response

ResponseError: model "hf.co/bartowski/deepthought-8b-llama-v0.01-alpha-GGUF:Q6_K" not found, try pulling it first

Old qwen2.5:14b does work through the python client

response = client.chat(
    model="qwen2.5:14b-instruct-q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

response
{'model': 'qwen2.5:14b-instruct-q4_K_M',
 'created_at': '2024-12-05T19:59:09.3711764Z',
 'message': {'role': 'assistant',
  'content': 'The capital of France is Paris.'},
  ...}

So, in summary, the logs and testing demonstrate that the Python client can access the original models but not the redownloaded ones, while the CLI can access the redownloaded models but not the originals.

OS

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.4.2

GiteaMirror added the wslbug labels 2026-04-28 20:37:34 -05:00

@rick-github commented on GitHub (Dec 5, 2024):

You are likely running two servers. Do ollama -v and curl http://localhost:11434/api/version show the same results? Have you previously installed ollama in WSL? Does tasklist show any ollama processes?
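That version check can be scripted: if the CLI binary and the HTTP endpoint report different versions, two distinct servers are certainly answering (a match, as it turned out later in this thread, is inconclusive). A minimal sketch with a hypothetical helper:

```python
def versions_disagree(cli_version: str, api_version: str) -> bool:
    """True when `ollama -v` and /api/version report different versions,
    which would prove two distinct servers are answering.
    A match does NOT rule out two servers running the same version."""
    return cli_version.strip() != api_version.strip()

# In this thread both reported 0.4.2, so the check was inconclusive:
print(versions_disagree("0.4.2", "0.4.2"))  # False
```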


@rick-github commented on GitHub (Dec 5, 2024):

Also try running this in a PowerShell in administrator mode:

netstat -bano | Select-String -Pattern 11434 -Context 0,1

@NasonZ commented on GitHub (Dec 5, 2024):

Thanks for the speedy response. I tried killing any running Ollama servers and restarting, but had no luck. I did install Ollama in a WSL environment weeks ago, but I haven't used that WSL environment in 2 weeks and have been using Ollama daily until now.

(base) PS Z:\nas\projects\deep_thinker> Get-Process | Where-Object {$_.ProcessName -like 'ollama'} | kill
(base) PS Z:\nas\projects\deep_thinker>  taskkill /fi "imagename eq ollama app.exe"

INFO: No tasks running with the specified criteria.
(base) PS Z:\nas\projects\deep_thinker> ollama -v
ollama version is 0.4.2
(base) PS Z:\nas\projects\deep_thinker> curl http://localhost:11434/api/version


StatusCode        : 200
StatusDescription : OK
Content           : {"version":"0.4.2"}
RawContent        : HTTP/1.1 200 OK
                    Content-Length: 19
                    Content-Type: application/json; charset=utf-8
                    Date: Thu, 05 Dec 2024 20:46:16 GMT

                    {"version":"0.4.2"}
Forms             : {}
Headers           : {[Content-Length, 19], [Content-Type, application/json; charset=utf-8], [Date, Thu, 05 Dec 2024       
                    20:46:16 GMT]}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : mshtml.HTMLDocumentClass
RawContentLength  : 19

(base) PS Z:\nas\projects\deep_thinker> netstat -bano | Select-String -Pattern 11434 -Context 0,1

>   TCP    127.0.0.1:8743         127.0.0.1:11434        TIME_WAIT       0
    TCP    127.0.0.1:8749         0.0.0.0:0              LISTENING       34224
>   TCP    127.0.0.1:11434        0.0.0.0:0              LISTENING       38128
   [wslrelay.exe]

@rick-github commented on GitHub (Dec 5, 2024):

>   TCP    127.0.0.1:11434        0.0.0.0:0              LISTENING       38128
   [wslrelay.exe]

You have an ollama server running in a WSL container.


@NasonZ commented on GitHub (Dec 5, 2024):

Bingo, that was the issue.

(base) PS Z:\nas\projects\deep_thinker> ollama list
NAME                                                          ID              SIZE      MODIFIED
qwen2.5:14b-instruct-q4_K_M                                   7cdf5a0187d5    9.0 GB    3 hours ago
qwen2.5:14b                                                   7cdf5a0187d5    9.0 GB    25 hours ago
hermes3:latest                                                b5c6c7cb379d    4.7 GB    25 hours ago
phi3:14b-medium-128k-instruct-q4_K_M                          1372226c9b0b    8.6 GB    2 weeks ago
(base) PS Z:\nas\projects\deep_thinker> wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu-22.04           Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2
(base) PS Z:\nas\projects\deep_thinker> wsl --shutdown
(base) PS Z:\nas\projects\deep_thinker> 
(base) PS Z:\nas\projects\deep_thinker> ollama list
NAME                                    ID              SIZE      MODIFIED     
phi3:14b-medium-128k-instruct-q4_K_M    1372226c9b0b    8.6 GB    2 weeks ago
qwen2.5:14b-instruct-q4_K_M             7cdf5a0187d5    9.0 GB    6 weeks ago
llama3.2:1b                             baf6a787fdff    1.3 GB    7 weeks ago
hermes3:latest                          b5c6c7cb379d    4.7 GB    7 weeks ago
nomic-embed-text:latest                 0a109f422b47    274 MB    2 months ago
mistral-nemo:latest                     4b300b8c6a97    7.1 GB    4 months ago
llama3.1:latest                         a340353013fd    4.7 GB    4 months ago
mistral:v0.3                            f974a74358d6    4.1 GB    4 months ago
llama3-groq-tool-use:latest             55065f5d86c6    4.7 GB    4 months ago
deepseek-coder-v2:latest                8577f96d693e    8.9 GB    4 months ago
gemma2:9b                               ff02c3702f32    5.4 GB    4 months ago
Reference: github-starred/ollama#51604