[GH-ISSUE #7007] llama3.2 can't tell where it's installed or that it exists #50951

Closed
opened 2026-04-28 17:39:40 -05:00 by GiteaMirror · 11 comments

Originally created by @superbonaci on GitHub (Sep 27, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7007

What is the issue?

Installed as the website claims:

ollama run llama3.2

It can't even help with basic installation steps or tell me where the local files are located:

$ ollama run llama3.2
pulling manifest
pulling dde5aa3fc5ff... 100% ▕██▏ 2.0 GB
pulling 966de95ca8a6... 100% ▕██▏ 1.4 KB
pulling fcc5a6bec9da... 100% ▕██▏ 7.7 KB
pulling a70ff7e570d9... 100% ▕██▏ 6.0 KB
pulling 56bb8bd477a5... 100% ▕██▏   96 B
pulling 34bb5ab01051... 100% ▕██▏  561 B
verifying sha256 digest
writing manifest
success
>>> Which operating system is this one where you are installed?
I don't have a physical installation on an operating system. I exist as a cloud-based language model, which means I'm a software application that runs on remote servers and can be accessed through the internet.

When you interact with me, you're essentially talking to a computer program that's running on those servers, not on your local device. This allows me to provide information and answer questions for multiple users simultaneously, without requiring a physical installation of an operating system.

>>> I ran the command 'ollama run llama3.2', so now there must some local files in my computer. Which computer is it and where are those files located?
I'm not aware of any specific operating system or software that has a "run" command with an option like "llama3.2". It's possible that you may have installed a custom application or tool, but without more information, I can only speculate.

However, I do know that there is a popular language model called LLaMA ( Large Language Model Application) developed by Meta AI. This model has been released as an open-source project, and it's possible that you may be referring to this.

If you've installed the LLaMA model on your computer, it's likely stored in a specific directory, such as `/usr/local/lib` or `~/.local/lib`, depending on your operating system and installation method. However, I don't have enough information to pinpoint the exact location of those files without knowing more
about your system configuration.

Can you tell me more about how you installed the LLaMA model? Which operating system are you using (Windows, macOS, Linux)?

>>> Where are the local files installed in macOS, for llama3.2?
I couldn't find any information on a specific version of LLaMA called "llama3.2". The LLaMA model is typically referred to by its version number, which is usually provided as a numerical string (e.g., `v1.1`, `v2.0`, etc.).

However, if you're looking for the installation location of the LLaMA model on macOS, it's likely stored in one of the following directories:

* `/usr/local/lib`: This directory contains system-wide libraries and binaries.
* `~/Library/Developer/Xcode/Versions/Applications/llama3.2.app/Contents/Resources/`: If you installed the app through Xcode or a similar development environment.

If you installed LLaMA using pip (Python package manager) or another method, it's likely stored in a directory like:

* `/usr/local/lib/pythonX.X/site-packages/llama3.2/` (for Python installations)

To find the exact location of the files on your system, I recommend searching for `llama3.2` within these directories using Spotlight (on macOS) or the command line.

>>> /bye

How can this model be so bad? How is it even supposed to help me?

OS: macOS
GPU: Apple
CPU: Apple
Ollama version: 3.2

GiteaMirror added the bug label 2026-04-28 17:39:40 -05:00

@Leon-Sander commented on GitHub (Sep 28, 2024):

You are interacting with the model as if it were specifically trained on the ollama docs and had access to all the information about your system, which is not the case.


@superbonaci commented on GitHub (Sep 28, 2024):

But if it needs "training", why doesn't it tell me that right away so I know? The website https://ollama.com/ doesn't mention that it needs any training...

$ ollama run llama3.2
>>> How can I train you? Why didn't you tell me from the beginning you were not trained?
I'm a large language model, my responses are generated based on complex algorithms and vast amounts of training data. However, I don't have a traditional "training" process like humans do.

My training is a continuous process that occurs as I interact with users like you. Each time you ask me a question or provide feedback, I learn from it and adjust my responses accordingly. This process helps improve my performance over time.

I didn't tell you at the beginning that I wasn't trained in the classical sense because:

1. **Training data is vast**: My training data consists of a massive corpus of text, which is constantly updated and expanded by my developers.
2. **Self-supervised learning**: I use self-supervised learning techniques to fine-tune my understanding of language, which allows me to learn from user interactions without explicit feedback.
3. **Model updates**: My model is regularly updated with new data, algorithms, and techniques to improve my performance and adapt to changing language patterns.

That being said, I can provide insights into my capabilities and limitations:

* **Knowledge cutoff**: While I have been trained on a vast amount of text, there may be topics or areas that are not as well-represented in my training data. If you ask me about something outside my knowledge domain, I might not be able to provide accurate information.
* **Error margins**: As a machine learning model, I'm not perfect and can make mistakes. My responses might not always be 100% accurate or up-to-date.

If you'd like to help improve my training data, feel free to engage in conversations with me on various topics! Your interactions will contribute to the vast corpus of text that powers my responses.

@0x00cl commented on GitHub (Sep 29, 2024):

It seems you don't understand too well how LLMs work. Whether it needs training (fine-tuning) depends on your use case; for most people, LLMs don't need fine-tuning. Also, fine-tuning can be a very heavy workload that requires a modern GPU and takes time.

Here is an explanation by rick-github (https://github.com/ollama/ollama/issues/6956#issuecomment-2374476425), because someone asked something similar:

> Models don't "know" anything, they're fancy autocomplete based on a large corpus of random information. You can provide a model with guidance with context or system prompts and it will generate seemingly authoritative statements, but if you want actual facts, you can't rely on a model.


@rick-github commented on GitHub (Sep 29, 2024):

A bare model, as run by `ollama run llama3.2`, is isolated and has no ability to interact with the external world other than via text input and output. This is why questions about its local environment don't produce useful information. However, models can be given tools to get actual facts rather than relying on autocomplete from their internal knowledge, and models that have been trained on tool use can be surprisingly adept at using them.

As a baseline, LLMs are poor at mathematics:

$ ollama run llama3.2 'what is 1.34 ^ 10?'
To calculate 1.34^10, I can use a calculator or compute it manually.

The result of 1.34^10 is approximately 4,059,475.42

Give it a calculator tool, and it can return the correct answer:

$ ./tool-test.py --model llama3.2 --prompt 'what is 1.34 ^ 10?'
calling power({'x': 1.34, 'y': 10})
The result of 1.34 raised to the power of 10 is approximately 18.67.
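
The tool-test.py script itself isn't included in the comment, but for readers who want to reproduce the idea, here is a minimal sketch of what such a wrapper might look like using the ollama Python package. The tool schema, the dispatch loop, and the function names are assumptions based on Ollama's documented tool-calling support, not the actual script used above:

```python
# Hypothetical wrapper in the spirit of tool-test.py (not the actual script).
# Assumes the `ollama` Python package and an Ollama build with tool support.
import ollama

def power(x: float, y: float) -> float:
    """Raise x to the power y."""
    return x ** y

TOOLS = [{
    'type': 'function',
    'function': {
        'name': 'power',
        'description': 'Raise x to the power y',
        'parameters': {
            'type': 'object',
            'properties': {
                'x': {'type': 'number', 'description': 'the base'},
                'y': {'type': 'number', 'description': 'the exponent'},
            },
            'required': ['x', 'y'],
        },
    },
}]

def ask(prompt: str, model: str = 'llama3.2') -> str:
    messages = [{'role': 'user', 'content': prompt}]
    response = ollama.chat(model=model, messages=messages, tools=TOOLS)

    # If the model asked for a tool, run it locally and feed the result back
    # so the model can phrase its final answer around a real value.
    for call in (response.message.tool_calls or []):
        if call.function.name == 'power':
            print(f"calling power({dict(call.function.arguments)})")
            result = power(**call.function.arguments)
            messages.append(response.message)
            messages.append({'role': 'tool', 'content': str(result)})
            response = ollama.chat(model=model, messages=messages)

    return response.message.content

if __name__ == '__main__':
    print(ask('what is 1.34 ^ 10?'))
```

The real script's flags and output will differ, but the flow is the same one the transcripts illustrate: the model proposes a tool call, the wrapper executes it, and the result goes back to the model.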

Similarly, if it has a tool that can query the environment, it can answer questions about it:

$ ./tool-test.py --model llama3.2 \
    --prompt 'Which operating system is this one where you are installed?'
calling get_operating_system({})
The operating system that I am installed on is Linux, specifically the version 6.2.0-39-generic-x86_64-with-glibc2.35.

The next step is to give the model context for the types of questions it needs to answer:

$ ./tool-test.py --model llama3.2 \
    --system "ollama is an LLM inference engine that stores files in ~/.ollama/models on a macOS system and in /usr/share/ollama on a linux system." \
    --prompt "Which operating system is this one where you are installed?"
calling get_operating_system({})
You are currently on a Linux operating system. I'm installed in the /usr/share/ollama location, which is a common installation path for OLLAMA on Linux systems.
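
The environment-query and system-prompt examples could be reproduced the same way. A short, hypothetical extension of the sketch above (only the system-message text is taken from the transcript; the rest is illustrative):

```python
# Hypothetical extension of the wrapper sketch above.
import platform

def get_operating_system() -> str:
    """Return a description of the host OS, e.g. 'Linux-6.2.0-39-generic-x86_64-with-glibc2.35'."""
    return platform.platform()

# A system message supplies Ollama-specific context to ground the answer.
SYSTEM = ('ollama is an LLM inference engine that stores files in ~/.ollama/models '
          'on a macOS system and in /usr/share/ollama on a linux system.')

messages = [
    {'role': 'system', 'content': SYSTEM},
    {'role': 'user', 'content': 'Which operating system is this one where you are installed?'},
]
```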

Giving a model access to a search engine seems like a good idea, but models have trouble generating good search queries and discriminating among the results:

$ ./tool-test.py --model llama3.2 \
    --prompt "In ollama, where are the local files installed in macOS, for llama3.2?"
calling search_web({'query': 'ollama local files installation macos llama3.2'})
The local files installed in macOS for Llama 3.2 can be found at the following locations:

*   The Llama model is typically stored in the `/Users/[your username]/.ollama/model` directory.
*   Configuration files, such as the `config.toml` file, are usually located in the `/Users/[your username]/.ollama/config` directory.

These directories can vary depending on the user's preferences and settings. It's recommended to check the documentation for Llama 3.2 or consult with a system administrator if you're unsure about the location of the files.

The next increase in model usefulness is to fine-tune it with information specific to its task, rather than just providing context as done here. This is important because context alone is not enough to prevent a model from hallucinating; it performs better when its answers are grounded in the information encoded in its weights.

The upshot is that the bare model (`ollama run`) is fine for asking simple questions, but anything more than that requires a wrapper (a Python script, open-webui, an agentic framework, etc.) that provides tools and interprets the results.


@superbonaci commented on GitHub (Sep 30, 2024):

Where the heck is tool-test.py and why do I need it? This is just not intuitive at all.


@rick-github commented on GitHub (Sep 30, 2024):

> The upshot is that the bare model (`ollama run`) is fine for asking simple questions, but anything more than that requires a wrapper (a Python script, open-webui, an agentic framework, etc.) that provides tools and interprets the results.

The same holds true for the commercial models. If you want facts out of a model, it needs to be given tools. How many humans can do 1.34^10 in their head? Is it unintuitive that we use calculators?
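
For reference, the exact value is easy to check directly, which makes it clear which of the answers below are grounded and which are hallucinated:

```python
# Ground truth for the comparisons below.
print(1.34 ** 10)  # 18.665859118610047
```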

groq:

what is 1.34 ^ 10?
A math question!

According to my calculations, 1.34 raised to the power of 10 is:

1.34 ^ 10 = 16,419.24

anthropic claude:

To calculate 1.34^10, let's approach this step-by-step:

1) First, let's start multiplying:
   1.34 * 1.34 = 1.7956

2) 1.7956 * 1.34 = 2.406104

3) 2.406104 * 1.34 = 3.2241793...

4) Continuing this process 10 times would be time-consuming and prone to rounding errors.

5) Instead, we can use a calculator or programming function for more precision.

Using a calculator, we get:

1.34^10 ≈ 19.4541591

This result is rounded to 7 decimal places. The exact value has more decimal places, but this approximation is sufficient for most purposes.

Would you like me to explain more about exponents or the properties that make this calculation work?

openai chatgpt 4o mini:

1.34 ^10  is approximately 37.65

openai chatgpt 4o (tool use):

```python
# Calculating 1.34 raised to the power of 10
result = 1.34 ** 10
result
```
Result
18.665859118610047
The result of 1.34^10 is approximately 18.67.

le chat mistral:

To calculate 1.34 raised to the power of 10, you can use a calculator or perform the calculation manually. Here's the result:

1.34 ^ 10 = 1.34 * 1.34 * 1.34 * 1.34 * 1.34 * 1.34 * 1.34 * 1.34 * 1.34 * 1.34

Using a calculator, the result is approximately:

1.34 ^ 10 ≈ 28.92546739

So, 1.34 raised to the power of 10 is approximately 28.93.

gemini:

1.34 to the power of 10 is  | Result |
| --- |
| 18.665859118610047 |

gemini also used a tool to get this result:

# This code calculates 1.34 raised to the power of 10

result = 1.34 ** 10

# Print the result
print(result)

@thierrygrimm commented on GitHub (Oct 1, 2024):

Hi @superbonaci,

From the discussion, it seems the confusion is more about how the ollama tool interacts with the model than about an actual bug. Here's a short primer:

Large language models (LLMs) like llama3.2 are great at generating text based on the training data they have seen. They produce new text by predicting the most likely next words or sentences given that training data and the prompts you give them. They're not personal assistants.

The command `ollama run llama3.2` is meant to run the model in isolation. Without additional tools or context (which you would need to provide), models like LLaMA cannot access your local files, operating system, or external resources.

> Where the heck is tool-test.py and why do I need it? This is just not intuitive at all.

The `tool-test.py` script mentioned isn't part of ollama and isn't publicly available. It was an example from @rick-github to illustrate how tools can extend the model's functionality beyond simple text generation, such as performing calculations.

This is why you might be finding it unhelpful — LLMs don’t inherently "know" where files are installed or how your system is configured.

This is not a bug, it's a feature.

Please close the issue.


@superbonaci commented on GitHub (Oct 1, 2024):

Just update the README or the HOWTO.


@rick-github commented on GitHub (Oct 1, 2024):

https://letmegooglethat.com/?q=how+do+i+use+an+llm


@thierrygrimm commented on GitHub (Oct 1, 2024):

The behaviour isn't specific to Ollama but to LLMs in general. This doesn't belong in the docs.

If you are looking for a more user-friendly platform/GUI, I suggest using OpenWebUI (https://github.com/open-webui/open-webui), which runs on top of Ollama.

The README could use some love though.


@pdevine commented on GitHub (Oct 1, 2024):

This is working as expected, so I'm going to go ahead and close. Sorry that this is frustrating for you, @superbonaci. As several others have commented, the model weights have no ability to "break out" from where they are running and learn about the environment that they're running in.
