[GH-ISSUE #1722] How to update a model in a timely manner? #26738

Closed
opened 2026-04-22 03:13:32 -05:00 by GiteaMirror · 5 comments

Originally created by @PriyaranjanMaratheDish on GitHub (Dec 26, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1722

So here is what I am trying to do:

1) Create a custom Ollama model by giving it data exported from Snowflake database tables. The data in the Snowflake tables is already in a golden format. I have additional follow-up questions on my requirement:

A) Instead of creating the model with `-f` (a file containing data exported from Snowflake), can I create the model directly from the results of a Snowflake query execution?
B) How can I update this model in a timely manner, so that my results stay consistent with newly generated data?

TIA.


@technovangelist commented on GitHub (Dec 27, 2023):

Hi @PriyaranjanMaratheDish, thanks for submitting this issue. It sounds like you want to use a model that has been fine-tuned on data you produced somewhere else. This is something we would like to support in the future, but for now you still have to use external tools to fine-tune a model. Alternatively, you may be interested in a technique referred to as RAG (retrieval-augmented generation). We have a few examples in our repo about using RAG that may be helpful for getting started. Let us know if this makes sense, and if you have any further questions we can answer them here or in the Discord at https://discord.gg/ollama.
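To make the RAG suggestion concrete: the idea is to keep the exported rows outside the model, retrieve the rows most relevant to a question at query time, and paste them into the prompt. The sketch below is not from the Ollama examples; it is a minimal, self-contained illustration using a toy term-frequency retriever (in practice you would use real embeddings, e.g. Ollama's embeddings endpoint), and the sample rows are hypothetical stand-ins for Snowflake exports.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Term-frequency vector over lowercased alphanumeric tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks):
    """Return the chunk most similar to the question."""
    q = tf_vector(question)
    return max(chunks, key=lambda c: cosine(q, tf_vector(c)))

# Hypothetical rows exported from Snowflake, one chunk per row.
chunks = [
    "Order 1001 shipped on 2023-12-20 to Pune.",
    "Customer ACME has an open balance of 4500 USD.",
    "Inventory for SKU-77 is 312 units in the Mumbai warehouse.",
]

question = "What is the ACME customer's open balance?"
context = retrieve(question, chunks)

# The retrieved row is injected into the prompt that would be sent
# to the model (e.g. via Ollama's /api/generate endpoint).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Because the data lives outside the model, refreshing answers after a new Snowflake export only means re-indexing the chunks, with no model rebuild needed, which directly addresses the "update in a timely manner" question.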


@PriyaranjanMaratheDish commented on GitHub (Dec 27, 2023):

Thanks! Will check the examples and use the Discord group for additional questions. Appreciate it.


@PriyaranjanMaratheDish commented on GitHub (Dec 27, 2023):

Not sure, but if I recreate it using the command `ollama create GPTName -f ./Source_Data_File`, will it work?


@technovangelist commented on GitHub (Jan 3, 2024):

That will work if `Source_Data_File` is a Modelfile, as described here: https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md
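For readers landing here: a Modelfile is a build recipe, not a raw data dump, so the exported rows would go inside it (for example, templated into the `SYSTEM` prompt), which is only practical for small datasets. A minimal sketch, with the base model and prompt as placeholders:

```
# Minimal Modelfile sketch -- base model and prompt text are placeholders
FROM llama2
PARAMETER temperature 0.2
SYSTEM """
You answer questions about our Snowflake exports.
Reference data:
<templated exported rows would go here>
"""
```

Saved under the name used above, `ollama create GPTName -f ./Source_Data_File` rebuilds the model from it; re-running that command after each export is the straightforward way to pick up new data with this approach.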


@pdevine commented on GitHub (Mar 12, 2024):

@PriyaranjanMaratheDish Sorry about the slow response. There actually is a document here (https://github.com/ollama/ollama/blob/main/docs/import.md) which explains how to convert/quantize models and pull them into Ollama. The doc @technovangelist mentioned is also useful for understanding what parameters can go into the Modelfile.

The `Source_Data_File` in your previous comment should be in the form of a `Modelfile`.

I'm going to go ahead and close out the issue, but feel free to keep commenting or reopen it if I didn't answer your question.
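Tying the thread's two questions together: one way to refresh the baked-in data on a schedule is to regenerate the Modelfile from fresh query results and then re-run `ollama create`. The sketch below is hypothetical, not from the Ollama docs; the rows are stand-ins for Snowflake connector results, and the final `ollama create` step is shown as a comment since it requires a running Ollama install.

```python
from pathlib import Path

def build_modelfile(rows, base="llama2"):
    """Render a Modelfile whose SYSTEM prompt embeds the latest rows."""
    facts = "\n".join(f"- {row}" for row in rows)
    return (
        f"FROM {base}\n"
        'SYSTEM """\n'
        "Answer using this reference data:\n"
        f"{facts}\n"
        '"""\n'
    )

# Stand-in for fresh Snowflake query results; in practice these would
# come from the connector (e.g. cursor.fetchall()).
rows = ["ACME open balance: 4500 USD", "SKU-77 stock: 312 units"]

Path("Modelfile").write_text(build_modelfile(rows))
# Then rebuild the model, e.g. from a scheduled job:
#   subprocess.run(["ollama", "create", "GPTName", "-f", "Modelfile"], check=True)
```

Run on a cron schedule or after each export, this keeps the model's embedded reference data current; for larger or faster-moving datasets, the RAG approach suggested earlier avoids rebuilding the model at all.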


Reference: github-starred/ollama#26738