[GH-ISSUE #5016] Integration with MLFlow #49686

Open
opened 2026-04-28 12:41:37 -05:00 by GiteaMirror · 4 comments

Originally created by @ulhaqi12 on GitHub (Jun 13, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/5016

Hey,

Currently, Ollama saves models locally in a cache. To maintain different versions of LLMs or fine-tuned ones, and to enable extensive monitoring, it would be a good idea to provide an integration with MLflow, where we could log all experiments for better monitoring of the system. I propose integrating Ollama with MLflow to enhance ML lifecycle management, combining Ollama's model serving with MLflow's experiment tracking and monitoring capabilities.

BR,
Ikram

GiteaMirror added the feature request label 2026-04-28 12:41:37 -05:00

@ic4-y commented on GitHub (Jun 13, 2024):

Not super familiar with MLFlow (yet), but looking at it, this might be a cool enhancement proposal.

What would this integration look like?


@rahulmisal27 commented on GitHub (Nov 22, 2024):

@ic4-y We can use the Ollama API to list local models and use them directly in MLflow for prompt engineering and logging. I think direct support for Ollama models is a great idea, as it would streamline local model testing and development.
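
There is no built-in bridge yet, but the manual version of this already works and is a useful sketch of the idea: list local models through Ollama's documented `/api/tags` endpoint, run a prompt through `/api/generate`, and record the result as an MLflow run. This assumes Ollama on `localhost:11434` and an MLflow tracking server already configured; the run name, prompt, and artifact file name are illustrative.

```python
import requests
import mlflow

OLLAMA_URL = "http://localhost:11434"

# List locally available models (Ollama's documented /api/tags endpoint).
models = requests.get(f"{OLLAMA_URL}/api/tags").json()["models"]
print([m["name"] for m in models])

# Run one prompt against the first model and log it as an MLflow run.
model_name = models[0]["name"]
prompt = "Why is the sky blue?"
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": model_name, "prompt": prompt, "stream": False},
).json()

with mlflow.start_run(run_name="ollama-prompt-test"):
    mlflow.log_param("model", model_name)
    mlflow.log_param("prompt", prompt)
    mlflow.log_text(resp["response"], "response.txt")
```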


@rajesh-chawla commented on GitHub (Feb 11, 2025):

Any thoughts on this?
We are looking at self-hosting models and would like to use MLflow for LLM evaluation / LLM-as-a-judge.
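
Since Ollama exposes an OpenAI-compatible endpoint at `/v1`, an LLM-as-a-judge loop can already be wired up by hand and its scores logged to MLflow, even without first-class support. A rough sketch: the model name, judge prompt, and 1-10 scale are placeholder choices, and a real implementation would need to handle judges that don't reply with a bare integer.

```python
import mlflow
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server.
judge = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

question = "What is the capital of France?"
candidate_answer = "Paris is the capital of France."

judge_prompt = (
    "Rate the following answer to the question on a scale of 1-10. "
    "Reply with a single integer only.\n"
    f"Question: {question}\nAnswer: {candidate_answer}"
)

completion = judge.chat.completions.create(
    model="llama3",  # any locally pulled model
    messages=[{"role": "user", "content": judge_prompt}],
)
score = int(completion.choices[0].message.content.strip())

with mlflow.start_run(run_name="llm-as-a-judge"):
    mlflow.log_param("judge_model", "llama3")
    mlflow.log_text(candidate_answer, "answer.txt")
    mlflow.log_metric("judge_score", score)
```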


@ulhaqi12 commented on GitHub (Feb 12, 2025):

A simple integration would be to pull models from MLflow and deploy them on Ollama. MLflow stores models either on file storage or on S3-like object storage (MinIO or GCP Storage).
We could give the model_id (the identifier in MLflow) in an Ollama Modelfile, and Ollama would pull the model from there (local storage or S3-like storage) and deploy it.
@ic4-y Can you point me to the module in this repo I should look at for this improvement?
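
The manual version of that flow can be scripted today, which helps scope what a built-in integration would need to do. A rough sketch, assuming the artifact stored in MLflow is a GGUF file; the model URI and file names are placeholders, not a supported Ollama feature:

```python
import subprocess
import mlflow

# Download the registered model's files from MLflow's artifact store
# (local disk, S3/MinIO, or GCS, depending on configuration).
local_dir = mlflow.artifacts.download_artifacts(
    artifact_uri="models:/my-finetuned-llm/1"
)

# Write a Modelfile pointing at the downloaded weights.
with open("Modelfile", "w") as f:
    f.write(f"FROM {local_dir}/model.gguf\n")

# Register the model with Ollama under a local name.
subprocess.run(
    ["ollama", "create", "my-finetuned-llm", "-f", "Modelfile"],
    check=True,
)
```

A native integration would essentially fold these steps into the Modelfile `FROM` directive, as proposed above.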
