About MLflow
MLflow is an open-source, unified platform for managing end-to-end ML and GenAI workflows. It streamlines the machine learning lifecycle, from experimentation to production deployment, and provides tools for tracking experiments, sharing models, and managing model deployment.
MLflow in Gathr
Connect your MLflow instance with Gathr by setting up a connection to the MLflow Tracking Server and gain access to your MLflow registered models directly within Gathr’s interface.
Here is an overview of how to use MLflow registered models through Gathr, with each step explained in detail later.
Stage 1: Set up MLflow connection
Gathr > Connections Page: The starting point for creating an MLflow connection.
Create MLflow Connection: Provide connection configuration parameters required to establish the connection.
Allow Model Deployment via Gathr?: Decision point during connection configuration: deploy models from within Gathr or outside of it.
Yes: Continue with Gathr’s deployment process.
No: Follow the manual deployment process.
For more information, see how to create a connection for MLflow →
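Before entering the connection parameters in Gathr, it can help to confirm that the MLflow Tracking Server is reachable. A minimal sketch, assuming a tracking URI such as `http://mlflow.example.com:5000` (a placeholder, not a real server); MLflow tracking servers expose a plain-text `/health` endpoint:

```python
import urllib.parse
import urllib.request

def mlflow_health_url(tracking_uri: str) -> str:
    """Build the URL of the MLflow tracking server's /health endpoint."""
    return urllib.parse.urljoin(tracking_uri.rstrip("/") + "/", "health")

def is_server_reachable(tracking_uri: str, timeout: float = 5.0) -> bool:
    """Return True if the MLflow tracking server answers its health check."""
    try:
        with urllib.request.urlopen(mlflow_health_url(tracking_uri), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Example URL only; substitute your own tracking server address.
print(mlflow_health_url("http://mlflow.example.com:5000"))
# http://mlflow.example.com:5000/health
```

If the health check fails, verify the host, port, and any authentication required before creating the connection in Gathr.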
Stage 2: Configure Model Deployment
Models Listing Page: Access and manage models for deployment within Gathr.
Deployment Configurations: Update the configurations required for model deployment on the cluster.
Deploy Model: Initiate deployment from Gathr and confirm successful completion.
For more information, see how to navigate to MLflow models listing page and provide deployment configurations →
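The deployment configuration pairs a registered model version with the resources it needs on the cluster. A hedged sketch of the kind of values to prepare; every field name below is illustrative, and Gathr's actual configuration form may use different names:

```python
# Hypothetical deployment configuration; the field names here are
# assumptions for illustration, not Gathr's actual parameter names.
deployment_config = {
    "model_name": "churn-classifier",   # registered model name in MLflow
    "model_version": "3",               # version selected for deployment
    "cluster": "ml-serving-cluster",    # target cluster (placeholder)
    "replicas": 2,                      # number of serving instances
    "cpu": "1",                         # CPU per replica
    "memory": "2Gi",                    # memory per replica
}

print(deployment_config["model_name"], deployment_config["model_version"])
```

Deciding these values up front (especially model version and replica count) makes the Deploy Model step a one-click confirmation.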
Stage 3: Map Endpoint to AI Gateway
Map Endpoint to AI Gateway URL & Model Route List: Update AI Gateway for LLM models.
Save Model Version Details: Store model details for future use.
For more information, see how to map deployed model endpoint with AI Gateway →
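Conceptually, mapping an endpoint to the AI Gateway adds an entry to the gateway's model route list that points a route name at the deployed model's serving URL. A minimal sketch of such an entry, assuming hypothetical field names and a placeholder endpoint URL:

```python
# Hypothetical route entry; Gathr's AI Gateway may use different field names.
def make_model_route(route_name: str, endpoint_url: str, model_version: str) -> dict:
    """Build a route entry mapping a deployed LLM endpoint into a gateway route list."""
    return {
        "name": route_name,            # route identifier exposed by the gateway
        "endpoint": endpoint_url,      # URL of the deployed model's serving endpoint
        "model_version": model_version # saved for traceability of the mapping
    }

route = make_model_route("llm-chat", "http://10.0.0.5:8000/invocations", "2")
print(route["name"], "->", route["endpoint"])
```

Saving the model version alongside the route keeps a record of which deployment the gateway URL currently serves.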
Stage 4: Copy Deployed Model Details
Copy Deployed Model Details: Copy the details of a particular model version for use in the Model Invoke processor.
For more information, see how to copy deployed model details →
Stage 5: Utilize MLflow Registered Models in Pipelines
Gathr Pipeline Page: Create and configure pipelines for model predictions.
Run Pipeline: Execute the pipeline.
Store Results using an Emitter: Save the results from the pipeline.
For more information, see how to use Model Invoke processor in an inference pipeline →
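Under the hood, invoking a deployed MLflow model means sending a JSON payload to its scoring endpoint. A minimal sketch of building that payload; MLflow 2.x scoring servers accept the `dataframe_split` input format at their `/invocations` endpoint, and the column names below are illustrative:

```python
import json

def build_invocation_payload(columns: list, rows: list) -> str:
    """Build the JSON body an MLflow scoring server accepts at /invocations."""
    # "dataframe_split" is one of the standard MLflow 2.x input formats.
    return json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

# Example feature columns and rows (placeholders for your model's schema).
payload = build_invocation_payload(["age", "tenure"], [[42, 7], [31, 2]])
print(payload)
```

An inference pipeline would POST this body with `Content-Type: application/json` to the deployed model's endpoint and emit the predictions it returns.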
If you have any feedback on Gathr documentation, please email us!