Machine Learning Models Monitoring in MLOps Context: Metrics and Tools
DOI:
https://doi.org/10.3991/ijim.v17i23.43479

Keywords:
Machine Learning, MLOps, Metrics, Monitoring tools, Continuous Monitoring

Abstract
In many machine learning projects, the lack of an effective monitoring system is a worrying issue. It leads to a series of challenges and risks that compromise the quality, reliability, and sustainability of models deployed in production. As machine learning gains importance in various fields, poorly implemented monitoring represents a major obstacle to realizing its full potential. This article presents a comprehensive guide to the machine learning model monitoring metrics and tools used in the MLOps context. Monitoring metrics is important for evaluating and validating the performance of a machine learning model, not only throughout the development phase but also during its deployment in the production environment, as it enables real-time data to be collected on various metrics. The purpose of monitoring in the MLOps context is to identify potential issues and make adjustments accordingly, guaranteeing consistent model quality and reliability. The article introduces and explains a wide range of metrics used for the continuous monitoring of ML systems at various stages of the MLOps lifecycle. Additionally, it presents a comparative analysis of available monitoring tools, enabling organizations to optimize their performance and ensure the seamless deployment of their machine learning applications. In essence, it underscores the critical importance of continuous monitoring and tailored metrics for ensuring the success and reliability of machine learning systems.
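As a concrete illustration of the kind of continuous-monitoring metric discussed in the article, the sketch below computes the Population Stability Index (PSI), a widely used data-drift indicator that compares a feature's production distribution against its training baseline. It is a minimal example written for this abstract, not code from the article; the function name, the 0.2 alert threshold, and the synthetic data are all assumptions chosen for demonstration.

import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live (production) feature distribution against its training baseline."""
    # Bin edges are derived from the baseline (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip production values into the baseline range so every observation is counted.
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against log(0) and division by zero with a small floor.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Usage: flag drift when PSI exceeds a commonly cited 0.2 threshold (assumed here).
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # feature values seen at training time
production = rng.normal(0.4, 1.2, 10_000)  # feature values observed in production
psi = population_stability_index(baseline, production)
print(f"PSI = {psi:.3f}", "-> drift detected" if psi > 0.2 else "-> stable")

In a deployed MLOps pipeline, a metric like this would typically be computed on a schedule and pushed to a monitoring tool so that threshold breaches trigger alerts or retraining.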
License
Copyright (c) 2023 Anas BODOR, Meriem Hnida, Najima Daoudi
This work is licensed under a Creative Commons Attribution 4.0 International License.