Test Case Backward Generation for Communicating Systems from Event Logs

Abstract

Data quality is a key element in building and optimizing good learning models. Despite many attempts to characterize data quality, a rigorous formalization and an efficient measure of quality from available observations are still needed. Indeed, without a clear understanding of the training and testing processes, it is hard to evaluate the intrinsic performance of a model. Moreover, tools for measuring data quality specific to machine learning are still lacking. In this paper, we introduce and explain a novel metric for measuring data quality. This metric is based on the correlated evolution between classification performance and the deterioration of the data. The proposed method has the major advantage of being model-independent. Furthermore, we provide an interpretation of each criterion and examples of assessment levels. We confirm the utility of the proposed metric through extensive numerical experiments and detail several illustrative cases with controlled and interpretable qualities.
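The core idea described above, correlating classification performance with a progressive deterioration of the data, can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's actual protocol: a synthetic two-class dataset, a simple nearest-centroid classifier, and Gaussian feature noise as the deterioration mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic two-class dataset (stand-in for real observations).
n = 400
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(1.0, 1.0, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def centroid_accuracy(X, y):
    """Fit a nearest-centroid classifier and return its accuracy on (X, y)."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)
    return float((pred == y).mean())

# Deteriorate the data by adding feature noise of increasing magnitude,
# and record how classification performance evolves.
sigmas = np.linspace(0.0, 3.0, 10)
accuracies = [centroid_accuracy(X + rng.normal(0.0, s, X.shape), y)
              for s in sigmas]

# The quality signal: how strongly performance co-varies with deterioration.
# A strongly negative correlation indicates that the data carry learnable
# structure that degrades as noise is injected.
corr = np.corrcoef(sigmas, accuracies)[0, 1]
print(corr)
```

Because the score is computed only from (deterioration level, performance) pairs, the same procedure works with any classifier plugged in place of the centroid rule, which is one way to read the model-independence claim in the abstract.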

Type
Publication
Proceedings of the 16th International Conference on Agents and Artificial Intelligence, ICAART 2024, Rome, Italy, February 24-26, 2024