Building Interpretable Learning Models With Evolutionary Algorithms

Abstract

The field of explainable artificial intelligence has gained substantial attention in response to concerns about trust and accountability in machine learning and artificial intelligence models. Much research focuses on explaining black-box models with post hoc explanations, but these are only approximations and can be incorrect. Intrinsically interpretable models can yield better explanations, but they typically suffer from poor predictive performance and thus model data with multiple underlying generative rules poorly.

We propose an evolutionary algorithm that uses a multi-objective approach to partition the decision boundary of classification problems. This algorithm facilitates training multiple interpretable prediction models on disjoint regions of the data space. We evaluated the multi-objective boundary partitioning algorithm on synthetic data with various interpretable models. The results show an increase in predictive performance while preserving the interpretability of the individual models.
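The idea in the abstract — evolving a partition of the data space and fitting a simple interpretable model on each region — can be illustrated with a heavily simplified sketch. Everything below is an illustrative assumption, not the thesis implementation: the data are 1-D, each individual is a set of split points, the per-region "interpretable model" is a majority-class rule, and the multi-objective trade-off (accuracy vs. number of partitions) is collapsed into a weighted-sum fitness rather than true Pareto selection.

```python
# Illustrative sketch only: a tiny (mu + lambda)-style evolutionary search
# over 1-D partition boundaries, with a majority-class rule per region.
import random

random.seed(0)

# Synthetic data with two underlying generative rules:
# class 1 on [0.3, 0.7), class 0 elsewhere.
X = [i / 100 for i in range(100)]
y = [1 if 0.3 <= x < 0.7 else 0 for x in X]

def segments(splits):
    """Turn a list of split points into half-open intervals covering [0, 1)."""
    bounds = [0.0] + sorted(splits) + [1.0]
    return list(zip(bounds[:-1], bounds[1:]))

def fitness(splits, alpha=0.01):
    """Accuracy of per-segment majority rules, minus a complexity penalty
    (a weighted-sum stand-in for the multi-objective selection)."""
    correct = 0
    for lo, hi in segments(splits):
        labels = [yi for xi, yi in zip(X, y) if lo <= xi < hi]
        if labels:
            majority = max(set(labels), key=labels.count)
            correct += sum(1 for lbl in labels if lbl == majority)
    return correct / len(X) - alpha * len(splits)

def mutate(splits):
    """Jitter existing splits and occasionally add or remove one."""
    splits = [min(1.0, max(0.0, s + random.gauss(0, 0.05))) for s in splits]
    if random.random() < 0.2:
        splits = splits + [random.random()]
    if splits and random.random() < 0.2:
        splits.pop(random.randrange(len(splits)))
    return splits

# Evolve: keep the 20 fittest individuals each generation.
pop = [[random.random()] for _ in range(20)]
for _ in range(100):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20]

best = pop[0]
print("best fitness:", round(fitness(best), 3), "| partitions:", len(best) + 1)
```

In this toy setting the optimum places splits near 0.3 and 0.7, recovering the two generative rules; each region's model remains trivially interpretable. The thesis algorithm replaces the majority rule with richer interpretable learners and the weighted sum with genuine multi-objective selection.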

Keywords

XAI, Evolutionary Computing, Linear Genetic Programming, Interpretable Models

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-ShareAlike 4.0 International.