Evaluating Explanation Correctness in Legal Decision Making

Publisher

Canadian Artificial Intelligence Association

Abstract

As machine learning models are deployed across an ever-wider range of applications, concerns are rising about their trustworthiness. Explainable models have become an important topic of interest for high-stakes decision making, but their evaluation in the legal domain remains seriously understudied; existing work lacks thorough feedback from subject matter experts to inform its evaluation. Our work aims to quantify the faithfulness and plausibility of explainable AI methods on several legal tasks, using computational evaluation and user studies that directly involve lawyers. The computational evaluation measures faithfulness: how closely the explanation matches the model's true reasoning. The user studies measure plausibility: how reasonable the explanation appears to a subject matter expert. The overall goal of this evaluation is to provide a more accurate indication of whether machine learning methods can adequately satisfy legal requirements.

Citation

Luo, Chu Fei et al. "Evaluating Explanation Correctness in Legal Decision Making" (2022). 35th Canadian Conference on Artificial Intelligence.

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution 4.0 International.