Self reported incompetence

Authors

Sigurdarson, Tindur

Abstract

Today, machine learning plays an important role in many industries. For trust to be established between machine learning predictors and their users, it is necessary to increase transparency and make it easy for anyone to determine a predictor's potential issues. In this thesis, I propose that predictors could themselves notify users of their ineptitudes, and I closely examine three possible avenues for accomplishing this: accounting for the effects of uncertainty, determining bias, and detecting potentially impactful predictions. I provide concrete definitions for these concepts, discuss their potential effects on predictive performance, and describe my experiments showing the effectiveness of their use. I find that uncertainty can benefit predictors, but its effect depends on its distribution and on which attributes are uncertain. Different sources of bias can be detected without difficulty by generating learning curves and confusion matrices and by comparing Shapley values for specific attribute values. Finally, my results indicate that detecting potentially impactful or interesting data records can be made straightforward by calculating surprise scores, a novel method that I propose.
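
The bias check mentioned in the abstract, comparing Shapley values for specific attribute values, can be illustrated with a minimal sketch. This is not code from the thesis: the toy dataset, the column names (feature_a, feature_b, group), and the choice of a logistic regression model explained with the shap library are all assumptions made purely for illustration.

# Minimal, hypothetical sketch of comparing Shapley values across values of
# one attribute ("group"); not the thesis's own code or data.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression

# Toy data standing in for a real tabular dataset; "group" is the attribute
# whose values we want to compare.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "feature_a": rng.normal(size=500),
    "feature_b": rng.normal(size=500),
    "group": rng.integers(0, 2, size=500),
})
y = ((X["feature_a"] + 0.5 * X["group"] + rng.normal(scale=0.3, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# shap dispatches a linear explainer for a linear model; sv.values holds one
# row per record and one column per feature.
explainer = shap.Explainer(model, X)
sv = explainer(X)

# Mean absolute Shapley value per feature, split by attribute value. A large
# gap between the two printed rows suggests the model relies on a feature very
# differently across the groups, which may be a source of bias worth inspecting.
for g in sorted(X["group"].unique()):
    mask = (X["group"] == g).to_numpy()
    mean_abs = np.abs(sv.values[mask]).mean(axis=0)
    print(f"group={g}:", dict(zip(X.columns, np.round(mean_abs, 3))))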

Keywords

Machine learning, Transparency, Explainable AI, Uncertainty, Bias, Impact

Creative Commons license

Except where otherwise noted, this item's license is described as CC0 1.0 Universal