Knowing the class distinguishing abilities of the features, to build better decision-making models
Journal
Americas Conference on Information Systems (AMCIS) Proceedings 2024
Date Issued
2024
Author(s)
Sadhukhan, Payel
Sengupta, Kausik
Palit, Sarbani
Abstract
Explainability allows end-users to form a transparent and humane reckoning of an ML scheme's capability and utility. An ML model's modus operandi can be explained through the features on which it was trained. To the best of our knowledge, no prior work explains feature importance on the basis of class-distinguishing ability. In a given dataset, a feature is not equally good at distinguishing between all of the data points' possible categorizations (or classes). This work explains the features through their class- (or category-) distinguishing capabilities. We estimate each feature's class-distinguishing capability (score) for pair-wise class combinations, utilize these scores in a missing-feature context, and propose a novel decision-making protocol. A key novelty of this work lies in refusing to render a decision when the missing feature of the test point has a high class-distinguishing potential for the likely classes. Two real-world datasets are used to empirically validate the explainability of our scheme.
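The abstention protocol sketched in the abstract can be illustrated in code. The Python sketch below is only an illustrative interpretation under stated assumptions, not the paper's implementation: it assumes a scikit-learn-style classifier exposing predict_proba and classes_, uses a simple standardized mean difference as a stand-in for the pairwise class-distinguishing score, and the names pairwise_class_scores, decide_or_abstain, and the threshold parameter are hypothetical.

```python
import numpy as np

def pairwise_class_scores(X, y):
    """For every pair of classes, assign each feature a class-distinguishing
    score. A standardized mean difference is used here as a simple stand-in
    measure; the paper's exact estimator may differ."""
    classes = np.unique(y)
    scores = {}  # (class_a, class_b) -> per-feature score array
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            Xa, Xb = X[y == a], X[y == b]
            diff = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0))
            spread = Xa.std(axis=0) + Xb.std(axis=0) + 1e-9
            scores[(a, b)] = diff / spread
    return scores

def decide_or_abstain(model, x, missing_idx, scores, threshold=1.0):
    """Decision protocol sketch: impute the missing feature, identify the two
    most likely classes, and refuse to decide (return None) if the missing
    feature is highly class-distinguishing for that pair."""
    x_filled = np.where(np.isnan(x), 0.0, x)  # placeholder imputation only
    proba = model.predict_proba(x_filled.reshape(1, -1))[0]
    top2 = np.argsort(proba)[-2:]             # indices of the two likeliest classes
    a, b = sorted(model.classes_[top2])       # order the pair to match the score keys
    if scores[(a, b)][missing_idx] > threshold:
        return None  # abstain: the absent feature matters too much for this pair
    return model.classes_[np.argmax(proba)]
```

For example, with a classifier clf fitted on (X, y), one would compute scores = pairwise_class_scores(X, y) once, and then decide_or_abstain(clf, x_test, missing_idx, scores) returns either a predicted class or None (abstention) for a test point whose feature missing_idx is unavailable.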