Publications
Using Shapley Values Based Explanations for Computer Vision Models
Aditya Lahiri, Kamran Alipour, Ehsan Adeli, Babak Salimi
ICML 2022, RDMDE Workshop
Shapley values are great, but they come with two problems: runtime that grows exponentially in the number of players, and the awkward question of how to evaluate the value function on partial coalitions, where only some features are present. We tackle both of these for image models.
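To see where the exponential cost comes from, here is a minimal sketch of exact Shapley value computation on a toy game (the game and weights are illustrative, not from the paper): each player's value sums weighted marginal contributions over all coalitions of the remaining players, so the total work is on the order of 2^n value-function calls.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values: for each player, sum weighted marginal
    contributions over all coalitions of the other n-1 players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                coalition = frozenset(S)
                total += w * (value_fn(coalition | {p}) - value_fn(coalition))
        phi[p] = total
    return phi

# Toy additive game: a coalition is worth the sum of its members' weights,
# so each player's Shapley value is exactly its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[q] for q in S)
print(shapley_values(list(weights), v))
```

With even ~20 pixel-level players this enumeration is already millions of calls, which is why the paper restricts players to a handful of interpretable attributes.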
First, we define the players to be interpretable feature values: annotated attributes such as hair color or mustache for human faces. This keeps the number of players small and also makes each individual player (i.e., feature) interpretable.
Second, we use generative models to produce counterfactual images reflecting user-specified increases or decreases in these attributes by "walking in latent space"! Players outside a partial coalition simply keep their original attribute values.
This lets us render an image for any possible coalition and obtain the score for the desired label by passing that image through the image model being explained! We present the user with the pair of original and counterfactual images, along with Shapley-value-based attributions that account for the difference between the two in terms of interpretable features. Neat? (arXiv)
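The pipeline above can be sketched end to end. Everything here is a hypothetical stand-in, not the paper's actual API: `generate` plays the role of the latent-space walk (it "renders" an image from attribute values), `model` is a stub classifier returning the desired label's score, and the attribute names are made up. The key idea shown is the value function: players in a coalition take their counterfactual attribute value, players outside keep their original value.

```python
from itertools import combinations
from math import factorial

def generate(attrs):
    # Stand-in for the generative model's latent walk: here the "image"
    # is just the attribute dict itself.
    return attrs

def model(image):
    # Stub image classifier: a linear score for the desired label.
    return 2.0 * image["mustache"] + 1.0 * image["hair_color"]

original = {"mustache": 0.0, "hair_color": 0.2}        # input image's attributes
counterfactual = {"mustache": 1.0, "hair_color": 0.8}  # user-requested edits

def value(coalition):
    """Coalition members take their counterfactual attribute value;
    everyone else keeps the original value."""
    attrs = {a: (counterfactual[a] if a in coalition else original[a])
             for a in original}
    return model(generate(attrs))

def shapley(players, v):
    """Exact Shapley values over the (few) interpretable players."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {p}) - v(set(S)))
        phi[p] = total
    return phi

attributions = shapley(list(original), value)
print(attributions)
```

By efficiency of Shapley values, the attributions sum exactly to the change in the model's score between the original and counterfactual images, which is what makes the per-feature numbers an accounting of the difference between the two.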
Explainable AI: Foundations, Applications, Opportunities for Data Management Research
Romila Pradhan, Aditya Lahiri, Sainyam Galhotra, Babak Salimi
Tutorial accepted at ACM SIGMOD '22 and ICDE '22
Website: https://explainable-ai-tutorial.github.io
Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces
Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, Michael Pazzani