Papers

Possibilities

Review and contextualization of several explainability methods; development of the SHAP library (a minimal usage sketch follows this list of notes)
Attention flows are Shapley Values when the players are restricted to those from the same layer and the payoff is the total flow
Estimation of Shapley Values for each data point in the training data
A review of explanation techniques that are based on the removal of features
Survey on how to enhance explainability in neural NLG: use of intermediate structures (like modules) for each sub-task, and use of latent variables (the latter not really clear to me)
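
A minimal usage sketch for the SHAP library mentioned in the first note above; the model and data here are made-up placeholders, not anything from the paper:

```python
import numpy as np
import shap

# Hypothetical black-box model: any function from a 2D array of feature
# rows to a 1D array of predictions works with KernelExplainer.
def model_predict(X):
    return X[:, 0] + 2.0 * X[:, 1]

background = np.zeros((1, 3))  # baseline standing in for "feature removed"

# KernelExplainer is the model-agnostic approximation of Shapley values.
explainer = shap.KernelExplainer(model_predict, background)

x = np.array([[1.0, 2.0, 3.0]])
print(explainer.shap_values(x))  # one Shapley value per feature
```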

Shapley

NLG Explainability

LRP

NLG as a Tool for Explainability

NLG can be used as an interface that makes explanations easily digestible to the user. Two generation approaches: template-based (SimpleNLG?) and E2E generation (image captioning could be an example). Also calls for new evaluation metrics (instead of BLEU, METEOR and ROUGE) that allow a fair comparison between systems
Use of NLG (in particular SimpleNLG) to explain the results obtained from LIME; a toy sketch of the template idea follows this block of notes. A questionnaire was administered online to compare the understandability of the NLG output against a simple table.
Explanations are computed as differences between the model's output on the input and its output on a reference input, where the reference is chosen based on the problem at hand. Lundberg (2017) used it as a basis to develop DeepSHAP (see the second sketch after these notes)
Demonstration, with examples, of how adversarial attacks can trick LIME and SHAP into hiding biases in key features such as gender and race.
The explainability field needs to take input from other disciplines, such as philosophy, into consideration. Particular emphasis on the need to detect triggers in order to give explanations to the user automatically
A comparison between pipeline and E2E architectures for data-to-text NLG. In summary, pipeline > E2E, in particular on unseen data. The pipeline should also be easier to explain given its modular structure
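
A toy sketch of the template-based idea from the SimpleNLG/LIME note above. SimpleNLG itself is a Java realizer; this plain-Python stand-in only illustrates the template idea, and the feature weights are invented:

```python
# Turn LIME-style (feature, weight) pairs into one explanatory sentence.
def verbalize(prediction, feature_weights, top_k=2):
    ranked = sorted(feature_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for feature, weight in ranked[:top_k]:
        direction = "supports" if weight > 0 else "works against"
        parts.append(f"'{feature}' {direction} it (weight {weight:+.2f})")
    return f"The model predicted '{prediction}' mainly because " + " and ".join(parts) + "."

# Invented weights, as LIME might return them for a spam classifier.
print(verbalize("spam", {"free": 0.42, "meeting": -0.13, "winner": 0.35}))
```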
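
And a sketch of the reference-difference idea behind DeepLIFT/DeepSHAP from the note above. The toy network and zero reference are assumptions; the last two lines use shap's DeepExplainer, whose exact behavior may vary across versions:

```python
import torch
import shap

# Toy differentiable model; an assumption, not a model from the notes.
net = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))

x = torch.tensor([[1.0, 2.0, 3.0]])  # input to explain
reference = torch.zeros_like(x)       # problem-dependent reference input

# DeepLIFT decomposes this difference-from-reference across the inputs;
# DeepSHAP averages the decomposition over a distribution of references.
print(net(x) - net(reference))

# shap's DeepSHAP implementation, with the single reference as background.
explainer = shap.DeepExplainer(net, reference)
print(explainer.shap_values(x))
```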

Application of Shapley values in NLG (a PyTorch example exists)
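
The PyTorch example referenced above is not reproduced here; this is an independent, hypothetical sketch of Monte Carlo Shapley estimation over input tokens, with a toy score function standing in for a real NLG model:

```python
import random

def shapley_tokens(score_fn, tokens, mask_token="[MASK]", n_samples=200):
    """Monte Carlo Shapley estimate of each token's contribution to score_fn.

    score_fn takes a list of tokens and returns a float, e.g. the score a
    PyTorch NLG model assigns to its output; tokens are "removed" by
    replacing them with mask_token.
    """
    n = len(tokens)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = random.sample(range(n), n)   # random permutation of the players
        present = [mask_token] * n
        prev = score_fn(present)             # empty coalition: everything masked
        for i in order:
            present[i] = tokens[i]           # add token i to the coalition
            score = score_fn(present)
            phi[i] += (score - prev) / n_samples  # its marginal contribution
            prev = score
    return phi

# Toy stand-in for a real model score; each unmasked token adds exactly 1,
# so every estimated Shapley value should come out ~1.0.
def toy_score(tokens):
    return float(sum(t != "[MASK]" for t in tokens))

print(shapley_tokens(toy_score, ["the", "cat", "sat"], n_samples=50))
```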

A more in-depth analysis of the attention weights with respect to explanations

Computational bottleneck (FastSHAP)

Agent to answer explainability questions

LRP for NLG

Interface for end-users 

Work on visualization

Focus on general explanation methods and their specific adaptations for text classification, more specifically text classification with transformer models such as BERT. Python library: TransSHAP
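
For comparison with TransSHAP, the shap package's own text support can be driven from a Hugging Face pipeline roughly like this (the pipeline's default sentiment model is used; nothing here is from the TransSHAP paper):

```python
import shap
import transformers

# Default sentiment pipeline; any text-classification pipeline that
# returns scores for all classes works the same way.
classifier = transformers.pipeline("sentiment-analysis", return_all_scores=True)

# shap.Explainer selects a text masker automatically for HF pipelines.
explainer = shap.Explainer(classifier)
shap_values = explainer(["The movie was surprisingly good."])
print(shap_values)  # per-token contributions to each class score
```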

Extend DeepLIFT to RNNs?
