Apply Shapley values to NLG (a PyTorch example exists)
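The idea above can be sketched without any framework: a permutation-based Monte Carlo estimate of per-token Shapley values, where `score_fn` is a hypothetical stand-in for the real model's scoring of a token coalition (here a toy additive score, not the actual NLG model).

```python
import random

def shapley_token_attributions(tokens, score_fn, n_samples=200, seed=0):
    """Monte Carlo estimate of Shapley values for each token.

    score_fn maps a list of tokens (a coalition, in original order) to a
    scalar; excluded tokens are simply dropped. This interface is an
    assumption standing in for a real NLG model score.
    """
    rng = random.Random(seed)
    n = len(tokens)
    phi = [0.0] * n
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        included = []
        prev = score_fn([tokens[i] for i in sorted(included)])
        for i in perm:
            included.append(i)
            cur = score_fn([tokens[j] for j in sorted(included)])
            phi[i] += cur - prev  # marginal contribution of token i
            prev = cur
    return [p / n_samples for p in phi]

# Toy additive score: number of "positive" words in the coalition.
POSITIVE = {"good", "great"}
score = lambda toks: sum(t in POSITIVE for t in toks)

print(shapley_token_attributions(["the", "movie", "was", "great"], score))
# → [0.0, 0.0, 0.0, 1.0]  (the score is additive, so the estimate is exact)
```

With a real model, `score_fn` would re-run generation on the masked input, which is exactly the cost the FastSHAP item below targets.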
A more in-depth analysis of attention weights as explanations
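For reference, the quantity under discussion is just the softmax of scaled query-key scores; a minimal NumPy sketch (the matrices here are illustrative, not from any specific model):

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax over keys per query."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

Q = np.array([[1.0, 0.0], [0.0, 1.0]])            # 2 queries
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 keys
W = attention_weights(Q, K)  # each row is a distribution over the 3 keys
```

Analyzing these rows as explanations is contested (weights need not track feature importance), which is what makes this item worth a closer look.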
Address the computational bottleneck of exact Shapley estimation (e.g. FastSHAP)
An agent to answer explainability questions
Layer-wise Relevance Propagation (LRP) for NLG
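A minimal sketch of the LRP epsilon rule for a single linear layer, to pin down what would have to be propagated through an NLG model (the weights and relevances below are illustrative):

```python
import numpy as np

def lrp_linear(x, W, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP backward step through a linear layer y = W @ x + b.

    Redistributes the relevance of the outputs onto the inputs in
    proportion to each input's contribution to the pre-activations.
    """
    z = W @ x + b                         # forward pre-activations
    s = relevance_out / (z + eps * np.sign(z))  # stabilized ratio
    return x * (W.T @ s)                  # relevance of each input

x = np.array([1.0, 2.0])
W = np.array([[0.5, -0.2], [0.3, 0.8]])
R_in = lrp_linear(x, W, np.zeros(2), np.array([1.0, 2.0]))
```

With zero bias and small `eps`, the rule is (approximately) conservative: the input relevances sum to the output relevances, which is the property that makes LRP attributions interpretable layer by layer.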
Interface for end-users
Work on visualization
Can DeepLIFT be extended to RNNs?
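The core of that question is how DeepLIFT's rescale rule behaves through the recurrent nonlinearities; a minimal sketch of the rule for a ReLU, with the reference input `x_ref` chosen by the user (all values here are illustrative):

```python
import numpy as np

def deeplift_relu_multipliers(x, x_ref, tol=1e-9):
    """DeepLIFT rescale-rule multipliers for an elementwise ReLU.

    The multiplier is delta_out / delta_in relative to the reference;
    where the input equals the reference it falls back to the gradient.
    """
    dx = x - x_ref
    dy = np.maximum(x, 0.0) - np.maximum(x_ref, 0.0)
    grad = (x > 0).astype(float)              # gradient fallback when dx ~ 0
    safe_dx = np.where(np.abs(dx) > tol, dx, 1.0)
    return np.where(np.abs(dx) > tol, dy / safe_dx, grad)

x = np.array([2.0, -1.0, 0.5])
x_ref = np.array([-1.0, -2.0, 0.5])
contributions = deeplift_relu_multipliers(x, x_ref) * (x - x_ref)
# contributions sum to the change in ReLU output vs. the reference
```

For an RNN the open issue is that these multipliers would need to be chained through every timestep, and the choice of reference sequence is not obvious.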