Explanation methods provide explanations for the predictions and decisions of black-box ML models. Currently, these techniques are mainly used to justify a model's predictions and to describe its behavior in a human-understandable manner. While this is valuable for interpretability, explanation methods can also serve as debugging tools. Devising such tools is essential for developing robust and accurate ML models: they give a better understanding of both the data and the model, and help the developer take appropriate actions to improve overall performance. Specifically, they can explain mispredicted samples, identify mislabeled data, expose potential biases, discover new knowledge about the domain, and yield helpful recommendations concerning the data and the model.
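As a minimal sketch of this debugging workflow, the snippet below trains a model, collects its mispredicted test samples, and applies a standard explanation technique (permutation importance from scikit-learn) to see which features the model relies on; the dataset, model, and explanation method are illustrative choices, not ones prescribed by this project.

```python
# Illustrative sketch: explanation-driven debugging with scikit-learn.
# Dataset, model, and explanation method are example choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Step 1 of debugging: locate the mispredicted samples to investigate.
wrong = np.flatnonzero(pred != y_te)
print(f"{len(wrong)} mispredicted test samples")

# Step 2: explain the model -- which features drive its decisions?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]
print("Most influential feature indices:", top)
```

Inspecting the most influential features alongside the mispredicted samples can reveal, for example, that the model leans on a feature that is noisy or mislabeled for exactly those samples.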
In this project, different perspectives on data, various data preprocessing techniques, simple and complex ML models, and a range of explanation techniques are studied. State-of-the-art explanation-based methods for debugging ML models are then identified, and their application to a real-world use case is explored. This creates an interactive development process between the user, the use case, and the model. The end goal is to improve the accuracy and robustness of the ML model on the studied use case through the lens of explainability. In summary, the main goal of this project is to apply existing explanation-based ML debugging tools to a real-world use case in order to investigate their potential advantages and challenges. Afterwards, solutions are proposed for remedying the challenges identified in existing works.