Debugging machine learning models through the lens of explainability

Research on methods for debugging and refining Machine Learning (ML) models is still in its infancy. We believe that employing tailored tools in the development process can help developers create more trustworthy and reliable models. This is particularly essential for black-box models such as deep neural networks and random forests, whose opaque decision-making and complex structure prevent straightforward investigation. There is therefore a need for techniques that help developers understand a model's behavior and explain its anomalies.

Explanation methods provide explanations for the predictions of black-box ML models. Currently, their main use is to justify a model's predictions and to describe its behavior in a human-understandable manner. While this is valuable for interpretability, explanation methods also have further applications as debugging tools. Devising such tools is essential for developing robust and accurate ML models: they provide a better understanding of both the data and the model, and help the developer take appropriate actions to improve overall performance. Specifically, they can explain mispredicted samples, identify mislabeled data, expose potential biases, uncover new knowledge about the domain, and make helpful recommendations concerning the data and the model.
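
To make the debugging use concrete, below is a minimal sketch of one such workflow using scikit-learn and the SHAP library: a random forest is trained, its mispredicted test samples are collected, and each misprediction is attributed to the input features that drove it. The dataset, model settings, and the choice of SHAP are illustrative assumptions, not prescribed by the project.

    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative data and model: any tabular classification task would do.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Collect the test samples the model gets wrong (may be empty for an easy task).
    pred = model.predict(X_test)
    wrong = np.where(pred != y_test.to_numpy())[0]

    # Attribute predictions to input features with SHAP values.
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(X_test)
    if isinstance(sv, list):   # older SHAP releases: one array per class
        sv = sv[1]
    elif sv.ndim == 3:         # newer releases: (samples, features, classes)
        sv = sv[..., 1]

    # Inspect the first few mispredictions: which features pushed the model astray?
    for i in wrong[:3]:
        top = np.argsort(-np.abs(sv[i]))[:5]
        print(f"sample {i}: predicted {pred[i]}, true label {y_test.iloc[i]}")
        for j in top:
            print(f"  {X.columns[j]:>25s}  shap={sv[i, j]:+.4f}  value={X_test.iloc[i, j]:.3f}")

Large attributions on implausible feature values often point to mislabeled or out-of-distribution samples, which is exactly the kind of signal a debugging tool should surface for the developer.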

During this project, different perspectives on the data, various data preprocessing techniques, simple and complex ML models, and a range of explanation techniques are studied. State-of-the-art explanation-based methods for debugging ML models are then identified, and their application to a real-world use case is explored. This creates an interactive development process between the user, the use case, and the model. The end goal is to improve the accuracy and robustness of the ML model on the studied use case through the lens of explainability. In summary, the main goal of this project is to apply existing explanation-based ML debugging tools to a real-world use case in order to investigate their potential advantages and challenges, and subsequently to propose solutions that remedy the challenges identified in existing work.

Published Dec. 3, 2020 09:24 - Last modified Dec. 3, 2020 09:25

Supervisor(s)

Scope (ECTS credits)

60