Self-explainable AI for the Internet of Things

A challenge in contemporary AI systems is that they are continually evolving black boxes, which runs counter to the growing need to explain the behaviour of deep neural networks to humans. The European Union's 2016 General Data Protection Regulation requires that companies be able to provide consumers with explanations of decisions made by artificial intelligence systems.

Humans trust each other because they can explain themselves. Can AI systems do the same? This is the question self-explainable AI aims to address. Self-explaining neural networks were recently proposed by Elton [11] as an alternative to interpretable AI methods (e.g., LIME, SHAP, saliency maps). A self-explainable AI yields two outputs: a decision and its explanation. The idea is not new; it was initially pursued in expert systems research. However, self-explanation for deep neural networks is an under-explored area of research. One of the main challenges is to determine how to encode the explanation as a neural network output alongside the decision. A subsequent challenge is to obtain encoded explanations as training data, either automatically from human-understandable features in the input domain (e.g., visual features that humans recognize) or as explicit explanations specified by humans in terms of the degree to which certain factors are present. For instance, LaLonde et al. train deep neural networks for lung nodule classification that offer "explanations". They use a dataset that not only labels severity (cancerous or non-cancerous) but also quantifies, on a scale of 1-5, six visual attributes deemed relevant for diagnosis (subtlety, sphericity, margin, lobulation, spiculation, and texture). Furthermore, it is necessary to verify whether decisions/predictions and their explanations are mutually consistent.
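To illustrate the dual-output idea, the sketch below shows a minimal, hypothetical two-headed network in PyTorch (not part of the Erdre pipeline): one head produces the decision, the other regresses human-annotated attribute scores on a 1-5 scale, and both are trained with a joint loss. All names and data here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfExplainingNet(nn.Module):
    """Toy two-headed network: one head outputs the decision, the other
    outputs scores for human-understandable attributes (the 'explanation')."""

    def __init__(self, n_features: int, n_classes: int, n_attributes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.decision_head = nn.Linear(32, n_classes)        # prediction
        self.explanation_head = nn.Linear(32, n_attributes)  # attribute scores

    def forward(self, x):
        h = self.backbone(x)
        return self.decision_head(h), self.explanation_head(h)

# Hypothetical training step with a joint loss: classification loss on the
# decision plus a regression loss on human-annotated attribute scores (1-5).
model = SelfExplainingNet(n_features=16, n_classes=2, n_attributes=6)
x = torch.randn(8, 16)            # synthetic inputs
y = torch.randint(0, 2, (8,))     # synthetic severity labels
attrs = 1 + 4 * torch.rand(8, 6)  # synthetic attribute annotations on a 1-5 scale
logits, attr_pred = model(x)
loss = nn.functional.cross_entropy(logits, y) \
       + nn.functional.mse_loss(attr_pred, attrs)
loss.backward()
```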

In this master's thesis, the student will incorporate self-explainability into our machine learning pipeline Erdre (https://github.com/SINTEF-9012), which synthesizes virtual sensors for the Internet of Things. Explanations will include the features that support a prediction, information about data quality, and an uncertainty estimate. The student will perform a mutual information analysis between decisions/predictions and explanations to demonstrate their consistency (see the sketch after this paragraph). The pipeline will be tested on case studies such as manufacturing, occupational health, and the Internet of Underwater Things, stemming from various European Union and Research Council of Norway projects.
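A minimal sketch of such a consistency check, assuming the pipeline's decisions and explanations are available as NumPy arrays (the data below is synthetic and purely illustrative), could estimate mutual information with scikit-learn:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

# Synthetic stand-ins for the pipeline's outputs: one decision per sample
# and an accompanying explanation vector (e.g., attribute or feature scores).
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=500)   # predicted classes
explanations = rng.random((500, 6))        # explanation scores per sample

# Mutual information between each explanation dimension and the decision;
# consistently high values suggest the explanation carries information
# about the prediction it accompanies.
mi_per_dimension = mutual_info_classif(explanations, decisions, random_state=0)
print(mi_per_dimension.round(3))

# If explanations are discretised (e.g., the dominant attribute per sample),
# mutual information between the two labelings can be reported directly.
dominant = explanations.argmax(axis=1)
print(mutual_info_score(decisions, dominant))
```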

Keywords: Explainable AI, Machine Learning
Published 19 Sep. 2022 14:29 - Last modified 19 Sep. 2022 14:29

Supervisor(s)

Scope (credits)

60