Disputation: Peyman Rasouli

Doctoral candidate Peyman Rasouli at the Department of Informatics, Faculty of Mathematics and Natural Sciences, is defending the thesis "Local Explainability of Tabular Machine Learning Models and its Impact on Model Reliability" for the degree of Philosophiae Doctor.

    Photo of the candidate (Photo: UiO)

    The PhD defence will be partially digital, held in Kristen Nygaards sal (5370), Ole-Johan Dahls hus, and streamed directly via Zoom. The host of the session will handle the technical aspects, while the chair of the defence will moderate the disputation.

    Ex auditorio questions: the chair of the defence will invite the audience attending in Kristen Nygaards sal to ask ex auditorio questions.

    Trial lecture

    "Biases in large language models: where do they come from, how to measure, and how to avoid them?"

    Time and place: June 16, 2023, 11:15 AM, Kristen Nygaards sal (5370), Ole-Johan Dahls hus / Zoom

    Main research findings

    • ML models are widely used in real-world applications, but their increasing complexity has turned them into opaque black boxes, hindering their safe adoption in critical areas. This thesis explores Interpretable Machine Learning (IML), aiming to make ML models explainable and transparent. It focuses on local explanations of individual model decisions, specifically targeting tabular models and model-agnostic explanation techniques.
    • The study demonstrates that incorporating model insights obtained via auxiliary explanations yields more faithful explanations. It also concludes that more accurate and actionable explanations can be generated by utilizing the correlation and semantic information of features.
    • The thesis emphasizes fulfilling various properties for creating useful local explanations and proposes frameworks to overcome their modeling and computational challenges. Additionally, it investigates debugging techniques based on local explanations that improve the reliability of black-box tabular classifiers by identifying and addressing data deficiencies.
    • This research facilitates a better understanding of opaque ML models, enabling their safe use in critical applications. Faithful and actionable explanations empower users to make informed decisions, while developers can enhance model reliability by identifying and addressing deficiencies. In this way, the work bridges the gap between black-box predictive power and the need for transparency, ensuring responsible use of ML technology for societal benefit.
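    To illustrate the general idea of a model-agnostic local explanation (a minimal sketch of the family of techniques the thesis studies, not the specific methods it develops), the example below perturbs a single input of a black-box model and fits a distance-weighted linear surrogate around it; all function and parameter names are hypothetical.

    ```python
    import numpy as np

    def local_explanation(predict_fn, x, n_samples=500, sigma=0.3, seed=0):
        """Fit a weighted linear surrogate around input x (LIME-style sketch)."""
        rng = np.random.default_rng(seed)
        # sample perturbations in a neighbourhood of x
        X = x + rng.normal(scale=sigma, size=(n_samples, x.size))
        y = predict_fn(X)  # query the black box
        # proximity weights: closer perturbations matter more
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma**2 * x.size))
        # weighted least squares with an intercept column
        A = np.hstack([X, np.ones((n_samples, 1))])
        s = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * s[:, None], y * s, rcond=None)
        return coef[:-1]  # per-feature local importance (intercept dropped)

    # toy black box: feature 0 dominates the prediction
    black_box = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]
    x = np.array([1.0, 1.0])
    imp = local_explanation(black_box, x)
    print(imp)  # feature 0 receives the larger local weight
    ```

    Because the surrogate is fitted only on points near x, the returned coefficients describe the model's behaviour locally; a model-agnostic method like this needs nothing from the black box except its predictions.
    
    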

    Adjudication committee:

    • Professor Barbara Hammer, University of Bielefeld, Germany
    • Dr. Jiaoyan Chen, University of Manchester, UK
    • Professor Geir Ketil Sandve, Department of Informatics, University of Oslo, Norway

    Supervisors:

    • Professor Ingrid Chieh Yu, Department of Informatics, University of Oslo, Norway
    • Research Manager Aida Omerovic, SINTEF, Norway

    Chair of defence:

    Professor Ole Hanseth

    Candidate contact information

    Contact information at the Department: Mozhdeh Sheibani Harat

    Published June 2, 2023 9:43 AM - Last modified June 16, 2023 9:36 AM