Tutorial - Towards Better Explainable AI Through Genetic Programming

Yi Mei, Victoria University of Wellington, NZ; Andrew Lensen, Victoria University of Wellington, NZ

  • CIS
    Members: Free
    IEEE Members: Free
    Non-members: Free
    Length: 02:09:03
18 Jul 2022

ABSTRACT: Although machine learning has achieved great success in many real-world applications, it is often criticised for behaving like a black box: it is difficult, if not impossible, to understand how a machine learning system makes its decisions or predictions. This can have serious consequences, such as accidents involving Tesla's self-driving cars and biases in automatic bank loan approval systems.

To address this issue, Explainable AI (XAI) has become a very active research topic in AI due to urgent needs in domains such as finance, security, medicine, gaming, and legislation. An increasing number of XAI studies in recent years have improved current machine learning systems from different aspects.

In evolutionary computation, Genetic Programming (GP) has been successfully used in various machine learning tasks, including classification, symbolic regression, clustering, feature construction, and automatic heuristic design. As a symbolic evolutionary learning approach, GP has great intrinsic potential to contribute to XAI, as a GP model tends to be interpretable.

This tutorial gives a brief introduction to common approaches in XAI, such as attention maps, post-hoc explanation (LIME, SHAP), and visualisation, and then introduces how to achieve better model interpretability through GP, including multi-objective GP, simplification in GP, different representations in GP, and post-hoc explanation using GP. Finally, we discuss current trends, challenges, and potential future research directions.
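To illustrate why a GP model "tends to be interpretable", the sketch below shows symbolic regression over expression trees: the learned model is itself a readable formula rather than an opaque set of weights. This is a minimal illustration, not the tutorial's code, and for brevity it uses a seeded random search over trees as a stand-in for a full GP loop with selection, crossover, and mutation; the operator set, target function, and helper names are all assumptions.

```python
# Toy symbolic-regression sketch (illustration only): search expression
# trees for a readable formula approximating y = x*x + x.
# NOTE: this uses plain random search in place of a full GP evolutionary loop.
import operator
import random

random.seed(0)

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0]  # the input variable and a constant leaf

def random_tree(depth=3):
    # Grow a random expression tree: internal nodes are operators,
    # leaves are the variable x or a constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    # Recursively evaluate the tree at input value x.
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def to_str(tree):
    # Render the tree as an ordinary infix formula -- the point of GP
    # for XAI: the model itself is a human-readable expression.
    if not isinstance(tree, tuple):
        return str(tree)
    op, left, right = tree
    return f"({to_str(left)} {op} {to_str(right)})"

def fitness(tree, xs):
    # Sum of squared errors against the (assumed) target y = x*x + x.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

# Keep the best of many random trees (stand-in for evolution).
xs = [i / 10 for i in range(-10, 11)]
best = min((random_tree() for _ in range(2000)), key=lambda t: fitness(t, xs))
print(to_str(best), fitness(best, xs))
```

The printed result is an explicit algebraic expression, so a domain expert can inspect it directly; this is the intrinsic interpretability that post-hoc tools such as LIME and SHAP must instead approximate for black-box models.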