Tutorial: Paving the Way From Interpretable Fuzzy Systems to eXplainable Artificial Intelligence Systems
Jose Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena
-
CIS
IEEE Members: Free
Non-members: Free
Length: 01:22:31
In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate, and pre-process data; then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from the data. Indeed, AI has been identified as the "most strategic technology of the 21st century" and is already part of our everyday life.

The European Commission states that "EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency". It emphasizes the importance of eXplainable AI (XAI for short) in order to develop an AI coherent with European values: "to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems". Moreover, as remarked in the latest challenge stated by the US Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans and provide comprehensible explanations of automatically made decisions.

XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human could understand in a given context and explicitly explaining such decisions. This way, it is possible to verify whether automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.

Even though XAI systems are likely to make their impact felt in the near future, there is a lack of experts able to develop the fundamentals of XAI, i.e., experts ready to develop and maintain the new generation of AI systems that are expected to surround us soon. This is mainly due to the inherently multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous research fields. Moreover, it is hard to find XAI experts with a holistic view as well as a wide and solid background in all the related topics.

Consequently, the main goal of this tutorial is to provide attendees with a holistic view of the fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and how to enhance human-machine interaction. The tutorial will cover the main theoretical concepts of the topic, as well as examples and real applications of XAI techniques. In addition, ethical and legal aspects concerning XAI will also be considered.
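To give a concrete flavour of the kind of fuzzy-grounded, self-explaining models the tutorial refers to, the following minimal Python sketch shows a tiny interpretable fuzzy rule base that returns a decision together with a human-readable explanation. It is not part of the tutorial materials; the variable, the linguistic terms, and the thresholds are illustrative assumptions only.

# Minimal, hypothetical sketch of an interpretable fuzzy rule-based decision maker
# that produces a textual explanation alongside its decision. Illustrative only;
# all names and thresholds are assumptions chosen for readability.

def triangular(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak of 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for a single input variable "temperature" (degrees Celsius).
TERMS = {
    "low": lambda t: triangular(t, -10, 0, 15),
    "medium": lambda t: triangular(t, 10, 20, 30),
    "high": lambda t: triangular(t, 25, 35, 50),
}

# Human-readable rules: IF temperature IS <term> THEN <decision>.
RULES = [
    ("low", "turn heating on"),
    ("medium", "keep the system idle"),
    ("high", "turn cooling on"),
]

def decide(temperature):
    # Fire each rule with the membership degree of its antecedent,
    # then pick the rule with the highest firing strength.
    fired = [(term, decision, TERMS[term](temperature)) for term, decision in RULES]
    term, decision, degree = max(fired, key=lambda r: r[2])
    explanation = (
        f"Decision: {decision}. "
        f"Because temperature = {temperature} C is '{term}' to degree {degree:.2f}."
    )
    return decision, explanation

if __name__ == "__main__":
    print(decide(28)[1])  # e.g. "Decision: turn cooling on. Because temperature = 28 C is 'high' to degree 0.30."

Because the rules are written with linguistic terms rather than opaque numeric weights, the explanation can be read directly off the fired rule, which is the basic idea behind using interpretable fuzzy systems as a route to XAI.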