WORKSHOP: Trustworthy Artificial Intelligence for High-Risk Applications
Truong T. Tran, Penn State; Ramazan Aygun, Kennesaw State University
CIS
IEEE Members: Free
Non-members: Free
Length: 01:00:49
ABSTRACT: Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have made them attractive for critical applications such as medical diagnosis, drug discovery, weather forecasting, autonomous robotics, and fraud detection, where models learn from historical data and experience. The growth of big data, with its massive volumes of information, has changed how we manage and analyze data and how we feed it to learning models to obtain accurate results. It is necessary to ensure that those models perform correctly, with a high level of confidence and robustness. While traditional AI and ML focus on maximizing predictive performance, errors made by trained models can be costly or even fatal, especially in high-risk applications. Trustworthy AI approaches are therefore essential for producing reliable outcomes. At the same time, transparency and explainability of complex models can help build trust and confidence in prediction results; however, metrics still need to be developed for measuring the accuracy of an explanation. The workshop’s objective is to promote the need for high confidence, robustness, and explanation accuracy in AI/ML systems and their applications. The workshop consists of presentations followed by discussion and Q&A. Attendees will learn about and discuss the following topics:
– Challenges for AI in high-risk applications
– Characteristics of trustworthy AI and ML systems
– Metrics and methods for evaluating explainable and trustworthy AI (e.g., quantifying the reliability of an explanation; see the sketch after this list) and measuring risk
– Design of AI/ML models for high-risk applications, including examples of trustworthy neural networks and other classification models
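
To make the third topic more concrete, below is a minimal, illustrative sketch (not taken from the workshop materials) of one way to quantify the reliability of an explanation: a deletion-style faithfulness test, in which the features an attribution method ranks as most important are removed first and the model's confidence is tracked. If the explanation is faithful, confidence should drop quickly. The predict_proba interface, the attributions array, and the toy linear model are all hypothetical placeholders chosen for this example.

    import numpy as np

    def deletion_faithfulness(predict_proba, x, attributions, baseline=0.0):
        """Track model confidence as top-attributed features are deleted.

        predict_proba: callable mapping a 1-D feature vector to the
            probability of the predicted class (hypothetical interface).
        x: 1-D numpy array of input features.
        attributions: one importance score per feature (e.g., from an
            attribution method such as SHAP or LIME).
        baseline: value used to "delete" a feature.
        """
        order = np.argsort(-np.abs(attributions))  # most important first
        x_perturbed = x.copy()
        confidences = [predict_proba(x_perturbed)]
        for idx in order:
            x_perturbed[idx] = baseline
            confidences.append(predict_proba(x_perturbed))
        # A faster drop (lower area under this curve) suggests the
        # explanation more faithfully reflects what the model uses.
        return np.array(confidences)

    # Toy usage: a linear "model" whose true importances are its weights,
    # explained via gradient-times-input attributions.
    weights = np.array([2.0, -1.0, 0.5])
    model = lambda v: 1.0 / (1.0 + np.exp(-(v @ weights)))
    x = np.array([1.0, 1.0, 1.0])
    print(deletion_faithfulness(model, x, attributions=weights * x))

A curve like this turns "is the explanation trustworthy?" into a measurable quantity that can be compared across attribution methods, which is the kind of evaluation metric the abstract calls for.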