
PANEL: Adversarial ML: Lessons Learned, Challenges & Opportunities

Catherine Huang, Google; Alexey Kurakin, Google; David Wagner, UC Berkeley; Aditi Raghunathan, CMU; Dipankar Dasgupta, University of Memphis

  • CIS Members: Free
  • IEEE Members: Free
  • Non-members: Free
  • Length: 00:57:51
06 Jun 2023

ABSTRACT: As artificial intelligence (AI) continues to advance in serving a diverse range of applications, including computer vision, speech recognition, healthcare, and cybersecurity, adversarial machine learning (AdvML) is no longer just a research topic; it has become a growing concern in both the defense and commercial communities. Many real-world ML applications did not take adversarial attacks into account during system design, leaving their models extremely fragile in adversarial settings. Recent research has investigated the vulnerability of ML algorithms and a variety of defense mechanisms. The questions surrounding this space are more pressing than ever: Can we make AI/ML more secure? How can we make a system robust to novel or potentially adversarial inputs? Can we use AdvML to help solve some of our industrial ML challenges? How can ML systems detect and adapt to changes in their environment over time? How can we improve the maintainability and interpretability of deployed models? These questions are essential to consider when designing systems for high-stakes applications. In this panel, we invite the IEEE community to join our experts in AdvML to discuss the lessons learned, challenges, and opportunities in building more reliable and practical ML models by leveraging ML security and adversarial machine learning.