  • CIS
    Length: 00:13:25
06 Jun 2023

Edoarde Ortega, Duke University

ABSTRACT: The acceleration of Artificial Intelligence (AI) has produced new digital tools with wide-ranging impact across society. However, AI digital tools (such as ChatGPT, Midjourney, and DALL-E 2) have also raised legal concerns that governing bodies have worked to address by issuing overarching regulatory frameworks for data governance (e.g., the General Data Protection Regulation, the European AI Act, the Blueprint for an AI Bill of Rights, and the NIST Risk Management Framework). We recognize that these AI digital tools are a vital part of future technological development, and we acknowledge that this complex issue requires a multi-disciplinary perspective. We survey the current landscape of published AI-specific regulatory frameworks and established engineering design process methods, then adopt a trans-disciplinary framework to address AI ethics and compliance through the design of these tools. In this work, we address AI ethics and compliance through a product lifecycle approach built on three principles: Human-Centered Design for Risk Assessment, Functional Safety and Risk Management Standardization, and Continuous Governance throughout the Product Lifecycle. Establishing risk management across the AI product lifecycle ensures accountability for AI product use cases. In addition, by drawing on prior Functional Safety practice, we can build safety mechanisms into the product lifecycle of AI digital tools. Finally, establishing in-field testing for continuous governance enables flexibility for new compliance standards and greater transparency. We establish this governance framework to support new compliance strategies for these emerging issues with AI digital tools.