    Length: 00:45:01
20 Jul 2020

All Artificial Intelligence (AI) systems make errors from time to time and will continue to do so. These errors must be detected and corrected immediately and locally in the networks of collaborating systems. Real-time re-training is not always viable because of the resources it requires, and it can introduce new mistakes and damage existing skills. One-shot correctors are needed instead: external systems that adjust the decisions while the main legacy AI system itself remains unchanged. Ideal correctors should be simple, should not damage the skills of the legacy system where it already works correctly, should allow fast non-iterative learning, and should allow new mistakes to be corrected without destroying previous fixes. If the essential dimensionality of the data is high enough, the correction problem can be solved by surprisingly simple methods, even when the data sets are exponentially large with respect to the dimensionality. This phenomenon is a manifestation of the blessing of dimensionality. The mathematical foundations of these methods are given by stochastic separation theorems, which belong to measure concentration theory.
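As a rough illustration of the idea (a minimal sketch, not the construction presented in the talk), the snippet below builds a one-shot linear corrector: a single erroneously handled feature vector is separated from a large cloud of correctly processed vectors by a simple linear functional, which succeeds in high dimension because of the concentration effects mentioned above. The data model (standard Gaussian features), the dimensions, and names such as corrector_fires are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: internal feature vectors of a legacy AI system.
# In high dimension d, a single erroneous sample can, with high probability,
# be separated from a very large set of correctly processed samples by a
# simple linear functional (the stochastic-separation effect).
d, n = 400, 100_000                     # dimension, number of "correct" samples
correct = rng.standard_normal((n, d))   # stand-in for correctly handled inputs
x_err = rng.standard_normal(d)          # the single input the system got wrong

# One-shot corrector: linear functional h(z) = <z, x_err> - theta.
# theta is chosen so that the erroneous point lies on the positive side and
# (almost) all correct points lie on the negative side.
scores = correct @ x_err
theta = 0.5 * (scores.max() + x_err @ x_err)

def corrector_fires(z: np.ndarray) -> bool:
    """Return True if the corrector should override the legacy decision for z."""
    return float(z @ x_err) > theta

# The corrector fires on the known error ...
print(corrector_fires(x_err))                      # True if separation succeeded
# ... and leaves the legacy system's correct work essentially untouched.
false_alarms = np.mean(scores > theta)
print(f"fraction of correct samples affected: {false_alarms:.4f}")
```

With these (illustrative) numbers the separation is essentially always achieved, and each new mistake can be handled by adding another such functional without retraining the legacy system.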
Designing future AI cannot be limited to the development of individual AI systems; it will naturally extend to the engineering of ecosystems and social networks of AI. Correctors are at once simple building blocks of these AI ecosystems and social networks and a means of enabling cooperation, communication, and mutual learning among AI systems, as well as the division of labour between them.
The talk presents the theory and applications of AI error correctors.