Tutorial: Randomization Based Deep and Shallow Learning Methods for Classification and Forecasting
P. N. Suganthan
CIS
IEEE Members: Free
Non-members: Free
Length: 01:40:39
This tutorial will first introduce the main randomization-based learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks, and kernel ridge regression. A popular instantiation of the feedforward type, the random vector functional link (RVFL) neural network, originated in the early 1990s. Other feedforward methods include random weight neural networks (RWNN) and extreme learning machines (ELM). Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another paradigm is based on the kernel trick, such as kernel ridge regression, which incorporates randomization to scale to large training data. The tutorial will also consider computational complexity as the scale of the classification/forecasting problems increases. Another randomization-based paradigm is the random forest, which exhibits highly competitive performance. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
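To illustrate the closed-form, randomization-based feedforward paradigm mentioned above, the following minimal Python sketch trains an RVFL-style classifier: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are computed in closed form by ridge regression over the concatenation of the raw inputs and the hidden features. The function names, activation, and parameter values are assumptions made for illustration and are not taken from the tutorial itself.

    import numpy as np

    def rvfl_train(X, y, n_hidden=100, reg=1e-3, rng=None):
        # Illustrative RVFL-style training sketch (not the tutorial's code).
        rng = np.random.default_rng(rng)
        n, d = X.shape
        classes = np.unique(y)
        # One-hot target matrix for the ridge-regression solution.
        Y = (y[:, None] == classes[None, :]).astype(float)
        # Random, untrained hidden layer.
        W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
        H = np.tanh(X @ W + b)
        # Concatenate direct links (raw inputs) with hidden features.
        D = np.hstack([X, H])
        # Closed-form ridge regression: beta = (D'D + reg*I)^-1 D'Y.
        beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
        return {"W": W, "b": b, "beta": beta, "classes": classes}

    def rvfl_predict(model, X):
        H = np.tanh(X @ model["W"] + model["b"])
        D = np.hstack([X, H])
        scores = D @ model["beta"]
        return model["classes"][np.argmax(scores, axis=1)]

    # Usage on synthetic data (assumed, for illustration only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
    model = rvfl_train(X, y, n_hidden=50, reg=1e-2, rng=0)
    print((rvfl_predict(model, X) == y).mean())

The direct input-to-output links (the raw X columns kept in D) are what distinguish RVFL from a plain random-weight feedforward network; dropping them would give an ELM/RWNN-style variant with the same closed-form output-weight solution.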