  • CIS
    Length: 01:30:07
19 Jul 2020

There are various strategies to improve the performance of a machine
learning model, e.g., increasing the depth, width, and/or nonlinearity of the model, or using
ensemble learning to aggregate multiple base/weak learners in parallel or in series. The goal of this tutorial is to describe a novel strategy for this purpose, called patch learning (PL).
PL consists of three steps: 1) train an initial global model using all training data; 2) identify
from the initial global model the patches that contribute the most to the learning error, and
then train a (local) patch model for each such patch; and 3) update the global model using
training data that do not fall into any patch.
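As a rough illustration of these three steps, the sketch below implements them in Python with decision-tree regressors standing in for the fuzzy systems discussed below, and with a simple error-based, interval-shaped patch identification; the model choice, the patch-selection rule, and all names and parameters are assumptions for illustration only, not the actual PL algorithm.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def train_patch_learning(X, y, n_patches=2, patch_width=0.1):
        """Illustrative PL training on 1-D inputs, following the three steps above."""
        # Step 1: train an initial global model on all training data.
        global_model = DecisionTreeRegressor(max_depth=3).fit(X, y)

        # Step 2: identify the input regions ("patches") around the largest errors,
        # then train a local patch model on the data falling inside each patch.
        errors = np.abs(y - global_model.predict(X))
        centers = X[np.argsort(errors)[-n_patches:], 0]
        patches, patch_models = [], []
        for c in centers:
            lo, hi = c - patch_width, c + patch_width
            inside = (X[:, 0] >= lo) & (X[:, 0] <= hi)
            patches.append((lo, hi))
            patch_models.append(DecisionTreeRegressor(max_depth=3).fit(X[inside], y[inside]))

        # Step 3: update (re-train) the global model on the data outside all patches.
        outside = np.ones(len(X), dtype=bool)
        for lo, hi in patches:
            outside &= ~((X[:, 0] >= lo) & (X[:, 0] <= hi))
        global_model = DecisionTreeRegressor(max_depth=3).fit(X[outside], y[outside])

        return global_model, patches, patch_models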
To use a PL model, one first determines whether the input falls into any patch. If yes, then the
corresponding patch model is used to compute the output. Otherwise, the global model is used.
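Continuing the hypothetical sketch above, inference is just a patch-membership check followed by dispatch to the matching patch model, falling back to the updated global model otherwise:

    def predict_patch_learning(x, global_model, patches, patch_models):
        """Illustrative PL inference for a single 1-D input x (e.g., x = [0.3])."""
        for (lo, hi), model in zip(patches, patch_models):
            if lo <= x[0] <= hi:
                # The input falls into this patch: use its patch model.
                return model.predict([x])[0]
        # The input falls outside every patch: use the global model.
        return global_model.predict([x])[0]

The dispatch mirrors the description above: patch models override the global model only inside their own regions of the input space.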
To date, PL can only be implemented using fuzzy systems; how this is accomplished will be
explained. Regression problems on 1D/2D/3D curve fitting, nonlinear system identification,
and chaotic time-series prediction will be used to demonstrate the effectiveness of PL.
PL opens up a promising new line of research in machine learning. Opportunities for future
research will also be discussed.
