Date of Publication: 14th March 2017
Abstract: Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern-classification and machine learning applications. The goal is to project a dataset onto a lower-dimensional space with good class separability in order to avoid overfitting (the "curse of dimensionality") and to reduce computational costs. Ronald A. Fisher formulated the Linear Discriminant in 1936, and it also has practical uses as a classifier. The original Linear Discriminant was described for a two-class problem and was later generalized as "multi-class Linear Discriminant Analysis" or "Multiple Discriminant Analysis" by C. R. Rao in 1948. The general LDA approach is very similar to Principal Component Analysis (PCA; see the previous article for more details), but in addition to finding the component axes that maximize the variance of our data (PCA), we are also interested in the axes that maximize the separation between multiple classes (LDA). So, in a nutshell, the goal of an LDA is often to project a feature space (a dataset of n-dimensional samples) onto a smaller subspace k (where k ≤ n−1) while maintaining the class-discriminatory information. In general, dimensionality reduction not only helps reduce computational costs for a given classification task, but it can also help avoid overfitting by minimizing the error in parameter estimation (the "curse of dimensionality").
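As a concrete illustration of the contrast drawn above, the following is a minimal sketch, assuming scikit-learn and its bundled Iris dataset (neither of which is named in the text), that projects the same 4-dimensional, 3-class data onto two axes with both PCA (variance-maximizing, unsupervised) and LDA (separation-maximizing, supervised):

```python
# Hedged sketch: contrast PCA and LDA projections on the Iris dataset
# (4 features, 3 classes). Both reduce the data to k = 2 dimensions.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# PCA is unsupervised: it ignores y and picks axes of maximal total variance.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA is supervised: it uses y and picks axes of maximal between-class separation.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (150, 2) (150, 2)
```

Note that LDA yields at most c − 1 discriminant axes for c classes, so two components is the ceiling for this three-class example.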