Learning robust principal components from L1-norm maximization
Principal component analysis (PCA) is fundamental in many pattern recognition applications. Since conventional L2-norm based PCA (L2-PCA) is sensitive to outliers, much research has been devoted to L1-norm based reconstruction error minimization (L1-PCA-REM). Recently, the variance maximization formulation of PCA with L1-norm (L1-PCA-VM) has been proposed, for which both greedy and non-greedy solutions have been developed. Taking a gradient ascent perspective on the optimization, we show that the L1-PCA-VM formulation is problematic for learning principal components and that only the greedy solution fulfills the original robustness motivation; these findings are verified by experiments on synthetic and real-world datasets.
Keywords:
Principal component analysis (PCA),
Outliers,
L1-norm,
Greedy algorithms,
Non-greedy algorithms
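To make the greedy L1-PCA-VM solution discussed above concrete, the following is a minimal sketch of the standard fixed-point iteration in the spirit of Kwak's PCA-L1 algorithm: each component w maximizes ||Xw||_1 subject to ||w||_2 = 1, and the data are deflated before the next component is extracted. Function and parameter names are illustrative rather than taken from the paper, and the input X is assumed to be centered with samples in rows.

```python
import numpy as np

def l1_pca_greedy(X, n_components, max_iter=100, tol=1e-8):
    """Greedy L1-norm variance maximization (sketch of the PCA-L1 scheme).

    X : (n_samples, n_features) array, assumed centered.
    Returns W : (n_features, n_components), one column per component.
    """
    X = X.copy()
    d = X.shape[1]
    W = np.zeros((d, n_components))
    for k in range(n_components):
        # Initialize with the sample of largest norm (a common heuristic).
        w = X[np.argmax(np.linalg.norm(X, axis=1))].copy()
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            # Fixed-point update: w <- sum_i sign(w^T x_i) x_i, renormalized.
            s = np.sign(X @ w)
            s[s == 0] = 1.0  # avoid zero signs stalling the update
            w_new = X.T @ s
            w_new /= np.linalg.norm(w_new)
            if np.linalg.norm(w_new - w) < tol:
                w = w_new
                break
            w = w_new
        W[:, k] = w
        # Greedy deflation: project out the found direction before the next pass.
        X = X - np.outer(X @ w, w)
    return W
```

The deflation step is what makes the procedure greedy: each component is locally optimal for the residual data rather than jointly optimal with the others, which is precisely the property the paper contrasts against the non-greedy solution.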