Front. Inform. Technol. Electron. Eng.  2012, Vol. 13 Issue (12): 901-908    DOI: 10.1631/jzus.C1200180
    
Learning robust principal components from L1-norm maximization
Ding-cheng Feng, Feng Chen, Wen-li Xu
Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, China; Department of Automation, Tsinghua University, Beijing 100084, China

Abstract  Principal component analysis (PCA) is fundamental to many pattern recognition applications. Because conventional L2-norm based PCA (L2-PCA) is sensitive to outliers, much research has focused on minimizing the reconstruction error with the L1-norm (L1-PCA-REM). Recently, a variance maximization formulation of PCA with the L1-norm (L1-PCA-VM) has been proposed, for which new greedy and non-greedy solutions have been developed. Taking a gradient ascent perspective on the optimization, we show that the L1-PCA-VM formulation is problematic for learning principal components and that only a greedy solution can fulfill the robustness motivation; these findings are verified by experiments on synthetic and real-world datasets.
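The greedy L1-norm variance maximization that the abstract refers to is commonly solved by a fixed-point polarity iteration (the PCA-L1 style algorithm): maximize sum_i |w^T x_i| over unit vectors w by alternately fixing the sign of each projection and re-aligning w. The sketch below is an illustrative, assumed implementation of that greedy single-component step, not the authors' exact code; the function name and parameters are hypothetical.

```python
import numpy as np

def l1_pca_greedy(X, tol=1e-8, max_iter=1000, seed=0):
    """Greedy L1-norm variance maximization for one principal component.

    Sketch of the fixed-point iteration: maximize sum_i |w^T x_i|
    subject to ||w|| = 1, for X of shape (n_samples, n_features),
    assumed to be centered. Function/parameter names are illustrative.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        p = np.sign(X @ w)       # polarity of each sample's projection
        p[p == 0] = 1            # break ties to avoid a zero polarity
        w_new = X.T @ p          # re-align w with the polarized data sum
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

# Usage: data with one gross outlier; the L1 direction is pulled
# toward the outlier far less than the L2 (eigenvector) direction.
X = np.array([[1.0, 0.1], [2.0, -0.1], [-1.5, 0.05], [0.5, 10.0]])
X = X - X.mean(axis=0)
w = l1_pca_greedy(X)
```

Each iteration keeps the objective sum_i |w^T x_i| non-decreasing, which is why the greedy scheme converges to a local maximum; subsequent components are typically found by deflating X and repeating.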

Key words: Principal component analysis (PCA); Outliers; L1-norm; Greedy algorithms; Non-greedy algorithms
Received: 11 June 2012      Published: 09 December 2012
CLC:  TP391.4  
Cite this article:

Ding-cheng Feng, Feng Chen, Wen-li Xu. Learning robust principal components from L1-norm maximization. Front. Inform. Technol. Electron. Eng., 2012, 13(12): 901-908.

URL:

http://www.zjujournals.com/xueshu/fitee/10.1631/jzus.C1200180     OR     http://www.zjujournals.com/xueshu/fitee/Y2012/V13/I12/901

