Front. Inform. Technol. Electron. Eng.  2012, Vol. 13 Issue (12): 901-908    DOI: 10.1631/jzus.C1200180
Learning robust principal components from L1-norm maximization
Ding-cheng Feng, Feng Chen, Wen-li Xu
Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, China; Department of Automation, Tsinghua University, Beijing 100084, China
Abstract: Principal component analysis (PCA) is fundamental in many pattern recognition applications. Because conventional L2-norm based PCA (L2-PCA) is sensitive to outliers, much research has focused on L1-norm based reconstruction error minimization (L1-PCA-REM). Recently, the variance maximization formulation of PCA with the L1-norm (L1-PCA-VM) has been proposed, along with new greedy and non-greedy solutions. Taking a gradient ascent perspective on the optimization, we show that the L1-PCA-VM formulation is problematic for learning principal components and that only a greedy solution achieves the intended robustness. These findings are verified by experiments on synthetic and real-world datasets.
Key words: Principal component analysis (PCA)    Outliers    L1-norm    Greedy algorithms    Non-greedy algorithms
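For readers unfamiliar with the greedy L1-PCA-VM approach the abstract refers to, the following is a minimal NumPy sketch of a greedy, sign-flipping variance-maximization iteration in the style of Kwak's PCA-L1, not the authors' exact algorithm: each component maximizes the L1 dispersion ||X w||_1, and subsequent components are found after deflating the data. The function name and initialization choice are illustrative assumptions.

```python
import numpy as np

def pca_l1_greedy(X, n_components=1, max_iter=100):
    """Greedy L1-norm variance-maximization PCA (illustrative sketch).

    X: (n_samples, n_features) array, assumed centered.
    Returns W: (n_features, n_components) with unit-norm columns.
    """
    X = np.asarray(X, dtype=float)
    components = []
    R = X.copy()  # residual data, deflated after each component
    for _ in range(n_components):
        # Initialize with the sample of largest norm (a common heuristic).
        w = R[np.argmax(np.linalg.norm(R, axis=1))].copy()
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            # Sign-flipping step: maximizes ||R w||_1 for fixed signs.
            s = np.sign(R @ w)
            s[s == 0] = 1.0
            w_new = R.T @ s
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):
                break
            w = w_new
        components.append(w)
        # Deflate: remove the found direction before the next component.
        R = R - np.outer(R @ w, w)
    return np.column_stack(components)
```

The sign-flipping update is the greedy solution's core: for fixed signs s_i = sign(w^T x_i), the objective is linear in w, so the maximizer on the unit sphere is the normalized signed sum of the samples, and alternating the two steps increases ||X w||_1 monotonically.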
Received: 2012-06-11    Published: 2012-12-09
CLC:  TP391.4  
Cite this article:

Ding-cheng Feng, Feng Chen, Wen-li Xu. Learning robust principal components from L1-norm maximization. Front. Inform. Technol. Electron. Eng., 2012, 13(12): 901-908.

Link to this article:

http://www.zjujournals.com/xueshu/fitee/CN/10.1631/jzus.C1200180        http://www.zjujournals.com/xueshu/fitee/CN/Y2012/V13/I12/901