Front. Inform. Technol. Electron. Eng.  2014, Vol. 15 Issue (2): 119-125    DOI: 10.1631/jzus.C1300197
    
A pruning algorithm with L1/2 regularizer for extreme learning machine
Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang
School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China; College of Science, Huazhong Agricultural University, Wuhan 430070, China

Abstract  Compared with traditional learning methods such as the back propagation (BP) method, the extreme learning machine (ELM) offers much faster learning and requires less human intervention, and has therefore been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune its network structure. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but provides better performance than both the original network and the network pruned by L2 regularization.
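The approach described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' exact algorithm: the output weights of a randomly initialized ELM are trained by gradient descent on a squared error plus an L1/2 penalty, with a variable learning rate that bounds the step size, and hidden nodes whose output weights are driven near zero are then pruned. The data, network size, penalty weight, learning-rate rule, and pruning threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy regression data (not from the paper)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=200)

# ELM hidden layer: input weights and biases are random and stay fixed
n_hidden = 30
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix

lam, eps = 1e-3, 1e-8  # penalty weight; small guard against division by zero

def objective(beta):
    """Squared error plus the L1/2 penalty sum_j |beta_j|^{1/2}."""
    r = H @ beta - y
    return r @ r + lam * np.sum(np.sqrt(np.abs(beta)))

beta = 0.1 * rng.normal(size=n_hidden)  # output weights to be trained
loss_start = objective(beta)
for _ in range(2000):
    # Gradient: the L1/2 term contributes sign(beta_j) / (2 sqrt(|beta_j|)),
    # which blows up as beta_j -> 0, so the step size must be controlled.
    grad = 2.0 * H.T @ (H @ beta - y) \
        + 0.5 * lam * np.sign(beta) / (np.sqrt(np.abs(beta)) + eps)
    # One plausible variable learning rate: shrink the step when the
    # gradient is large, so the learning increment stays bounded.
    eta = 0.01 / (1.0 + np.linalg.norm(grad))
    beta -= eta * grad
loss_end = objective(beta)

# Prune hidden nodes whose output weight was driven near zero
keep = np.abs(beta) > 1e-3
print(f"loss {loss_start:.2f} -> {loss_end:.2f}, "
      f"hidden nodes kept: {keep.sum()} of {n_hidden}")
```

Because only the output weights are trained while the hidden layer stays random, pruning a node amounts to dropping the corresponding column of H; the L1/2 penalty encourages exactly this kind of column-wise sparsity.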

Key words: Extreme learning machine (ELM); L1/2 regularizer; Network pruning
Received: 22 July 2013      Published: 29 January 2014
CLC:  TP312  
Cite this article:

Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang. A pruning algorithm with L1/2 regularizer for extreme learning machine. Front. Inform. Technol. Electron. Eng., 2014, 15(2): 119-125.

URL:

http://www.zjujournals.com/xueshu/fitee/10.1631/jzus.C1300197     OR     http://www.zjujournals.com/xueshu/fitee/Y2014/V15/I2/119


A network pruning algorithm for the extreme learning machine using L1/2 regularization

Research background: 1. Neural networks are widely applied, but slow convergence and low accuracy have hindered their development. Compared with traditional neural networks, the extreme learning machine overcomes these drawbacks: it learns faster and requires less human intervention, advantages that have led to its wide use. 2. Compared with L1 and L2 regularization, L1/2 regularization yields sparser solutions; compared with L0 regularization, it is easier to solve.
Innovation: The L1/2 regularization method is combined with the extreme learning machine, exploiting the strong sparsity of L1/2 regularization to prune the network structure of the extreme learning machine.
Method highlights: The minimized objective function contains an L1/2 norm, whose derivative becomes large when the weights become small. To prevent too large a learning increment, a variable learning rate is proposed.
Key conclusions: Numerical experiments show that, compared with the original extreme learning machine algorithm and the extreme learning machine with L2 regularization, the extreme learning machine with L1/2 regularization not only has fewer hidden nodes but also generalizes better.

Key words: Extreme learning machine, L1/2 regularization, Network pruning
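The role of the variable learning rate can be made explicit with a standard calculation (not taken from the paper): for a single weight $w$, the L1/2 penalty term $|w|^{1/2}$ has derivative

```latex
\frac{\mathrm{d}}{\mathrm{d}w}\,|w|^{1/2}
  = \frac{\operatorname{sign}(w)}{2\,|w|^{1/2}}
  \;\longrightarrow\; \infty
  \quad \text{as } w \to 0 .
```

With a fixed learning rate, this unbounded gradient would produce arbitrarily large weight increments for small weights; scaling the learning rate down when the gradient is large keeps each increment bounded.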