Front. Inform. Technol. Electron. Eng.  2010, Vol. 11 Issue (12): 939-947    DOI: 10.1631/jzus.C1000137
A regeneratable dynamic differential evolution algorithm for neural networks with integer weights
Jian Bao, Yu Chen, Jin-shou Yu
School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China; Institute of Software and Intelligent Technology, Hangzhou Dianzi University, Hangzhou 310018, China
Abstract: Neural networks with integer weights are better suited to embedded systems and hardware implementations than those with real weights. However, many learning algorithms proposed for training networks with floating-point weights are inefficient at, or incapable of, training networks with integer weights. In this paper, a novel regeneratable dynamic differential evolution algorithm (RDDE) is presented that trains integer-weight networks efficiently. Compared with the conventional differential evolution (DE) algorithm, RDDE introduces three new strategies: (1) a regeneratable strategy that ensures further evolution when, after several iterations, all individuals have become identical and can no longer evolve, thus providing an escape from local minima; (2) a dynamic strategy that speeds up convergence and simplifies the algorithm by updating the population dynamically; (3) a local greedy strategy that improves local search ability when the population approaches the global optimum. Unlike gradient-based algorithms, RDDE needs no gradient information, whose unavailability has been the main obstacle to training integer-weight networks. The experimental results show that RDDE trains integer-weight networks more efficiently.
Key words: Differential evolution; Integer weights; Neural networks; Greedy; Embedded systems; Function approximation
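
The abstract only sketches the three strategies, so a minimal, self-contained Python/NumPy illustration of how a regeneratable, dynamically updated DE with a greedy local step might train a tiny integer-weight network is given below. The 1-5-1 architecture, the weight range [-32, 32], the DE control parameters (NP, F, CR), the thresholds, and all names are illustrative assumptions, not values or code from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed, not from the paper): approximate sin(x)
# with a 1-5-1 network whose weights and biases are integers.
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
T = np.sin(X)

W_LOW, W_HIGH = -32, 32            # assumed integer weight range
N_HIDDEN = 5
DIM = 3 * N_HIDDEN + 1             # w1 (5) + b1 (5) + w2 (5) + b2 (1)

def unpack(w):
    w1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    w2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    return w1, b1, w2, w[-1]

def mse(w):
    w1, b1, w2, b2 = unpack(w)
    y = np.tanh(X @ w1 + b1) @ w2 + b2   # forward pass, no gradients needed
    return float(np.mean((y - T) ** 2))

def random_pop(n):
    return rng.integers(W_LOW, W_HIGH + 1, size=(n, DIM))

NP, F, CR = 30, 0.5, 0.9           # assumed DE control parameters
pop = random_pop(NP)
fit = np.array([mse(p) for p in pop])

for gen in range(500):
    for i in range(NP):
        # DE/rand/1 mutation, rounded back onto the integer lattice
        # (for brevity the parents are not forced to differ from i)
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = np.clip(np.rint(a + F * (b - c)), W_LOW, W_HIGH).astype(int)
        # binomial crossover with one guaranteed mutant gene
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True
        trial = np.where(mask, mutant, pop[i])
        f = mse(trial)
        # "dynamic" steady-state update: a better trial replaces its target
        # immediately, so it can serve as a parent in the same generation
        if f <= fit[i]:
            pop[i], fit[i] = trial, f

    best = int(np.argmin(fit))
    # "regeneratable" strategy: if every individual has collapsed onto the
    # best point, re-seed the rest of the population to escape the local minimum
    if np.all(pop == pop[best]):
        keep = pop[best].copy()
        pop = random_pop(NP)
        pop[0] = keep
        fit = np.array([mse(p) for p in pop])
        best = 0

    # local greedy strategy: near the optimum, try +/-1 steps on each weight
    # of the best individual and keep any improvement
    if fit[best] < 1e-2:
        for d in range(DIM):
            for step in (-1, 1):
                cand = pop[best].copy()
                cand[d] = np.clip(cand[d] + step, W_LOW, W_HIGH)
                fc = mse(cand)
                if fc < fit[best]:
                    pop[best], fit[best] = cand, fc

print("best integer-weight MSE:", fit[int(np.argmin(fit))])

The key point is that selection compares only fitness values, so no gradient of the integer-constrained error surface is ever needed; rounding the mutant vector keeps every candidate on the integer lattice.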
Received: 2010-05-05    Published: 2010-12-09
CLC:  TP183  
Cite this article:

Jian Bao, Yu Chen, Jin-shou Yu. A regeneratable dynamic differential evolution algorithm for neural networks with integer weights. Front. Inform. Technol. Electron. Eng., 2010, 11(12): 939-947.

Link to this article:

http://www.zjujournals.com/xueshu/fitee/CN/10.1631/jzus.C1000137        http://www.zjujournals.com/xueshu/fitee/CN/Y2010/V11/I12/939
