Front. Inform. Technol. Electron. Eng.  2012, Vol. 13 Issue (8): 585-592    DOI: 10.1631/jzus.C1200008
Negative effects of sufficiently small initial weights on back-propagation neural networks
Yan Liu, Jie Yang, Long Li, Wei Wu
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China; School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China; Department of Mathematics and Computational Science, Hengyang Normal University, Hengyang 421002, China

Abstract  In the training of feedforward neural networks, it is usually suggested that the initial weights should be small in magnitude in order to prevent premature saturation. The aim of this paper is to point out the other side of the story: in some cases, the gradient of the error function is zero not only for infinitely large weights but also for zero weights. Slow convergence at the beginning of the training procedure is often the result of sufficiently small initial weights. Therefore, we suggest that, in these cases, the initial values of the weights should be neither too large nor too small. For instance, a typical range of choices of the initial weights might be something like (−0.4, −0.1)∪(0.1, 0.4), rather than (−0.1, 0.1) as suggested by the usual strategy. Our theory that medium-sized weights should be used has also been extended to a few commonly used transfer functions and error functions. Numerical experiments are carried out to support our theoretical findings.
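
The following is a minimal NumPy sketch, not the authors' implementation, illustrating the two points of the abstract on an assumed one-hidden-layer tanh network with a linear output and no biases: a finite-difference check showing that the gradient of the squared error vanishes at zero weights, and a helper that draws initial weights from the suggested medium-sized range (−0.4, −0.1)∪(0.1, 0.4). All function names, the 2-3-1 architecture, and the sample input are hypothetical choices for illustration only.

import numpy as np

def forward(x, W, v):
    # One-hidden-layer network: tanh hidden units, linear output, no biases (assumed).
    return v @ np.tanh(W @ x)

def squared_error(x, t, W, v):
    return 0.5 * (forward(x, W, v) - t) ** 2

def numeric_grad(f, params, eps=1e-6):
    # Central-difference gradient of f with respect to a flat parameter vector.
    g = np.zeros_like(params)
    for i in range(params.size):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        g[i] = (f(p_plus) - f(p_minus)) / (2 * eps)
    return g

# Hypothetical 2-3-1 network and training sample.
x, t = np.array([0.5, -1.0]), 1.0
n_in, n_hidden = 2, 3

def loss_flat(p):
    W = p[:n_hidden * n_in].reshape(n_hidden, n_in)
    v = p[n_hidden * n_in:]
    return squared_error(x, t, W, v)

zero_params = np.zeros(n_hidden * n_in + n_hidden)
print(numeric_grad(loss_flat, zero_params))   # approximately zero: the origin is a stationary point

def init_weights(shape, low=0.1, high=0.4, rng=None):
    # Sample weights uniformly from (-high, -low) U (low, high), i.e. medium magnitude with random sign.
    rng = np.random.default_rng() if rng is None else rng
    magnitude = rng.uniform(low, high, size=shape)
    sign = rng.choice([-1.0, 1.0], size=shape)
    return sign * magnitude

rng = np.random.default_rng(0)
W0 = init_weights((n_hidden, n_in), rng=rng)
v0 = init_weights(n_hidden, rng=rng)
print(numeric_grad(loss_flat, np.concatenate([W0.ravel(), v0])))  # generally nonzero

With all weights at zero, every hidden output tanh(0) is zero and every output weight is zero, so both partial derivatives vanish and gradient descent makes no progress; starting from the medium-sized range avoids this stationary point without pushing the units into saturation.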

Key words: Neural networks; Back-propagation; Gradient learning method; Convergence
Received: 11 January 2012      Published: 02 August 2012
CLC:  TP18  
Cite this article:

Yan Liu, Jie Yang, Long Li, Wei Wu. Negative effects of sufficiently small initial weights on back-propagation neural networks. Front. Inform. Technol. Electron. Eng., 2012, 13(8): 585-592.

URL:

http://www.zjujournals.com/xueshu/fitee/10.1631/jzus.C1200008     OR     http://www.zjujournals.com/xueshu/fitee/Y2012/V13/I8/585

