J4 2009, Vol. 43 Issue (09): 1568-1573    DOI: 10.3785/j.issn.1008-973X.2009.09.004
Automation Technology, Computer Technology
A selective neural network ensemble method based on the ant colony algorithm
ZHAO Sheng-ying, GAO Guang-chun
(School of Information and Electrical Engineering, Zhejiang University City College, Hangzhou 310015, China)
Ant colony optimization based approach for selective neural network ensemble
ZHAO Sheng-ying, GAO Guang-chun
(School of Information and Electrical Engineering, Zhejiang University City College, Hangzhou 310015, China)
Full text: PDF (563 KB)   HTML
Abstract:

To build a neural network ensemble from individual networks with high accuracy and large diversity, and thereby improve ensemble performance, a new method for constructing selective neural network ensembles is proposed. The algorithm uses ant colony optimization to select a subset of independently trained neural networks to form the ensemble. During the optimization process, the probability that an individual network is selected is determined by the pheromone and a heuristic factor: the pheromone reflects the accuracy of the current individual network, while the heuristic factor reflects the diversity between individual networks, which effectively improves the search efficiency and prediction accuracy of the system. Experimental results show that the ensembles constructed by this algorithm use fewer individual networks, while their prediction errors are lower than those of the traditional Bagging and Boosting algorithms.

Abstract:

A new approach was presented to improve the performance of selective neural network ensembles by choosing appropriate individuals, both accurate and diverse, from the candidate neural networks. An ant colony optimization algorithm was employed in which the selection probability depends on the pheromone and heuristic information. The pheromone is set according to the accuracy of the individuals, while the heuristic information indicates their diversity. Experiments on typical data sets show that this approach yields ensembles of smaller size while achieving much better performance, compared with the traditional Bagging and Boosting algorithms.
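The selection mechanism the abstract describes, where an ant picks each individual network with a probability driven by a pheromone term (accuracy) and a heuristic term (diversity), can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the parameter names (`alpha`, `beta`, `rho`, `n_ants`) and the mean-accuracy scoring proxy are choices made here for the sketch.

```python
import random

def aco_select(accuracies, diversity, subset_size=3,
               n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.1):
    """Sketch of ACO-based subset selection: pheromone (tau) is
    reinforced by accuracy, the heuristic factor rewards diversity."""
    n = len(accuracies)
    tau = [1.0] * n                       # pheromone, initialised uniformly
    best_subset, best_score = None, -1.0
    for _ in range(n_iters):
        for _ in range(n_ants):
            chosen = []
            while len(chosen) < subset_size:
                candidates = [i for i in range(n) if i not in chosen]
                # selection probability ~ tau^alpha * eta^beta
                weights = [tau[i] ** alpha * diversity[i] ** beta
                           for i in candidates]
                chosen.append(random.choices(candidates, weights=weights)[0])
            # crude quality proxy: mean accuracy of the chosen members
            score = sum(accuracies[i] for i in chosen) / subset_size
            if score > best_score:
                best_subset, best_score = sorted(chosen), score
        tau = [(1.0 - rho) * t for t in tau]      # evaporation
        for i in best_subset:                     # deposit on the best subset
            tau[i] += best_score
    return best_subset
```

In the paper the ensemble's prediction error on a validation set, not the mean member accuracy, would be the natural score for each ant's subset; the proxy above merely keeps the sketch self-contained.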

CLC number: TP 183
Foundation item:

Supported by the Natural Science Foundation of Zhejiang Province (Y107435) and the Science and Technology Innovation Program of the Key Laboratory of Hangzhou Municipal Universities (20080431T08).

About the author: ZHAO Sheng-ying (1969-), female, born in Shaoxing, Zhejiang; senior engineer, engaged in research on information communication and computer neural networks.

Cite this article:

ZHAO Sheng-ying, GAO Guang-chun. A selective neural network ensemble method based on the ant colony algorithm[J]. J4, 2009, 43(09): 1568-1573.

ZHAO Sheng-ying, GAO Guang-chun. Ant colony optimization based approach for selective neural network ensemble. J4, 2009, 43(09): 1568-1573.

Link to this article:

http://www.zjujournals.com/eng/CN/10.3785/j.issn.1008973X.2009.09.004        http://www.zjujournals.com/eng/CN/Y2009/V43/I09/1568

[1] HANSEN L K, SALAMON P. Neural network ensembles[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001.
[2] SCHAPIRE R E. The strength of weak learnability[J]. Machine Learning, 1990, 5(2): 197-227.
[3] FREUND Y. Boosting a weak learning algorithm by majority[J]. Information and Computation, 1995, 121(2): 256-285.
[4] BREIMAN L. Bagging predictors[J]. Machine Learning, 1996, 24(2): 123-140.
[5] HO T K. The random subspace method for constructing decision forests[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(8): 832-844.
[6] WANG Yaonan, ZHANG Dongbo, HUANG Huixian. Neural network ensemble based on rough sets reduction and selective strategy[C]∥Proceedings of the 7th World Congress on Intelligent Control and Automation. Chongqing: IEEE, 2008: 2033-2038.
[7] LIN Jian, ZHU Bangzhu. Neural network ensemble based on feature selection[C]∥2007 IEEE International Conference on Control and Automation. Guangzhou: IEEE, 2007: 1844-1847.
[8] ISLAM M M, YAO X, MURASE K. A constructive algorithm for training cooperative neural network ensembles[J]. IEEE Transactions on Neural Networks, 2003, 14(4): 820-834.
[9] LIU Yong, YAO Xin. A cooperative ensemble learning system[C]∥Proceedings of the 1998 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). Anchorage, Alaska: IEEE, 1998: 2202-2207.
[10] LIU Yong, YAO Xin. Negatively correlated neural networks for classification[C]∥Proceedings of the 3rd International Symposium on Artificial Life and Robotics (AROB III'98). Beppu, Japan: [s. n.], 1998: 736-739.
[11] ZHOU Zhihua, WU Jianxin, TANG Wei. Ensembling neural networks: many could be better than all[J]. Artificial Intelligence, 2002, 137: 239-263.
[12] LAZAREVIC A, OBRADOVIC Z. Effective pruning of neural network classifier ensembles[C]∥Proceedings of the International Joint Conference on Neural Networks. Washington DC: IEEE, 2001: 796-801.
[13] FU Qiang, HU Shangxu, ZHAO Shengying. Clustering-based selective neural network ensembles[J]. Journal of Zhejiang University: Science, 2005, 6A(5): 387-392.
[14] GAN Zhigang, XIAO Nanfeng. A new ensemble learning algorithm based on improved K-means for training neural network ensembles[C]∥Second International Symposium on Intelligent Information Technology and Security Informatics. Moscow: IEEE, 2009: 8-11.
[15] YAO Xin, LIU Yong. Making use of population information in evolutionary artificial neural networks[J]. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 1998, 28(3): 417-425.
[16] LIU Yong, YAO Xin, HIGUCHI T. Evolutionary ensembles with negative correlation learning[J]. IEEE Transactions on Evolutionary Computation, 2000, 4(4): 380-387.
[17] YATES W B, PARTRIDGE D. Use of methodological diversity to improve neural network generalization[J]. Neural Computing and Applications, 1996, 4(2): 114-128.
[18] ROLI F, GIACINTO G, VERNAZZA G. Methods for designing multiple classifier systems[C]∥Multiple Classifier Systems (MCS 2001), Lecture Notes in Computer Science. Berlin, Heidelberg: Springer-Verlag, 2001: 78-87.
[19] GIACINTO G, ROLI F. Design of effective neural network ensembles for image classification purposes[J]. Image and Vision Computing, 2001, 19: 699-707.
[20] BAKKER B, HESKES T. Clustering ensembles of neural network models[J]. Neural Networks, 2003, 16: 261-269.
[21] LI Kai, HUANG Houkuan, YE Xiuchen, et al. A selective approach to neural network ensemble based on clustering technology[C]∥Proceedings of the 3rd International Conference on Machine Learning and Cybernetics. Shanghai: IEEE, 2004: 3229-3233.
[22] LI Kai, HUANG Houkuan. A selective approach to neural network ensemble based on clustering technology[J]. Journal of Computer Research and Development, 2005, 42(4): 594-598.
[23] FU Qiang, HU Shangxu, ZHAO Shengying. PSO-based approach for neural network ensembles[J]. Journal of Zhejiang University: Engineering Science, 2004, 38(12): 1596-1600.
[24] WU Jianxin, ZHOU Zhihua, CHEN Zhaoqian. Ensemble of GA-based selective neural network ensembles[C]∥Proceedings of the 8th International Conference on Neural Information Processing (ICONIP'01). Shanghai: [s. n.], 2001: 1477-1482.
[25] DORIGO M, CARO G D, GAMBARDELLA L M. Ant algorithms for discrete optimization[J]. Artificial Life, 1999, 5(3): 137-172.
[26] DORIGO M, GAMBARDELLA L M, MIDDENDORF M, et al. Special section on ant colony optimization[J]. IEEE Transactions on Evolutionary Computation, 2002, 6(4): 317-319.
[27] BREIMAN L. Bagging predictors[J]. Machine Learning, 1996, 24(2): 123-140.
[28] HANSEN J V. Combining predictors: meta machine learning methods and bias/variance and ambiguity decomposition[D]. Aarhus, Denmark: University of Aarhus, 2000.
[29] DIETTERICH T G. Approximate statistical tests for comparing supervised classification learning algorithms[J]. Neural Computation, 1998, 10(7): 1895-1923.
