Journal of Zhejiang University (Engineering Science), 2011, Vol. 45, Issue 12: 2247-2251    DOI: 10.3785/j.issn.1008-973X.2011.12.027
Computer Technology; Telecommunication Technology
A feature selection approach suitable for data stream classification
ZHANG Yu-hong (张玉红), HU Xue-gang (胡学钢), YANG Qiu-jie (杨秋洁)
School of Computer and Information, Hefei University of Technology, Hefei 230009, China
Abstract:

The problems of high dimensionality, redundant features, and noise occur frequently, and often simultaneously, in data streams, lengthening training time and degrading classification accuracy. To address this, a fast and effective feature selection approach for data stream classification, FS-IV (feature selection based on information value), was proposed. FS-IV introduces the statistical index information value (IV) to measure the importance of each feature and selects the features whose IV exceeds an empirical threshold, thereby overcoming the high time and space cost, the weak discriminability, and the difficulty of application to data streams that limit classical feature selection approaches. Experimental results show that FS-IV has low time overhead and good noise tolerance, and that, combined with existing data stream classification models, it markedly improves time and space performance while keeping classification accuracy comparable.
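The abstract describes FS-IV only at a high level and the page gives no code. As a minimal sketch of the underlying idea, the Python fragment below (assuming binary class labels and already-discretized features) computes an information-value score for each feature and keeps the features whose IV exceeds an empirical threshold; the function names, the smoothing constant, and the 0.02 threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def information_value(feature, label, smooth=0.5):
    """IV of one discretized feature with respect to a binary label.

    IV = sum over bins i of (p_i - q_i) * ln(p_i / q_i), where p_i (q_i) is
    the fraction of positive (negative) examples that fall into bin i.
    A small additive smoothing keeps empty bins from producing log(0).
    """
    feature = np.asarray(feature)
    label = np.asarray(label)
    pos_total = float((label == 1).sum())
    neg_total = float((label == 0).sum())
    iv = 0.0
    for v in np.unique(feature):
        in_bin = feature == v
        pos = float((in_bin & (label == 1)).sum()) + smooth
        neg = float((in_bin & (label == 0)).sum()) + smooth
        p = pos / (pos_total + smooth)
        q = neg / (neg_total + smooth)
        iv += (p - q) * np.log(p / q)
    return iv

def select_features(X, y, threshold=0.02):
    """Indices of the columns of X whose IV exceeds an empirical threshold."""
    X = np.asarray(X)
    return [j for j in range(X.shape[1])
            if information_value(X[:, j], y) > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)
    informative = y ^ (rng.random(1000) < 0.2).astype(int)  # mostly follows the label
    noise = rng.integers(0, 5, size=1000)                    # independent of the label
    X = np.column_stack([informative, noise])
    print(select_features(X, y))  # typically [0]: only the informative column passes
```

In a streaming setting a score of this kind would presumably be recomputed on each incoming block or sliding window of examples, so that the selected feature subset can track the current data before it is passed to the stream classifiers mentioned in the abstract.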

Publication date: 2011-12-01
CLC number: TP18
Funding:

Supported by the National Basic Research Program (973 Program) of China (2009CB326203), the National Natural Science Foundation of China (60975034), and the Natural Science Foundation of Anhui Province (090412044).

Corresponding author: HU Xue-gang, professor, doctoral supervisor. E-mail: jsjxhuxg@hfut.edu.cn
About the first author: ZHANG Yu-hong (b. 1979), female, Ph.D. candidate; her research interests include data mining and artificial intelligence. E-mail: yuhong.hfut@gmail.com

Cite this article:

ZHANG Yu-hong, HU Xue-gang, YANG Qiu-jie. A feature selection approach suitable for data stream classification[J]. Journal of Zhejiang University (Engineering Science), 2011, 45(12): 2247-2251.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2011.12.027
https://www.zjujournals.com/eng/CN/Y2011/V45/I12/2247

