Physiological signals based emotional state recognition model of audience

YE Xiao-han 1,2, CHEN Ling 1, JIANG Xian-ta 1, CHEN Gen-cai 1

1. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China; 2. Information Technology and Applications Department, Zhejiang Yuying College of Vocational Technology, Hangzhou 310018, China
Abstract: To study the relationship between movie plot and the physiological signals of the audience, a physiological-signal-based emotional state recognition model for movie audiences was proposed. Features were extracted from the physiological signals, the sequential forward selection (SFS) method was used to select features, and the recognition model was built on a support vector machine (SVM). Experiments were conducted to evaluate the performance of the model using three movies of different genres. While 11 participants watched the movies, their facial expressions and physiological signals were recorded, and the participants' emotional states were labeled manually from the facial expressions. The experimental results show that the proposed model distinguishes well among the emotional states, with an average recognition rate above 90%.
Published: 24 July 2012
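The pipeline described in the abstract (features extracted from physiological signals, sequential forward selection, SVM classification) can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn's SequentialFeatureSelector; the feature count, number of selected features, and SVM parameters are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of an SFS + SVM emotion-recognition pipeline (illustrative only).
# The data below is synthetic; in the paper, rows would be feature vectors
# extracted from each participant's physiological signals, and labels would
# come from human judgment of facial expressions.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 20))    # 110 signal windows x 20 candidate features (assumed sizes)
y = rng.integers(0, 3, size=110)  # 3 emotional-state labels (synthetic)

svm = SVC(kernel="rbf", C=1.0)
# Sequential forward selection: greedily add features that improve CV accuracy.
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=5)
model = make_pipeline(StandardScaler(), sfs, svm)

scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 3))
```

With real physiological features and human-labeled states, `scores` would estimate the recognition rate; on the random data above it stays near chance level.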
[1] XU M, CHIA L, JIN J. Affective content analysis in comedy and horror videos by audio emotional event detection [C] ∥ Proceedings of International Conference on Multimedia and Expo. Amsterdam:[s.n.], 2005: 622-625.
[2] HANJALIC A, XU L. Affective video content representation and modeling [J]. IEEE Transactions on Multimedia, 2005, 7(1): 143-154.
[3] SUN Kai, YU Junqing. Personalized video affective content representation and recognition of audience [J]. Journal of Computer-Aided Design & Computer Graphics, 2010, 22(1): 136-144.
[4] SOLEYMANI M, CHANEL G, KIERKELS J J M, et al. Affective characterization of movie scenes based on multimedia content analysis and user's physiological emotional responses [C] ∥ IEEE International Symposium on Multimedia, 2008: 228-235.
[5] MONEY A G, AGIUS H. Analysing user physiological responses for affective video summarisation[J]. Displays, 2009, 30(2): 59-70.
[6] EKMAN P. Emotion in the Human Face[M]. Cambridge: Cambridge University Press, 1982.
[7] KIM J, ANDRE E. Emotion recognition based on physiological changes in music listening[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(12): 2067-2083.
[8] SUN Jie, CHEN Ling, RUAN Shengsheng, et al. Heart rate based physiological states model for body-controlled games [J]. Journal of Zhejiang University: Engineering Science, 2011, 45(2): 295-300.
[9] Thought Technology Ltd. Biofeedback equipment [EB/OL]. [2011-01-20]. http://www.thoughttechnology.com.
[10] CHEN L, CHEN G, XU C, et al. EmoPlayer: a media player for video clips with affective annotations[J]. Interacting with Computers, 2008, 20(1): 17-28.
[11] WU Qi, SHEN Xunbing, FU Xiaolan. Research and application of micro-expressions [J]. Advances in Psychological Science, 2010, 18(9): 1359-1368.
[12] VERVERIDIS D, KOTROPOULOS C. Fast and accurate sequential floating forward feature selection with the Bayes classifier applied to speech emotion recognition[J]. Signal Processing, 2008, 88(12): 2956-2970.
[13] VAPNIK V N. The nature of statistical learning theory [M]. New York: Springer-Verlag, 1995.
[14] CHANG C C, LIN C J. LIBSVM: a library for support vector machines [EB/OL]. [2011-01-20]. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[15] RUAN S, CHEN L, SUN J, et al. Study on the change of physiological signals during playing body-controlled games [C] ∥ Proceedings of the International Conference on Advances in Computer Entertainment Technology. New York: [s.n.], 2009: 349-352.