
Multimodal sentiment analysis based on multi-head self-attention mechanism and MLP-Interactor
Yishan LIN, Jing ZUO, Shuhua LU
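As a rough illustration of the architecture named in the title, the sketch below pairs standard multi-head self-attention over per-modality features with a hypothetical MLP-Interactor that mixes information along the modality axis, in the spirit of the MLP-based baselines (CubeMLP, PS-Mixer) compared in Tab. 3. The class names, dimensions, and the Interactor's internal design are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLPInteractor(nn.Module):
    """Hypothetical cross-modal interactor: a token-mixing MLP applied
    along the modality axis (an assumption; the paper's exact design
    may differ)."""
    def __init__(self, num_modalities: int, dim: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mix = nn.Sequential(          # mixes features across modalities
            nn.Linear(num_modalities, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_modalities),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_modalities, dim)
        y = self.norm(x).transpose(1, 2)   # (batch, dim, num_modalities)
        y = self.mix(y).transpose(1, 2)    # mix along the modality axis
        return x + y                       # residual connection

class FusionBlock(nn.Module):
    """Multi-head self-attention over per-modality embeddings, followed
    by the (assumed) MLP-Interactor."""
    def __init__(self, dim: int = 256, heads: int = 8, num_modalities: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.interactor = MLPInteractor(num_modalities, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_modalities, dim) -- text/audio/visual embeddings
        h = self.norm(x)
        a, _ = self.attn(h, h, h)          # self-attention across modalities
        return self.interactor(x + a)      # residual, then cross-modal mixing

# Usage: fuse utterance-level text/audio/visual features.
feats = torch.randn(4, 3, 256)             # (batch, modalities, dim)
fused = FusionBlock()(feats)                # -> (4, 3, 256)
```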
Tab. 3 Comparison of performance with other benchmark models on the CMU-MOSEI dataset
| Model | MAE↓ | Corr↑ | Acc-2 (%)↑ | Acc-7 (%)↑ | F1 (%)↑ |
| --- | --- | --- | --- | --- | --- |
| TFN[40] (2017) | 0.593 | 0.700 | —/82.5 | 50.2 | —/82.1 |
| LMF[41] (2018) | 0.623 | 0.677 | —/82.0 | 48.0 | —/82.1 |
| MulT[20] (2019) | 0.580 | 0.703 | —/82.5 | 51.8 | —/82.3 |
| BBFN[11] (2021) | 0.529 | 0.767 | —/86.2 | 54.8 | —/86.1 |
| Self-MM[24] (2021) | 0.530 | 0.765 | 82.81/85.17 | — | 82.53/85.30 |
| MISA[7] (2020) | 0.555 | 0.756 | 83.6/85.5 | 52.2 | 83.8/85.3 |
| MAG-BERT[43] (2020) | 0.543 | 0.755 | 82.51/84.82 | — | 82.77/84.71 |
| CubeMLP[31] (2022) | 0.529 | 0.760 | —/85.1 | 54.9 | —/84.5 |
| PS-Mixer[30] (2023) | 0.537 | 0.765 | 83.1/86.1 | 53.0 | 83.1/86.1 |
| MTSA[44] (2022) | 0.541 | 0.774 | —/85.5 | 52.9 | —/85.3 |
| AOBERT[10] (2023) | 0.515 | 0.763 | 84.9/86.2 | 54.5 | 85.0/85.9 |
| TETFN[25] (2023) | 0.551 | 0.748 | 84.25/85.18 | — | 84.18/85.27 |
| TMRN[45] (2023) | 0.535 | 0.762 | 83.39/86.19 | 53.65 | 83.67/86.08 |
| MTAMW[46] (2024) | 0.525 | 0.782 | 83.09/86.49 | 53.73 | 83.48/86.45 |
| MIBSA[47] (2024) | 0.568 | 0.753 | —/86.70 | 52.40 | —/85.80 |
| FRDIN[48] (2024) | 0.525 | 0.778 | 83.30/86.30 | 54.40 | 83.70/86.20 |
| CRNet[49] (2024) | 0.541 | 0.771 | —/86.20 | 53.80 | —/86.10 |
| Ours | 0.512 | 0.794 | 83.0/86.8 | 54.5 | 82.5/86.8 |
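In Tab. 3, paired entries such as "83.0/86.8" in the Acc-2 and F1 columns follow the common two-protocol convention for binary evaluation on CMU-MOSEI: the left value treats zero-labelled samples as non-negative (negative/non-negative), while the right value excludes them and classifies negative vs. positive. A minimal sketch of how these metrics are typically computed from real-valued sentiment predictions in [-3, 3] is given below; the function name `mosei_metrics` is illustrative, and the exact evaluation scripts used by the individual baselines may differ in detail.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

def mosei_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard CMU-MOSEI regression metrics; labels lie in [-3, 3]."""
    mae = np.mean(np.abs(y_true - y_pred))
    corr = pearsonr(y_true, y_pred)[0]
    # Acc-7: clip and round predictions onto the 7 integer classes {-3,...,3}.
    acc7 = np.mean(np.clip(np.round(y_pred), -3, 3) ==
                   np.clip(np.round(y_true), -3, 3))
    # Acc-2 / F1, protocol 1 (negative vs non-negative): zero is non-negative.
    acc2_nn = np.mean((y_pred >= 0) == (y_true >= 0))
    f1_nn = f1_score(y_true >= 0, y_pred >= 0)
    # Acc-2 / F1, protocol 2 (negative vs positive): zero labels excluded.
    nz = y_true != 0
    acc2_np = np.mean((y_pred[nz] > 0) == (y_true[nz] > 0))
    f1_np = f1_score(y_true[nz] > 0, y_pred[nz] > 0)
    return {"MAE": mae, "Corr": corr, "Acc-7": acc7,
            "Acc-2": (acc2_nn, acc2_np), "F1": (f1_nn, f1_np)}
```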