Multimodal sentiment analysis based on multi-head self-attention mechanism and MLP-Interactor
Yishan LIN, Jing ZUO, Shuhua LU
Tab. 2 Comparison of performance with other benchmark models on the CMU-MOSI dataset
| Model | MAE↓ | Corr↑ | Acc-2 (%)↑ | Acc-7 (%)↑ | F1 (%)↑ |
|---|---|---|---|---|---|
| TFN[40] (2017) | 0.901 | 0.698 | — / 80.8 | 34.9 | — / 80.7 |
| LMF[41] (2018) | 0.917 | 0.695 | — / 82.5 | 33.2 | — / 82.4 |
| MFN[42] (2018) | 0.965 | 0.632 | 77.4 / — | 34.1 | 77.3 / — |
| MulT[20] (2019) | 0.871 | 0.698 | — / 83.0 | 40.0 | — / 82.8 |
| BBFN[11] (2021) | 0.776 | 0.755 | — / 84.3 | 45.0 | — / 84.3 |
| Self-MM[24] (2021) | 0.713 | 0.798 | 84.0 / 85.98 | — | 84.42 / 85.95 |
| MISA[7] (2020) | 0.783 | 0.761 | 81.8 / 83.4 | 42.3 | 81.7 / 83.6 |
| MAG-BERT[43] (2020) | 0.731 | 0.798 | 82.5 / 84.3 | — | 82.6 / 84.3 |
| CubeMLP[31] (2022) | 0.770 | 0.767 | — / 85.6 | 45.5 | — / 85.5 |
| PS-Mixer[30] (2023) | 0.794 | 0.748 | 80.3 / 82.1 | 44.31 | 80.3 / 82.1 |
| MTSA[44] (2022) | 0.696 | 0.806 | — / 86.8 | 46.4 | — / 86.8 |
| AOBERT[10] (2023) | 0.856 | 0.700 | 85.2 / 85.6 | 40.2 | 85.4 / 86.4 |
| TETFN[25] (2023) | 0.717 | 0.800 | 84.05 / 86.10 | — | 83.83 / 86.07 |
| TMRN[45] (2023) | 0.704 | 0.784 | 83.67 / 85.67 | 48.68 | 83.45 / 85.52 |
| MTAMW[46] (2024) | 0.712 | 0.794 | 84.40 / 86.59 | 46.84 | 84.20 / 86.46 |
| MIBSA[47] (2024) | 0.728 | 0.798 | — / 87.00 | 43.10 | — / 87.20 |
| FRDIN[48] (2024) | 0.682 | 0.813 | 85.8 / 87.4 | 46.59 | 85.3 / 87.5 |
| CRNet[49] (2024) | 0.712 | 0.797 | — / 86.4 | 47.40 | — / 86.4 |
| Ours | 0.575 | 0.868 | 87.6 / 89.6 | 52.23 | 87.7 / 89.6 |

Note: for Acc-2 and F1, the value left of "/" is computed under the negative/non-negative split and the value right of "/" under the negative/positive split; "—" indicates the metric was not reported in the original work.
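The metrics in Tab. 2 follow the standard CMU-MOSI evaluation protocol. As a minimal sketch (assumed metric definitions written with NumPy, not the paper's actual evaluation code), the five columns can be computed from continuous sentiment predictions as follows:

```python
import numpy as np

def mosi_metrics(preds, labels):
    """Compute MAE, Corr, Acc-7, Acc-2, and F1 for CMU-MOSI-style
    regression outputs in [-3, 3] (assumed conventions)."""
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)

    mae = np.mean(np.abs(preds - labels))          # mean absolute error
    corr = np.corrcoef(preds, labels)[0, 1]        # Pearson correlation

    # Acc-7: round to the 7 integer sentiment classes in [-3, 3].
    acc7 = np.mean(np.clip(np.round(preds), -3, 3)
                   == np.clip(np.round(labels), -3, 3))

    # Acc-2 / F1 (negative/positive variant): drop neutral samples
    # (label == 0), then compare the signs of prediction and label.
    nz = labels != 0
    p_bin = preds[nz] > 0
    l_bin = labels[nz] > 0
    acc2 = np.mean(p_bin == l_bin)

    tp = np.sum(p_bin & l_bin)                     # true positives
    prec = tp / max(p_bin.sum(), 1)
    rec = tp / max(l_bin.sum(), 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-8)

    return dict(MAE=mae, Corr=corr, Acc7=acc7, Acc2=acc2, F1=f1)

print(mosi_metrics([1.2, -0.4, 2.6, -1.1], [1.0, -1.0, 3.0, -2.0]))
```

The negative/non-negative variant of Acc-2 and F1 differs only in keeping neutral samples and counting `label >= 0` as the positive class.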