Multimodal sentiment analysis based on multi-head self-attention mechanism and MLP-Interactor

Yishan LIN, Jing ZUO, Shuhua LU
Tab. 2 Comparison of performance with other benchmark models on the CMU-MOSI dataset
| Model | MAE | Corr | Acc-2 | Acc-7 | F1 |
| --- | --- | --- | --- | --- | --- |
| TFN[40] (2017) | 0.901 | 0.698 | —/80.8 | 34.9 | —/80.7 |
| LMF[41] (2018) | 0.917 | 0.695 | —/82.5 | 33.2 | —/82.4 |
| MFN[42] (2018) | 0.965 | 0.632 | 77.4/— | 34.1 | 77.3/— |
| MulT[20] (2019) | 0.871 | 0.698 | —/83.0 | 40.0 | —/82.8 |
| BBFN[11] (2021) | 0.776 | 0.755 | —/84.3 | 45.0 | —/84.3 |
| Self-MM[24] (2021) | 0.713 | 0.798 | 84.0/85.98 | — | 84.42/85.95 |
| MISA[7] (2020) | 0.783 | 0.761 | 81.8/83.4 | 42.3 | 81.7/83.6 |
| MAG-BERT[43] (2020) | 0.731 | 0.798 | 82.5/84.3 | — | 82.6/84.3 |
| CubeMLP[31] (2022) | 0.770 | 0.767 | —/85.6 | 45.5 | —/85.5 |
| PS-Mixer[30] (2023) | 0.794 | 0.748 | 80.3/82.1 | 44.31 | 80.3/82.1 |
| MTSA[44] (2022) | 0.696 | 0.806 | —/86.8 | 46.4 | —/86.8 |
| AOBERT[10] (2023) | 0.856 | 0.700 | 85.2/85.6 | 40.2 | 85.4/86.4 |
| TETFN[25] (2023) | 0.717 | 0.800 | 84.05/86.10 | — | 83.83/86.07 |
| TMRN[45] (2023) | 0.704 | 0.784 | 83.67/85.67 | 48.68 | 83.45/85.52 |
| MTAMW[46] (2024) | 0.712 | 0.794 | 84.40/86.59 | 46.84 | 84.20/86.46 |
| MIBSA[47] (2024) | 0.728 | 0.798 | —/87.00 | 43.10 | —/87.20 |
| FRDIN[48] (2024) | 0.682 | 0.813 | 85.8/87.4 | 46.59 | 85.3/87.5 |
| CRNet[49] (2024) | 0.712 | 0.797 | —/86.4 | 47.40 | —/86.4 |
| Ours | 0.575 | 0.868 | 87.6/89.6 | 52.23 | 87.7/89.6 |
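For readers unfamiliar with the metrics in Tab. 2, the sketch below shows how they are commonly computed for CMU-MOSI-style regression outputs in [-3, 3]. The conventions assumed here (Acc-7 by rounding and clipping to integer classes, Acc-2 as a positive/negative split over non-zero labels, positive-class F1) follow common practice in the literature and are assumptions, not definitions taken from this paper; many papers report weighted F1 instead.

```python
import numpy as np

def mosi_metrics(preds, labels):
    """Compute MAE, Corr, Acc-2, Acc-7, and F1 for CMU-MOSI-style
    sentiment regression outputs in [-3, 3] (assumed conventions)."""
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)

    mae = np.mean(np.abs(preds - labels))          # mean absolute error
    corr = np.corrcoef(preds, labels)[0, 1]        # Pearson correlation

    # Acc-7: round to the nearest integer class and clip to [-3, 3].
    p7 = np.clip(np.round(preds), -3, 3)
    l7 = np.clip(np.round(labels), -3, 3)
    acc7 = np.mean(p7 == l7)

    # Acc-2 / F1: binary positive vs. negative over non-zero labels.
    nz = labels != 0
    p2 = preds[nz] > 0
    l2 = labels[nz] > 0
    acc2 = np.mean(p2 == l2)
    tp = np.sum(p2 & l2)
    prec = tp / max(np.sum(p2), 1)
    rec = tp / max(np.sum(l2), 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-8)    # positive-class F1

    return {"MAE": mae, "Corr": corr, "Acc-2": acc2, "Acc-7": acc7, "F1": f1}
```

The paired values such as 87.6/89.6 in Tab. 2 correspond to the two Acc-2/F1 evaluation conventions (negative vs. non-negative, and negative vs. positive); the sketch implements only the latter for brevity.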