MFC-ACL: Multi-view fusion clustering with attentive contrastive learning

Neural Netw. 2024 Dec 20:184:107055. doi: 10.1016/j.neunet.2024.107055. Online ahead of print.

Abstract

Multi-view clustering can better handle high-dimensional data by combining information from multiple views, which is important in big data mining. However, existing models that simply perform feature fusion after extracting features from individual views mostly fail to capture the holistic attribute information of multi-view data, because they ignore the significant disparities among views, which seriously degrades clustering performance. In this paper, inspired by the attention mechanism, an approach called Multi-View Fusion Clustering with Attentive Contrastive Learning (MFC-ACL) is proposed to tackle these issues. First, an Att-AE module, which augments autoencoders (AEs) with attention networks, is constructed to effectively extract view features with global information. To obtain consistent features of multi-view data from various perspectives, a Transformer Feature Fusion Contrastive (TFFC) module is introduced to fuse and learn the extracted low-dimensional features in a contrastive manner. Finally, optimized clustering results are derived by clustering the resulting high-level features, which share consistency information. Extensive experimental results on six benchmark datasets indicate that the proposed approach achieves better clustering performance than state-of-the-art methods.

Keywords: Attention network; Feature fusion; Multi-view clustering.
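
To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the three stages: a per-view attention-augmented autoencoder (in the spirit of Att-AE), a transformer-based fusion module (in the spirit of TFFC) trained with a standard NT-Xent-style contrastive loss between view embeddings and the fused feature, followed by reconstruction losses. All module names, dimensions, and the specific contrastive objective are illustrative assumptions; the paper's actual architecture and losses are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttAE(nn.Module):
    """Per-view autoencoder whose bottleneck is refined by self-attention (hypothetical Att-AE sketch)."""
    def __init__(self, in_dim, z_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.attn = nn.MultiheadAttention(z_dim, num_heads=4, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)                      # (B, z_dim) low-dimensional view feature
        z_seq = z.unsqueeze(1)                   # treat each sample as a length-1 sequence
        z_att, _ = self.attn(z_seq, z_seq, z_seq)
        z = z + z_att.squeeze(1)                 # residual attention refinement
        return z, self.decoder(z)

class TFFC(nn.Module):
    """Transformer fusion over the stack of view embeddings (hypothetical TFFC sketch)."""
    def __init__(self, z_dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=z_dim, nhead=4, batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, view_zs):                  # list of (B, z_dim) tensors, one per view
        stack = torch.stack(view_zs, dim=1)      # (B, V, z_dim)
        return self.fuse(stack).mean(dim=1)      # (B, z_dim) fused consensus representation

def contrastive_loss(z_v, fused, tau=0.5):
    """NT-Xent-style loss pulling each view embedding toward the fused feature of the same sample."""
    z_v = F.normalize(z_v, dim=1)
    fused = F.normalize(fused, dim=1)
    logits = z_v @ fused.t() / tau               # (B, B) similarity matrix
    targets = torch.arange(z_v.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with two random views; a real run would train these losses jointly and
# finish with k-means on the fused features to obtain the cluster assignments.
views = [torch.randn(8, 100), torch.randn(8, 80)]
aes = [AttAE(100), AttAE(80)]
tffc = TFFC()
zs, recs = zip(*[ae(x) for ae, x in zip(aes, views)])
fused = tffc(list(zs))
loss = sum(contrastive_loss(z, fused) for z in zs) \
     + sum(F.mse_loss(r, x) for r, x in zip(recs, views))
print(loss.item())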