ConBGAT: a novel model combining convolutional neural networks, Transformers, and graph attention networks for information extraction from scanned images

PeerJ Comput Sci. 2024 Nov 28:10:e2536. doi: 10.7717/peerj-cs.2536. eCollection 2024.

Abstract

Extracting information from scanned images is a critical task with far-reaching practical implications. Traditional methods often fall short by failing to leverage image and text features jointly, leading to less accurate and less efficient outcomes. In this study, we introduce ConBGAT, a model that integrates convolutional neural networks (CNNs), Transformers, and graph attention networks (GATs) to address these shortcomings. Our approach constructs detailed graphs from text regions within images, using optical character recognition (OCR) to detect and interpret characters accurately. By combining image features extracted by CNNs with text features from Distilled Bidirectional Encoder Representations from Transformers (DistilBERT), our model achieves a comprehensive and efficient data representation. Rigorous testing on real-world datasets shows that ConBGAT significantly outperforms existing methods across multiple evaluation metrics. This advancement not only improves accuracy but also sets a new benchmark for information extraction from scanned images.
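To make the fusion scheme in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: each OCR-detected text region becomes a graph node whose features concatenate a small CNN embedding of the region's image crop with a DistilBERT embedding of its text, and GAT layers then classify each node. The class name ConBGATSketch, the crop size, the layer widths, and the assumption that edges link spatially neighboring regions are all hypothetical choices; the sketch assumes PyTorch, PyTorch Geometric's GATConv, and Hugging Face's DistilBertModel.

```python
# Illustrative ConBGAT-style fusion (hypothetical; not the paper's exact code).
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv          # graph attention layer
from transformers import DistilBertModel        # pretrained text encoder

class ConBGATSketch(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        # Small CNN over fixed-size crops of each text region (assumed 3x32x128).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 64)
        )
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
        fused = 64 + self.bert.config.hidden_size    # CNN + DistilBERT features
        self.gat1 = GATConv(fused, hidden, heads=4, concat=True)
        self.gat2 = GATConv(hidden * 4, num_classes, heads=1, concat=False)

    def forward(self, crops, input_ids, attention_mask, edge_index):
        # crops: (N, 3, 32, 128) image patches, one per OCR-detected region.
        img_feat = self.cnn(crops)                                   # (N, 64)
        # First-token ([CLS]-position) embedding of each region's OCR text.
        txt_feat = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask
                             ).last_hidden_state[:, 0]               # (N, 768)
        x = torch.cat([img_feat, txt_feat], dim=-1)   # fused node features
        # edge_index: (2, E) graph over regions, e.g., spatial nearest neighbors.
        x = torch.relu(self.gat1(x, edge_index))
        return self.gat2(x, edge_index)               # per-node class logits
```

In such a setup, edge_index would typically be built from the OCR bounding-box geometry (for example, connecting each region to its nearest neighbors), so that the attention layers can weight contextual regions when labeling each node.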

Keywords: BERT; CNN; Deep learning; GAT; Information extraction; Scanned images.

Grants and funding

The authors received no funding for this work.