Graph neural network interatomic potentials (GNN-IPs) are attracting significant attention due to their ability to learn from large datasets. In particular, universal interatomic potentials based on GNNs, typically trained on crystalline geometries, often exhibit remarkable extrapolation to untrained domains such as surfaces and amorphous configurations. However, the origin of this extrapolation capability is not well understood. This work provides a theoretical explanation of how GNN-IPs extrapolate to untrained geometries. First, we demonstrate that GNN-IPs can capture non-local electrostatic interactions through the message-passing algorithm, as evidenced by tests on toy models and density-functional theory data. We find that the GNN-IP models SevenNet and MACE accurately predict electrostatic forces in untrained domains, indicating that they have learned the exact functional form of the Coulomb interaction. Based on these results, we suggest that the ability to learn non-local electrostatic interactions, coupled with the embedding nature of GNN-IPs, underlies their extrapolation capability. Consistent with this picture, we find that the universal GNN-IP SevenNet-0 effectively infers non-local Coulomb interactions in untrained domains but fails to extrapolate the non-local forces arising from the kinetic term. Finally, we examine how hyperparameters affect the extrapolation performance of universal potentials such as SevenNet-0 and MACE-MP-0, and discuss the limits of their extrapolation capabilities.
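As an illustrative sketch of the non-locality at issue (the symbols $L$, $r_c$, $q_i$, and $r_{ij}$ are generic here, not values taken from this work): a message-passing network with $L$ layers and cutoff radius $r_c$ has an effective receptive field of $L \, r_c$, so stacked layers can in principle aggregate pairwise contributions of the Coulomb form

\[
E_{\mathrm{Coul}} = \frac{1}{2} \sum_{i \neq j} \frac{q_i q_j}{r_{ij}}, \qquad
\mathbf{F}_i = -\nabla_{\mathbf{r}_i} E_{\mathrm{Coul}} = \sum_{j \neq i} \frac{q_i q_j}{r_{ij}^{2}} \, \hat{\mathbf{r}}_{ij},
\]

whose slow $1/r$ decay extends well beyond a single cutoff sphere.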