Invariants for neural automata

Cogn Neurodyn. 2024 Dec;18(6):3291-3307. doi: 10.1007/s11571-023-09977-5. Epub 2023 May 31.

Abstract

Computational modeling of neurodynamical systems often deploys neural networks and symbolic dynamics. One particular way of combining these approaches, within a framework called vector symbolic architectures, leads to neural automata. Specifically, neural automata result from the assignment of symbols and symbol strings to numbers, known as Gödel encoding. Under this assignment, symbolic computation becomes represented by trajectories of state vectors in a real phase space, which allows for statistical correlation analyses with real-world measurements and experimental data. However, these assignments are usually completely arbitrary. Hence, it makes sense to ask which aspects of the dynamics observed under a Gödel representation are intrinsic to the dynamics and which are not. In this study, we develop a formally rigorous mathematical framework for investigating symmetries and invariants of neural automata under different encodings. As a central concept, we define patterns of equality for such systems. We consider different macroscopic observables, such as the mean activation level of the neural network, and ask for their invariance properties. Our main result shows that only step functions defined over such patterns of equality are invariant under symbolic recodings, whereas the mean activation, for example, is not. Our work could be of substantial importance for related regression studies of real-world measurements with neurosymbolic processors, as it helps to avoid confounding results that depend on a particular encoding rather than being intrinsic to the dynamics.

Keywords: Computational cognitive neurodynamics; Invariants; Language processing; Neural automata; Observables; Symbolic dynamics.
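The encoding-dependence described in the abstract can be illustrated with a minimal sketch. The snippet below assumes a simple base-b Gödel encoding (a symbol string over an alphabet of size b is mapped to a real number in [0, 1) via a base-b expansion); the function names and the particular scheme are illustrative, not the paper's exact construction. Two arbitrary symbol orderings generally yield different numerical observables for the same string, while the pattern of equality (which string positions carry equal symbols) is preserved under any recoding:

```python
def goedel_encode(string, alphabet):
    """Map a symbol string to a real number via a base-b expansion,
    where b is the alphabet size (illustrative Goedel encoding)."""
    b = len(alphabet)
    return sum(alphabet.index(s) * b ** -(k + 1) for k, s in enumerate(string))

def equality_pattern(string):
    """Set of index pairs (i, j) whose symbols are equal.
    This observable is invariant under any recoding of the alphabet."""
    n = len(string)
    return {(i, j) for i in range(n) for j in range(n) if string[i] == string[j]}

# The same string under two arbitrary symbol orderings:
x1 = goedel_encode("abba", ["a", "b"])  # a -> 0, b -> 1  gives 0.375
x2 = goedel_encode("abba", ["b", "a"])  # b -> 0, a -> 1  gives 0.5625
print(x1, x2)  # the numerical value depends on the (arbitrary) encoding

# Recoding "abba" by swapping a <-> b yields "baab"; the equality
# pattern is unchanged, which is why observables defined over such
# patterns are encoding-invariant while, e.g., mean activation is not.
print(equality_pattern("abba") == equality_pattern("baab"))
```

This mirrors the main result of the abstract: only observables that factor through patterns of equality survive symbolic recoding.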