Distributed nonconvex optimization subject to globally coupled constraints via collaborative neurodynamic optimization

Neural Netw. 2024 Dec 16:184:107027. doi: 10.1016/j.neunet.2024.107027. Online ahead of print.

Abstract

In this paper, a recurrent neural network is proposed for distributed nonconvex optimization subject to globally coupled (in)equality constraints and local bound constraints. Two distributed optimization models, namely a resource allocation problem and a consensus-constrained optimization problem, are established, in which the objective functions are not necessarily convex or the constraints do not define a convex feasible set. To handle the nonconvexity, an augmented Lagrangian function is designed, based on which a recurrent neural network is developed to solve the optimization models in a distributed manner, and its convergence to a locally optimal solution is proven. To search for globally optimal solutions, a collaborative neurodynamic optimization method is established that employs multiple instances of the proposed recurrent neural network together with a meta-heuristic rule. A numerical example, a simulation of an electricity market, and a distributed cooperative control problem are provided to verify and demonstrate the characteristics of the main results.
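The collaborative scheme summarized above can be illustrated with a generic sketch: several local solvers (stand-ins for the recurrent neural networks) each settle at a local minimum of a nonconvex objective, and a meta-heuristic rule reinitializes them around the best solution found so far. The objective `f`, the discretized gradient flow, and the Gaussian reinitialization rule below are illustrative assumptions for a minimal unconstrained example, not the paper's actual model or constraint handling.

```python
import math
import random

def f(x):
    # Illustrative nonconvex objective with two local minima (not from the paper)
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def gradient_flow(x0, lr=0.01, steps=2000):
    """Stand-in for one recurrent neural network: a discretized gradient
    flow that settles at a nearby local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def collaborative_search(n_solvers=5, rounds=10, seed=0):
    """Sketch of collaborative neurodynamic optimization: in each round,
    every solver converges to a local minimum; a meta-heuristic rule then
    reinitializes all solvers near the best solution found so far, with
    random perturbations to keep exploring other basins."""
    rng = random.Random(seed)
    states = [rng.uniform(-3, 3) for _ in range(n_solvers)]
    best_x, best_val = None, math.inf
    for _ in range(rounds):
        minima = [gradient_flow(x) for x in states]
        for x in minima:
            if f(x) < best_val:
                best_x, best_val = x, f(x)
        # Meta-heuristic reinitialization around the incumbent best
        states = [best_x + rng.gauss(0, 0.5) for _ in minima]
    return best_x, best_val

best_x, best_val = collaborative_search()
```

With a handful of random initial states, at least one solver typically lands in the basin of the global minimum; the reinitialization rule then concentrates subsequent searches around it while retaining some exploration.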

Keywords: Augmented Lagrangian function; Collaborative neurodynamic optimization; Distributed nonconvex optimization; Recurrent neural network.