SRGC-Nets: Sparse Repeated Group Convolutional Neural Networks

IEEE Trans Neural Netw Learn Syst. 2020 Aug;31(8):2889-2902. doi: 10.1109/TNNLS.2019.2933665. Epub 2019 Sep 9.

Abstract

Group convolution is widely used in many mobile networks to remove filter redundancy along the channel dimension. To further reduce the redundancy of group convolution, this article proposes a novel repeated group convolutional (RGC) kernel, which has M primary groups, each containing N tiny groups. Within every primary group, the same convolutional kernel is repeated across all N tiny groups. The RGC filter is the first kernel to remove redundancy along the group dimension. Based on RGC, a sparse RGC (SRGC) kernel is also introduced, and its corresponding network is called the SRGC neural network (SRGC-Net). The SRGC kernel is the summation of an RGC kernel and a pointwise group convolutional (PGC) kernel whose number of groups is M. Accordingly, in each primary group, apart from the center locations in all channels, the parameters located in the other N-1 tiny groups are all zero, so SRGC significantly reduces the number of parameters. Moreover, by utilizing both RGC and PGC, it effectively captures spatial and channel-difference features and preserves the richness of the produced features. Comparative experiments were performed on benchmark classification data sets. Compared with traditional popular networks, SRGC-Nets perform better while simultaneously reducing model size and computational complexity. Furthermore, they also outperform other recent state-of-the-art mobile networks on most of the data sets and effectively decrease training and test runtime.
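The following is a minimal sketch, in PyTorch, of how the SRGC kernel described in the abstract could be assembled: an RGC branch (a grouped convolution with M x N groups whose per-tiny-group kernels are shared within each primary group) summed with a pointwise group convolution using M groups. The class and argument names (SRGCConv2d, m_primary, n_tiny) are illustrative assumptions, not the authors' code, and details such as initialization and padding follow common conventions rather than the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SRGCConv2d(nn.Module):
    """Sketch of a sparse repeated group convolution (SRGC) layer:
    RGC branch (weight-shared tiny groups) + pointwise group conv (PGC)."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 m_primary, n_tiny, stride=1, padding=None):
        super().__init__()
        assert in_channels % (m_primary * n_tiny) == 0
        assert out_channels % (m_primary * n_tiny) == 0
        self.m, self.n = m_primary, n_tiny
        self.stride = stride
        # "Same"-style padding (assumes odd kernel_size) so the RGC and PGC
        # branch outputs have identical spatial size and can be summed.
        self.padding = kernel_size // 2 if padding is None else padding
        cin_tiny = in_channels // (m_primary * n_tiny)
        cout_tiny = out_channels // (m_primary * n_tiny)
        # One k x k kernel per primary group; it is repeated (shared) across
        # the N tiny groups inside that primary group -> the RGC part.
        self.rgc_weight = nn.Parameter(
            0.01 * torch.randn(m_primary, cout_tiny, cin_tiny,
                               kernel_size, kernel_size))
        # Pointwise (1x1) group convolution with M groups -> the PGC part.
        self.pgc = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                             groups=m_primary, stride=stride, bias=False)

    def forward(self, x):
        # Repeat each primary-group kernel for its N tiny groups, then run
        # one grouped convolution with M * N groups.
        w = self.rgc_weight.repeat_interleave(self.n, dim=0)
        w = w.reshape(-1, w.shape[2], w.shape[3], w.shape[4])
        rgc_out = F.conv2d(x, w, stride=self.stride, padding=self.padding,
                           groups=self.m * self.n)
        # SRGC kernel = RGC kernel + PGC kernel, implemented here as the sum
        # of the two branch outputs.
        return rgc_out + self.pgc(x)


# Example: 64 -> 64 channels, 3x3 kernel, M = 4 primary groups, N = 2 tiny groups.
layer = SRGCConv2d(64, 64, kernel_size=3, m_primary=4, n_tiny=2)
y = layer(torch.randn(1, 64, 32, 32))  # -> shape (1, 64, 32, 32)
```

In this reading, each primary group stores only one k x k kernel (shared by its N tiny groups) plus the 1x1 PGC weights, which is where the parameter reduction relative to an ordinary group convolution comes from.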

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Computers, Handheld / trends*
  • Databases, Factual / trends*
  • Neural Networks, Computer*