Efficient Computation Reduction in Bayesian Neural Networks Through Feature Decomposition and Memorization

IEEE Trans Neural Netw Learn Syst. 2021 Apr;32(4):1703-1712. doi: 10.1109/TNNLS.2020.2987760. Epub 2021 Apr 2.

Abstract

The Bayesian method is capable of capturing real-world uncertainty/incompleteness and properly addresses the overfitting issue faced by deep neural networks. In recent years, Bayesian neural networks (BNNs) have drawn tremendous attention from artificial intelligence (AI) researchers and have proved successful in many applications. However, their high computational complexity makes BNNs difficult to deploy in computing systems with a limited power budget. In this article, an efficient BNN inference flow is proposed to reduce the computation cost, and it is evaluated using both software and hardware implementations. A feature decomposition and memorization (DM) strategy is utilized to restructure the BNN inference flow into a reduced form. Theoretical analysis and software validation show that about half of the computations can be eliminated compared with the traditional approach. Subsequently, to address hardware resource limitations, a memory-friendly computing framework is deployed to reduce the memory overhead introduced by the DM strategy. Finally, we implement our approach in Verilog and synthesize it with 45-nm FreePDK technology. Hardware simulation results on multilayer BNNs demonstrate that, compared with the traditional BNN inference method, it reduces energy consumption by 73% and delivers a 4x speedup at the expense of 14% area overhead.
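The abstract does not spell out the DM algorithm, but for Gaussian-weight BNNs sampled via the reparameterization w = mu + sigma * eps, one natural reading is the following: the deterministic product mu . x and the element-wise products sigma_ij * x_j depend only on the input feature, so they can be computed once and memorized, leaving only the per-sample multiplications with the noise eps. The NumPy sketch below illustrates that decomposition; the function and variable names are ours, not the paper's, and this is a plausible reconstruction under the stated assumption rather than the authors' implementation.

    import numpy as np

    def bnn_layer_mc(x, mu, sigma, n_samples=10, rng=None):
        """Monte Carlo inference for one Gaussian-weight BNN layer (sketch).

        Traditional flow (per sample s): draw W_s = mu + sigma * eps_s and
        compute W_s @ x, costing roughly 2*|W| multiplications per sample.

        Decomposition-and-memorization reading: since
            W_s @ x = mu @ x + ((sigma * x) * eps_s).sum(axis=1),
        the terms mu @ x and sigma * x depend only on x and can be
        memorized once, leaving about |W| multiplications per sample,
        i.e., roughly half the traditional cost across many samples.
        """
        rng = np.random.default_rng(0) if rng is None else rng
        det_part = mu @ x          # deterministic term, computed once
        scaled = sigma * x         # sigma_ij * x_j, memorized once
        outs = []
        for _ in range(n_samples):
            eps = rng.standard_normal(mu.shape)  # fresh Gaussian noise per sample
            outs.append(det_part + (scaled * eps).sum(axis=1))
        return np.stack(outs)      # shape: (n_samples, out_features)

    # Toy usage: 4 inputs, 3 outputs, 5 Monte Carlo samples.
    x = np.ones(4)
    mu = np.full((3, 4), 0.5)
    sigma = np.full((3, 4), 0.1)
    print(bnn_layer_mc(x, mu, sigma, n_samples=5).shape)  # (5, 3)

The memorized quantities (det_part and scaled) are what the memory-friendly computing framework mentioned in the abstract would need to buffer, which is consistent with the DM strategy trading extra memory for fewer multiplications.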

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Artificial Intelligence
  • Bayes Theorem*
  • Computer Systems
  • Computers
  • Deep Learning
  • Humans
  • Memory
  • Neural Networks, Computer*
  • Software
  • Software Validation