Power-Efficient Deep Neural Networks with Noisy Memristor Implementation

Authors

Elsa Dupraz, IMT Atlantique
Lav Varshney, University of Illinois at Urbana-Champaign
François Leduc-Primeau, Polytechnique Montreal & IMT Atlantique

Abstract

This paper considers Deep Neural Network (DNN) linear-nonlinear computations implemented on memristor crossbar substrates. To address the case where true memristor conductance values may differ from their target values, it introduces a theoretical framework that characterizes the effect of conductance value variations on the final inference computation. With only second-order moment assumptions, theoretical results on tracking the mean, variance, and covariance of the layer-by-layer noisy computations are given. By allowing the possibility of amplifying certain signals within the DNN, power consumption is characterized and then optimized via KKT conditions. Simulation results verify the accuracy of the proposed analysis and demonstrate the significant power efficiency gains that are possible via optimization for a target mean squared error.
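The second-order moment analysis described above can be illustrated with a minimal sketch. Assuming a simple additive zero-mean noise model for the programmed conductances (the paper's actual memristor noise model may differ), the output of one noisy crossbar layer, y = (W + N)x with i.i.d. noise entries of variance sigma^2, has mean Wx and per-output variance sigma^2 * ||x||^2; a Monte Carlo check confirms the prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noise model: each programmed conductance deviates from its
# target weight by zero-mean additive noise of variance sigma**2.
sigma = 0.05
W = rng.standard_normal((4, 8))   # target weights of one crossbar layer
x = rng.standard_normal(8)        # input activations

# Second-moment prediction: E[y] = W @ x, Var[y_i] = sigma**2 * ||x||^2
mean_pred = W @ x
var_pred = sigma**2 * np.dot(x, x)

# Monte Carlo check over many independently perturbed crossbar instances
trials = 200_000
noise = sigma * rng.standard_normal((trials, 4, 8))
y = (W + noise) @ x               # shape (trials, 4)

print(np.allclose(y.mean(axis=0), mean_pred, atol=3e-3))   # mean matches
print(np.allclose(y.var(axis=0), var_pred, rtol=0.02))     # variance matches
```

Propagating such mean/variance expressions layer by layer, through the nonlinearities, is what allows the paper to characterize the end-to-end mean squared error without distributional assumptions beyond second moments.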

Paper Manuscript