Lower Bounds on the Expected Excess Risk Using Mutual Information

Authors

Mert Bora Dogan, EPFL
Michael Gastpar, EPFL

Abstract

The expected excess risk of a learning algorithm is its average suboptimality relative to the optimal hypothesis in the hypothesis class. In this work, we lower bound the expected excess risk of a learning algorithm using the mutual information between the input and the noisy output of the learning algorithm. We consider the setting where the hypothesis class is the set of real numbers and the true risk function satisfies a local strong convexity property. Our main results also lead to asymptotic lower bounds on the expected excess risk that do not require knowledge of the local strong convexity constants of the true risk function.
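To fix ideas, the following is a minimal LaTeX sketch of the quantities the abstract refers to. The notation (S for the input, W for the algorithm's output, L for the true risk, w^* and \kappa for the minimizer and strong convexity constant, \widetilde{W} for the noisy output) is assumed for illustration and is not taken from the manuscript itself.

% Expected excess risk of an algorithm producing W = W(S) from input S,
% with true risk L : \mathbb{R} \to \mathbb{R} over the hypothesis class \mathbb{R}:
\[
  \mathcal{E}(W) \;:=\; \mathbb{E}\bigl[\, L(W) - \inf_{w \in \mathbb{R}} L(w) \,\bigr].
\]
% Local strong convexity near a minimizer w^*: for some \kappa > 0 and all w
% in a neighborhood of w^*,
\[
  L(w) - L(w^*) \;\ge\; \tfrac{\kappa}{2}\,(w - w^*)^2 .
\]
% The abstract's bounds relate \mathcal{E}(W) to the mutual information
% I(S; \widetilde{W}), where \widetilde{W} denotes a noisy version of W.

Intuitively, strong convexity converts a guarantee that W is far from w^* in risk into a guarantee in distance, which is what allows information-theoretic quantities such as I(S; \widetilde{W}) to control the excess risk from below.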
