Secret Sharing at Scale for Privacy-Preserving Federated Learning

Authors

Kannan Ramchandran, University of California, Berkeley
Nived Rajaraman, University of California, Berkeley
Swanand Kadhe, University of California, Berkeley

Abstract

Secret sharing is a fundamental security primitive with wide-ranging uses, including modern settings such as privacy-preserving federated learning. Conventional secret sharing methods, such as Shamir's celebrated scheme, distribute a secret judiciously among 'n' parties so that any subset of participants below a target threshold learns nothing about the secret, while any subset above that threshold can recover the secret at O(n^2) computational cost. This quadratic cost restricts secret sharing to a modest number of participants and is a barrier to applications like federated learning, where the number of clients is typically in the thousands. In this talk, we present FastShare, a novel multi-secret sharing scheme powered by the (finite-field) Fast Fourier Transform (FFT). FastShare critically leverages the non-adversarial nature of dropouts in federated learning, allowing recovery from a random subset of the clients at O(n log n) computational cost. We highlight FastSecAgg, a secure aggregation protocol for federated learning that builds on FastShare and enables the central server to aggregate local client models in a privacy-preserving manner while remaining robust to client dropouts. FastSecAgg is particularly attractive when the number of model parameters is much larger than the number of clients, as is typical in federated learning.
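
To make the threshold idea concrete, below is a minimal, illustrative Python sketch of Shamir-style threshold secret sharing (not the FastShare scheme itself): the secret is embedded as the constant term of a random degree-(t-1) polynomial over a prime field, each party receives one evaluation, and any t shares reconstruct the secret by Lagrange interpolation, which costs on the order of t^2 field operations. The field prime, the function names `share` and `reconstruct`, and the toy parameters are assumptions made for illustration only. The last lines also show the linearity property that secure-aggregation protocols such as FastSecAgg build on: adding shares pointwise yields shares of the sum of the secrets.

```python
import random

PRIME = 2**61 - 1  # illustrative prime field; real deployments pick the field size carefully

def share(secret, n, t, prime=PRIME):
    """Split `secret` into n shares so that any t of them reconstruct it (Shamir-style)."""
    # Random degree-(t-1) polynomial with constant term equal to the secret.
    coeffs = [secret % prime] + [random.randrange(prime) for _ in range(t - 1)]
    evaluate = lambda x: sum(c * pow(x, k, prime) for k, c in enumerate(coeffs)) % prime
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Recover the secret from any t shares via Lagrange interpolation at x = 0 (quadratic cost)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * (-xj)) % prime
                den = (den * (xi - xj)) % prime
        # Modular inverse of the denominator via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

# Toy example: 5 parties, threshold 3.
shares_a = share(42, n=5, t=3)
shares_b = share(100, n=5, t=3)
print(reconstruct(random.sample(shares_a, 3)))  # -> 42

# Linearity: adding shares pointwise gives shares of the sum of the secrets,
# the property that secure aggregation of client model updates relies on.
summed = [(x, (ya + yb) % PRIME) for (x, ya), (_, yb) in zip(shares_a, shares_b)]
print(reconstruct(random.sample(summed, 3)))  # -> 142
```

In this sketch, reconstruction touches every pair of shares, which is the quadratic bottleneck the abstract refers to; FastShare replaces this polynomial-evaluation/interpolation step with FFT-based encoding and decoding to reach O(n log n) under non-adversarial dropouts.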