Differentially private modification of SignSGD

17 May 2025, 14:00
15m
Клуб Выпускников

ТЦ Дирижабль, ул. Первомайская 3а
Mathematical optimization / Optimization and machine learning

Speaker

Alexey Kravatskiy

Description

Crucial for large-scale models, federated learning faces two major challenges: privacy preservation and high communication costs. While SignSGD addresses the communication issue by transmitting only gradient signs, the only previously proposed private variant lacks proper privacy guarantees and convergence analysis. We construct a new variant of DP-SignSGD that combines Gaussian noise with Bernoulli subsampling to achieve true differential privacy. Our approach satisfies $(\alpha, \varepsilon_R)$-Rényi differential privacy, which can be readily converted to standard $(\varepsilon, \delta)$-privacy guarantees. We demonstrate the algorithm's performance on a logistic regression problem and on handwritten-digit classification with an MLP and a CNN. The main challenge remains the tradeoff between the precision of a single iteration and the maximum number of privacy-preserving iterations. Our analysis suggests that the sign mechanism's binary output and potential gradient privacy may provide additional privacy guarantees beyond our current calculations. The algorithm can be readily adapted to tighter privacy bounds, and we identify theoretical convergence guarantees as the primary direction for future research.
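
The abstract does not spell out the update rule, so the following is a minimal illustrative sketch of one DP-SignSGD-style client step under common assumptions: per-example gradients are clipped to an L2 bound, examples are included via Bernoulli (Poisson) subsampling with rate q, Gaussian noise scaled to the clipping bound is added, and only the resulting signs are transmitted. All names and defaults (dp_sign_step, grad_fn, clip_norm, sigma, q) are hypothetical and not taken from the talk.

import numpy as np

def dp_sign_step(params, data, grad_fn, clip_norm=1.0, sigma=1.0, q=0.01,
                 rng=np.random.default_rng()):
    """Sketch of one DP-SignSGD step: subsample, clip, add noise, take signs."""
    # Bernoulli subsampling: each example participates independently with prob. q.
    mask = rng.random(len(data)) < q
    batch = [x for x, keep in zip(data, mask) if keep]

    # Sum per-example gradients, each clipped to L2 norm <= clip_norm.
    total = np.zeros_like(params)
    for x in batch:
        g = grad_fn(params, x)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        total += g

    # Gaussian mechanism: noise standard deviation is scaled by the clipping bound.
    noisy = total + rng.normal(0.0, sigma * clip_norm, size=params.shape)

    # Only signs leave the client, keeping communication at one bit per coordinate.
    return np.sign(noisy)

In a federated setting the server would typically combine the clients' sign vectors by a coordinate-wise majority vote, e.g. np.sign(sum(client_signs)), as in standard SignSGD with majority vote; that aggregation detail is an assumption, not something stated in the abstract. The Rényi guarantee mentioned above converts to $(\varepsilon, \delta)$-differential privacy via the standard bound $\varepsilon = \varepsilon_R + \log(1/\delta)/(\alpha - 1)$ for any $\delta \in (0, 1)$.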

Primary author

Co-authors

Presentation materials