On Stochastic Variation of Optimal Gradient Sliding Algorithm for Strongly Convex Case

23 May 2023, 14:55
15m
202 НК (MIPT)

Computer & Data Science

Speaker

Mr Vladimir Smirnov (Moscow Institute of Physics and Technology)

Description

In this study, we explore a stochastic variation of the proposed accelerated gradient descent (Acc. GD) algorithm for convex optimization problems of the form r = p + q, where r is strongly convex, q is L_q-smooth and convex, and p is L_p-smooth. By establishing and proving theoretical bounds on the convergence rate, we expect a similarly high convergence rate together with wider practical applicability. After comparing the obtained convergence results with those of the original article, we apply the algorithm to distributed problems of the master-worker type. Various tests were conducted, including speed comparisons in scenarios where the master uses either the first response from a worker or the average response over all workers, measurements of the convergence rate for different batch sizes, and experiments varying the number of steps of the proximal subroutine when the subproblem is solved exactly or inexactly.
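The sliding mechanism the abstract refers to can be illustrated with a short sketch: an outer loop queries the gradient of one component only once per iteration, while the proximal subproblem in the other component is solved inexactly with cheap mini-batch stochastic gradients; this is exactly where the batch size and the number of proximal steps from the experiments enter. Below is a minimal Python sketch on a synthetic quadratic, assuming hypothetical problem data and hyperparameters (A, b, mu, beta, batch_size, inner_steps, inner_lr are all made up for the demo). It omits the Nesterov-type acceleration of the actual Optimal Gradient Sliding method and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (not from the talk): r(x) = p(x) + q(x) with
#   p(x) = (1/2n) * sum_i (a_i^T x - b_i)^2  -- L_p-smooth finite sum, so
#                                               mini-batch gradients are natural
#   q(x) = (mu/2) * ||x||^2                  -- L_q-smooth, strongly convex
n, d, mu = 200, 20, 0.1
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def stoch_grad_p(x, batch):
    """Mini-batch stochastic gradient of p."""
    Ab = A[batch]
    return Ab.T @ (Ab @ x - b[batch]) / len(batch)

def grad_q(x):
    return mu * x

def r(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * mu * (x @ x)

def sliding(x, outer_steps=200, beta=2.0, batch_size=16,
            inner_steps=10, inner_lr=0.05):
    """Sliding loop: grad_q is evaluated once per OUTER iteration, while the
    proximal subproblem in p is solved inexactly by stochastic steps:
        y ~= argmin_y <grad_q(x), y> + p(y) + (beta/2) * ||y - x||^2
    """
    for _ in range(outer_steps):
        gq = grad_q(x)                   # the rarely computed gradient
        y = x.copy()
        for _ in range(inner_steps):     # inexact prox via mini-batch SGD
            batch = rng.choice(n, size=batch_size, replace=False)
            y -= inner_lr * (gq + stoch_grad_p(y, batch) + beta * (y - x))
        x = y
    return x

# Closed-form solution of the quadratic for a suboptimality check.
x_star = np.linalg.solve(A.T @ A / n + mu * np.eye(d), A.T @ b / n)
x_hat = sliding(np.zeros(d))
print(f"suboptimality r(x_hat) - r(x*): {r(x_hat) - r(x_star):.2e}")
```

With beta >= L_q, the exact version of this outer iteration is a standard forward-backward step (a forward gradient step in q followed by a proximal step in p), whose fixed point solves the original problem. The stochastic inner loop only approximates the prox, so a fixed inner step size converges to a noise-dominated neighborhood; this mirrors the exact-versus-inexact proximal comparison and the batch-size experiments mentioned in the abstract.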

Primary authors

Mr Aleksandr Beznosikov (Moscow Institute of Physics and Technology)
Mr Vladimir Smirnov (Moscow Institute of Physics and Technology)

Presentation materials