Description
In this study, we explore a stochastic variant of a previously proposed accelerated gradient descent (Acc. GD) algorithm for convex optimization problems of the composite form r = p + q, where r is strongly convex, q is convex and L_q-smooth, and p is L_p-smooth. By establishing and proving theoretical bounds on the convergence rate, we expect a comparably fast rate of convergence together with a wider range of practical applications. After comparing the obtained convergence results with those of the original article, we apply the algorithm to distributed problems of the master-worker type. We conducted a series of experiments: speed comparisons between taking the first response from a worker and averaging the responses of all workers, measurements of the convergence rate for different batch sizes, and variation of the number of steps of the inner proximal algorithm when the subproblem is solved exactly or inexactly (see the sketches below).
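For concreteness, the composite problem can be written as follows; the strong-convexity modulus μ and the dimension d are our notation, not taken from the abstract:

```latex
\min_{x \in \mathbb{R}^d} \; r(x) := p(x) + q(x),
\qquad r \text{ is } \mu\text{-strongly convex},
\qquad \|\nabla q(x) - \nabla q(y)\| \le L_q \|x - y\|,
\qquad \|\nabla p(x) - \nabla p(y)\| \le L_p \|x - y\|.
```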
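Since the abstract does not spell out the algorithm, the following is a minimal sketch of one plausible reading: a FISTA-style accelerated proximal method with mini-batch stochastic gradients of q and an inexactly solved proximal subproblem for p. The quadratic test instance, step sizes, and inner solver are all our assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test instance (not from the talk): q is a least-squares loss,
# p is a smooth quadratic regularizer, so r = p + q is strongly convex.
A = rng.standard_normal((200, 20))
b = rng.standard_normal(200)
mu_reg = 0.1

def grad_q_batch(x, idx):
    """Mini-batch stochastic gradient of q(x) = ||Ax - b||^2 / (2n)."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def grad_p(z):
    """Gradient of p(z) = mu_reg * ||z||^2 / 2 (so L_p = mu_reg)."""
    return mu_reg * z

def inexact_prox(y, eta, inner_steps):
    """Approximate prox of p: argmin_z p(z) + ||z - y||^2 / (2 eta),
    solved with `inner_steps` gradient steps; more steps = more exact."""
    z = y.copy()
    step = 1.0 / (mu_reg + 1.0 / eta)  # 1 / (smoothness of inner objective)
    for _ in range(inner_steps):
        z -= step * (grad_p(z) + (z - y) / eta)
    return z

def acc_prox_sgd(x0, L_q, n_iters=300, batch=32, inner_steps=5):
    """FISTA-style accelerated step with stochastic gradients of q."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    eta = 1.0 / L_q
    for _ in range(n_iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + (t - 1.0) / t_next * (x - x_prev)  # momentum extrapolation
        idx = rng.choice(len(b), size=batch, replace=False)
        x_prev, x = x, inexact_prox(y - eta * grad_q_batch(y, idx),
                                    eta, inner_steps)
        t = t_next
    return x

L_q = np.linalg.norm(A, 2) ** 2 / len(b)  # spectral bound on the Hessian of q
x_hat = acc_prox_sgd(np.zeros(A.shape[1]), L_q)
```

Varying `batch` and `inner_steps` reproduces, in spirit, two of the experiments mentioned above: the convergence rate under different batch sizes, and exact versus inexact proximal solutions.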
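The master-worker comparison could look like the toy sketch below: the master either takes the first gradient reply that arrives or waits for all workers and averages their replies. The simulated delays, the gradient oracle, and the thread-based setup are illustrative assumptions, not the experimental code from the study.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import numpy as np

def worker_grad(x, seed):
    """Simulated worker: random delay plus a noisy gradient of ||x||^2 / 2."""
    rng = np.random.default_rng(seed)
    time.sleep(rng.uniform(0.0, 0.05))  # heterogeneous response time
    return x + 0.1 * rng.standard_normal(x.shape)

def aggregate(x, n_workers=4, mode="average"):
    """Master step: 'first' uses the fastest reply, 'average' waits for all."""
    ex = ThreadPoolExecutor(max_workers=n_workers)
    futures = [ex.submit(worker_grad, x, s) for s in range(n_workers)]
    if mode == "first":
        g = next(as_completed(futures)).result()  # fastest worker's reply
        ex.shutdown(wait=False)                   # discard the stragglers
        return g
    grads = [f.result() for f in futures]         # wait for every worker
    ex.shutdown()
    return np.mean(grads, axis=0)

x = np.ones(10)
g_fast = aggregate(x, mode="first")    # lower latency, higher variance
g_avg = aggregate(x, mode="average")   # full latency, variance averaged down
```

The trade-off being measured is latency versus variance: the first-response mode returns as soon as one worker answers, while averaging over n workers reduces the stochastic-gradient variance roughly n-fold at the cost of waiting for the slowest worker.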