Prepared in 2021

[23] Secure Distributed Training at Scale
Eduard Gorbunov*, Alexander Borzunov*, Michael Diskin, Max Ryabinin
(*equal contribution)
June 2021

[22] Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, Alexander Gasnikov
June 2021

[21] Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin*, Eduard Gorbunov*, Vsevolod Plokhotnyuk, Gennady Pekhimenko
(*equal contribution)
March 2021

[20] MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3788-3798, 2021
February 2021
[poster ICML 2021]

Prepared in 2020

[19] Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov,
Dmitry Kamzolov, Innokentiy Shibaev
December 2020

[18] Recent Theoretical Advances in Decentralized Distributed Convex Optimization
Eduard Gorbunov, Alexander Rogozin, Aleksandr Beznosikov, Darina Dvinskikh, Alexander Gasnikov
November 2020

[17] Local SGD: Unified Theory and New Efficient Methods
Eduard Gorbunov, Filip Hanzely and Peter Richtárik
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3556-3564, 2021
November 2020
[poster AISTATS 2021]

[16] Linearly Converging Error Compensated SGD
Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko and Peter Richtárik
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
October 2020
[video MLSS 2020] [slides MLSS 2020] [video FLOW] [slides FLOW] [poster NeurIPS 2020] [video NeurIPS 2020]

[15] Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
Eduard Gorbunov, Marina Danilova and Alexander Gasnikov
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
arXiv: 2005.10785
May 2020
[poster NeurIPS 2020] [video NeurIPS 2020]

Prepared in 2019

[14] Derivative-Free Method For Decentralized Distributed Non-Smooth Optimization
Aleksandr Beznosikov, Eduard Gorbunov and Alexander Gasnikov
Accepted to the IFAC World Congress
arXiv: 1911.10645
November 2019

[13] Optimal Decentralized Distributed Algorithms for Stochastic Convex Optimization
Eduard Gorbunov, Darina Dvinskikh and Alexander Gasnikov
arXiv: 1911.07363
November 2019

[12] Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator
Evgeniya Vorontsova, Alexander Gasnikov, Eduard Gorbunov, Pavel Dvurechensky
Automation and Remote Control, August 2019, Volume 80, Issue 8, pp 1487–1501

[11] A Stochastic Derivative Free Optimization Method with Momentum
Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou and Peter Richtárik
Published at ICLR 2020
arXiv: 1905.13278
May 2019

[10] A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard Gorbunov, Filip Hanzely and Peter Richtárik
Published at AISTATS 2020
arXiv: 1905.11261
May 2019

[9] On Primal-Dual Approach for Distributed Stochastic Convex Optimization over Networks
Darina Dvinskikh, Eduard Gorbunov, Alexander Gasnikov, Pavel Dvurechensky, Cesar A. Uribe
58th Conference on Decision and Control
arXiv: 1903.09844
March 2019

[8] Stochastic Three Points Method for Unconstrained Smooth Minimization
El Houcine Bergou, Eduard Gorbunov and Peter Richtárik
SIAM Journal on Optimization 30, no. 4 (2020): 2726-2749
arXiv: 1902.03591
February 2019

[7] Distributed Learning with Compressed Gradient Differences
Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč and Peter Richtárik
arXiv: 1901.09269
January 2019

Prepared in 2018

[6] The global rate of convergence for optimal tensor methods in smooth convex optimization
Alexander Gasnikov, Eduard Gorbunov, Dmitry Kovalev, Ahmed Mohammed,
Elena Chernousova
Computer Research and Modeling, 2018, Vol. 10:6
arXiv: 1809.00382
September 2018

[5] On the upper bound for the mathematical expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
Eduard Gorbunov, Evgeniya Vorontsova, Alexander Gasnikov
Mathematical Notes, 2019, Volume 106, Issue 1, Pages 13–23
arXiv: 1804.03722
April 2018

[4] An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov
European Journal of Operational Research (in press)
arXiv: 1804.02394
April 2018

[3] An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov
arXiv: 1802.09022
February 2018

[2] Stochastic Spectral and Conjugate Descent Methods
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
Advances in Neural Information Processing Systems 31
arXiv: 1802.03703
February 2018
