Prepared in 2023

[40] Unified analysis of SGD-type methods
Eduard Gorbunov
arXiv:2303.16502
March 2023

[39] Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient
Nikita Fedin, Eduard Gorbunov
MOTOR 2023
arXiv:2303.04560
March 2023

[38] Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou 
arXiv:2302.14043
February 2023

[37] High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance  
Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky,
Alexander Gasnikov, Peter Richtárik
arXiv:2302.00999
February 2023

Prepared in 2022

[36] Randomized gradient-free methods in convex optimization 
Alexander Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Eduard Gorbunov, Aleksandr Beznosikov, Alexander Lobanov
arXiv:2211.13566
November 2022

[35] Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity:
the Case of Negative Comonotonicity
Eduard Gorbunov, Adrien Taylor, Samuel Horváth, Gauthier Gidel
arXiv:2210.13831
October 2022

[34] Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey
Aleksandr Beznosikov, Boris Polyak, Eduard Gorbunov, Dmitry Kovalev, Alexander Gasnikov
European Mathematical Society Magazine, Issue 127, Pages 15-28
arXiv:2208.13592
August 2022

[33] Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled,
Konstantin Burlachenko, Peter Richtárik
arXiv:2206.07021
June 2022

[32] Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise
Eduard Gorbunov*, Marina Danilova*, David Dobre*, Pavel Dvurechensky, Alexander Gasnikov, Gauthier Gidel
(*equal contribution)
NeurIPS 2022
arXiv:2206.01095
June 2022

[31] Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions
and Communication Compression as a Cherry on the Top
Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
ICLR 2023
arXiv:2206.00529 
June 2022

[30] Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities
Eduard Gorbunov, Adrien Taylor, Gauthier Gidel
NeurIPS 2022
arXiv:2205.08446
May 2022

[29] Distributed Methods with Absolute Compression and Error Compensation
Marina Danilova, Eduard Gorbunov
Accepted to MOTOR 2022
arXiv:2203.02383
March 2022

[28] Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov*, Eduard Gorbunov*, Hugo Berard*, Nicolas Loizou
(*equal contribution)
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:172-235
(AISTATS 2023)
arXiv:2202.07262
February 2022

[27] 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory
for Lazy Aggregation
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard Gorbunov
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18596-18648 (ICML 2022)
arXiv:2202.00998
February 2022

Prepared in 2021

[26] Stochastic Extragradient: General Analysis and Improved Rates 
Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou
AISTATS 2022
arXiv:2111.08611
November 2021

[25] Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity
Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel
AISTATS 2022
arXiv:2110.04261
October 2021

[24] EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik
A short version of this work was accepted to the NeurIPS 2021 workshop OPT2021
arXiv:2110.03294
October 2021

[23] Secure Distributed Training at Scale
Eduard Gorbunov*, Alexander Borzunov*, Michael Diskin, Max Ryabinin
(*equal contribution)
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:7679-7739 (ICML 2022)
arXiv:2106.11257    
June 2021

[22] Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, Alexander Gasnikov
arXiv:2106.05958
June 2021

[21] Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin*, Eduard Gorbunov*, Vsevolod Plokhotnyuk, Gennady Pekhimenko
(*equal contribution)
NeurIPS 2021
arXiv:2103.03239
March 2021

[20] MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3788-3798, 2021 (ICML 2021)
arXiv:2102.07845
February 2021

Prepared in 2020

[19] Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov,
Dmitry Kamzolov, Innokentiy Shibaev
arXiv:2012.06188
December 2020

[18] Recent theoretical advances in decentralized distributed convex optimization
Eduard Gorbunov, Alexander Rogozin, Aleksandr Beznosikov, Darina Dvinskikh, Alexander Gasnikov
arXiv:2011.13259
November 2020

[17] Local SGD: Unified Theory and New Efficient Methods
Eduard Gorbunov, Filip Hanzely and Peter Richtárik
AISTATS 2021
arXiv:2011.02828
November 2020

[16] Linearly Converging Error Compensated SGD
Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko and Peter Richtárik
NeurIPS 2020
arXiv:2010.12292
October 2020

[15] Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
Eduard Gorbunov, Marina Danilova and Alexander Gasnikov
NeurIPS 2020
arXiv:2005.10785
May 2020

Prepared in 2019

[14] Derivative-Free Method For Decentralized Distributed Non-Smooth Optimization
Aleksandr Beznosikov, Eduard Gorbunov and Alexander Gasnikov
IFAC-PapersOnLine, Volume 53, Issue 2, 2020, Pages 4038-4043, DOI: https://doi.org/10.1016/j.ifacol.2020.12.2272
arXiv:1911.10645
November 2019

[13] Optimal Decentralized Distributed Algorithms for Stochastic Convex Optimization
Eduard Gorbunov, Darina Dvinskikh and Alexander Gasnikov
arXiv:1911.07363
November 2019

[12] Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator
Evgeniya Vorontsova, Alexander Gasnikov, Eduard Gorbunov, Pavel Dvurechensky
Automation and Remote Control, August 2019, Volume 80, Issue 8, Pages 1487–1501
DOI: https://doi.org/10.1134/S0005117919080095

[11] A Stochastic Derivative Free Optimization Method with Momentum
Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou and Peter Richtárik
ICLR 2020
arXiv:1905.13278
May 2019

[10] A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard Gorbunov, Filip Hanzely and Peter Richtárik
AISTATS 2020
arXiv:1905.11261
May 2019

[9] On Primal-Dual Approach for Distributed Stochastic Convex Optimization over Networks
Darina Dvinskikh, Eduard Gorbunov, Alexander Gasnikov, Pavel Dvurechensky, Cesar A. Uribe
58th IEEE Conference on Decision and Control (CDC 2019)
arXiv:1903.09844
March 2019

[8] Stochastic Three Points Method for Unconstrained Smooth Minimization
El Houcine Bergou, Eduard Gorbunov and Peter Richtárik
SIAM Journal on Optimization 30, no. 4 (2020): 2726-2749
arXiv:1902.03591
February 2019

[7] Distributed learning with compressed gradient differences
Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč and Peter Richtárik
arXiv:1901.09269
January 2019

Prepared in 2018

[6] The global rate of convergence for optimal tensor methods in smooth convex optimization
Alexander Gasnikov, Eduard Gorbunov, Dmitry Kovalev, Ahmed Mohammed,
Elena Chernousova
Computer Research and Modeling, 2018, Vol. 10:6
DOI: https://doi.org/10.20537/2076-7633-2018-10-6-737-753, arXiv:1809.00382
September 2018

[5] On the upper bound for the mathematical expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
Eduard Gorbunov, Evgeniya Vorontsova, Alexander Gasnikov
Mathematical Notes, 2019, Volume 106, Issue 1, Pages 13–23
DOI: https://doi.org/10.4213/mzm12041, arXiv:1804.03722
April 2018

[4] An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov
European Journal of Operational Research, Volume 290, Issue 2, 16 April 2021, Pages 601-621
DOI: https://doi.org/10.1016/j.ejor.2020.08.027, arXiv:1804.02394
April 2018

[3] An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov
SIAM Journal on Optimization, Vol. 32, Iss. 2 (2022)
DOI: https://doi.org/10.1137/19M1259225, arXiv:1802.09022
February 2018

[2] Stochastic Spectral and Conjugate Descent Methods
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
Advances in Neural Information Processing Systems 31 (NeurIPS 2018)
arXiv:1802.03703
February 2018
