[57] Methods for Convex (L0,L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity
Eduard Gorbunov*, Nazarii Tupitsa*, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takáč
*equal contribution
arXiv:2409.14989
September 2024
[56] Low-Resource Machine Translation through the Lens of Personalized Federated Learning
Viktor Moskvoretskii, Nazarii Tupitsa, Chris Biemann, Samuel Horváth, Eduard Gorbunov, Irina Nikishina
EMNLP 2024 (Findings)
arXiv:2406.12564
June 2024
[55] Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed
Savelii Chezhegov, Yaroslav Klyukin, Andrei Semenov, Aleksandr Beznosikov, Alexander Gasnikov, Samuel Horváth,
Martin Takáč, Eduard Gorbunov
arXiv:2406.04443
June 2024
[54] Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities:
Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations
Artem Agafonov, Petr Ostroukhov, Roman Mozhaev, Konstantin Yakovlev, Eduard Gorbunov, Martin Takáč,
Alexander Gasnikov, Dmitry Kamzolov
NeurIPS 2024
arXiv:2405.15990
May 2024
[53] Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard Gorbunov
NeurIPS 2024
arXiv:2403.02648
March 2024
[52] Federated Learning Can Find Friends That Are Beneficial
Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov
arXiv:2402.05050
February 2024
[51] Zeroth-order Median Clipping for Non-Smooth Convex Optimization Problems with Heavy-tailed Symmetric Noise
Nikita Kornilov, Yuriy Dorn, Aleksandr Lobanov, Nikolay Kutuzov, Innokentiy Shibaev, Eduard Gorbunov, Alexander Gasnikov, Alexander Nazin
arXiv:2402.02461
February 2024
[50] Byzantine Robustness and Partial Participation Can Be Achieved At Once: Just Clip Gradient Differences
Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
NeurIPS 2024
arXiv:2311.14127
November 2023
[49] Byzantine-Tolerant Methods for Distributed Variational Inequalities
Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takáč, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov
NeurIPS 2023
arXiv:2311.04611
November 2023
[48] Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems
Nikita Puchkin*, Eduard Gorbunov*, Nikolay Kutuzov, Alexander Gasnikov
*equal contribution
AISTATS 2024
arXiv:2311.04161
November 2023
[47] Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance
Nikita Kornilov, Ohad Shamir, Aleksandr Lobanov, Darina Dvinskikh, Alexander Gasnikov, Innokentiy Shibaev,
Eduard Gorbunov, Samuel Horváth
NeurIPS 2023
arXiv:2310.18763
October 2023
[46] Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik
AISTATS 2024
arXiv:2310.09804
October 2023
[45] High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky,
Alexander Gasnikov, Peter Richtárik
ICML 2024 (oral)
arXiv:2310.01860
October 2023
[44] Intermediate Gradient Methods with Relative Inexactness
Nikita Kornilov, Eduard Gorbunov, Mohammad Alkousa, Fedor Stonyakin, Pavel Dvurechensky, Alexander Gasnikov
arXiv:2310.00506
October 2023
[43] Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik
arXiv:2305.18929
May 2023
[42] Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity
Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth
arXiv:2305.18285
May 2023
[41] Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits
Yuriy Dorn, Nikita Kornilov, Nikolay Kutuzov, Alexander Nazin, Eduard Gorbunov, Alexander Gasnikov
arXiv:2305.06743
May 2023
[40] Unified analysis of SGD-type methods
Eduard Gorbunov
arXiv:2303.16502
March 2023
[39] Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient
Nikita Fedin, Eduard Gorbunov
MOTOR 2023
arXiv:2303.04560
March 2023
[38] Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou
NeurIPS 2023
arXiv:2302.14043
February 2023
[37] High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky,
Alexander Gasnikov, Peter Richtárik
ICML 2023
arXiv:2302.00999
February 2023
[36] Randomized gradient-free methods in convex optimization
Alexander Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Eduard Gorbunov, Aleksandr Beznosikov, Alexander Lobanov
arXiv:2211.13566
November 2022
[35] Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity:
the Case of Negative Comonotonicity
Eduard Gorbunov, Adrien Taylor, Samuel Horváth, Gauthier Gidel
ICML 2023
arXiv:2210.13831
October 2022
[34] Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey
Aleksandr Beznosikov, Boris Polyak, Eduard Gorbunov, Dmitry Kovalev, Alexander Gasnikov
European Mathematical Society Magazine, Issue 127, Pages 15–28
arXiv:2208.13592
August 2022
[33] Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled,
Konstantin Burlachenko, Peter Richtárik
NeurIPS 2024
arXiv:2206.07021
June 2022
[32] Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise
Eduard Gorbunov*, Marina Danilova*, David Dobre*, Pavel Dvurechensky, Alexander Gasnikov, Gauthier Gidel
*equal contribution
NeurIPS 2022
arXiv:2206.01095
June 2022
[31] Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions
and Communication Compression as a Cherry on the Top
Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
ICLR 2023
arXiv:2206.00529
June 2022
[30] Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities
Eduard Gorbunov, Adrien Taylor, Gauthier Gidel
NeurIPS 2022
arXiv:2205.08446
May 2022
[29] Distributed Methods with Absolute Compression and Error Compensation
Marina Danilova, Eduard Gorbunov
MOTOR 2022
arXiv:2203.02383
March 2022
[28] Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov*, Eduard Gorbunov*, Hugo Berard*, Nicolas Loizou
*equal contribution
AISTATS 2023
arXiv:2202.07262
February 2022
[27] 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory
for Lazy Aggregation
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard Gorbunov
ICML 2022
arXiv:2202.00998
February 2022
[26] Stochastic Extragradient: General Analysis and Improved Rates
Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou
AISTATS 2022
arXiv:2111.08611
November 2021
[25] Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity
Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel
AISTATS 2022
arXiv:2110.04261
October 2021
[24] EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik
A short version of this work was accepted to the NeurIPS 2021 workshop OPT2021
arXiv:2110.03294
October 2021
[23] Secure Distributed Training at Scale
Eduard Gorbunov*, Alexander Borzunov*, Michael Diskin, Max Ryabinin
*equal contribution
ICML 2022
arXiv:2106.11257
June 2021
[22] Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, Alexander Gasnikov
arXiv:2106.05958
June 2021
[21] Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin*, Eduard Gorbunov*, Vsevolod Plokhotnyuk, Gennady Pekhimenko
*equal contribution
NeurIPS 2021
arXiv:2103.03239
March 2021
[20] MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
ICML 2021
arXiv:2102.07845
February 2021
[poster ICML 2021]
[19] Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov,
Dmitry Kamzolov, Innokentiy Shibaev
High-Dimensional Optimization and Probability: With a View Towards Data Science
arXiv:2012.06188
December 2020
[18] Recent theoretical advances in decentralized distributed convex optimization
Eduard Gorbunov, Alexander Rogozin, Aleksandr Beznosikov, Darina Dvinskikh, Alexander Gasnikov
High-Dimensional Optimization and Probability: With a View Towards Data Science
arXiv:2011.13259
November 2020
[17] Local SGD: Unified Theory and New Efficient Methods
Eduard Gorbunov, Filip Hanzely, Peter Richtárik
AISTATS 2021
arXiv:2011.02828
November 2020
[poster AISTATS 2021]
[16] Linearly Converging Error Compensated SGD
Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik
NeurIPS 2020
arXiv:2010.12292
October 2020
[video MLSS 2020] [slides MLSS 2020] [video FLOW] [slides FLOW] [poster NeurIPS 2020] [video NeurIPS 2020]
[15] Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
Eduard Gorbunov, Marina Danilova, Alexander Gasnikov
NeurIPS 2020
arXiv:2005.10785
May 2020
[poster NeurIPS 2020] [video NeurIPS 2020]
[14] Derivative-Free Method For Decentralized Distributed Non-Smooth Optimization
Aleksandr Beznosikov, Eduard Gorbunov, Alexander Gasnikov
IFAC-PapersOnLine, Volume 53, Issue 2, 2020, Pages 4038–4043
arXiv:1911.10645
November 2019
[13] Optimal Decentralized Distributed Algorithms for Stochastic Convex Optimization
Eduard Gorbunov, Darina Dvinskikh, Alexander Gasnikov
arXiv:1911.07363
November 2019
[12] Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator
Evgeniya Vorontsova, Alexander Gasnikov, Eduard Gorbunov, Pavel Dvurechensky
Automation and Remote Control, August 2019, Volume 80, Issue 8, pp 1487–1501
DOI: https://doi.org/10.1134/S0005117919080095
[11] A Stochastic Derivative Free Optimization Method with Momentum
Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik
ICLR 2020
arXiv:1905.13278
May 2019
[video]
[10] A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard Gorbunov, Filip Hanzely, Peter Richtárik
AISTATS 2020
arXiv:1905.11261
May 2019
[9] On Primal-Dual Approach for Distributed Stochastic Convex Optimization over Networks
Darina Dvinskikh, Eduard Gorbunov, Alexander Gasnikov, Pavel Dvurechensky, Cesar A. Uribe
58th Conference on Decision and Control (CDC 2019)
arXiv:1903.09844
March 2019
[8] Stochastic Three Points Method for Unconstrained Smooth Minimization
El Houcine Bergou, Eduard Gorbunov, Peter Richtárik
SIAM Journal on Optimization, Vol. 30, Iss. 4 (2020), Pages 2726–2749
arXiv:1902.03591
February 2019
[7] Distributed learning with compressed gradient differences
Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, Peter Richtárik
arXiv:1901.09269
January 2019
[6] The global rate of convergence for optimal tensor methods in smooth convex optimization
Alexander Gasnikov, Eduard Gorbunov, Dmitry Kovalev, Ahmed Mohammed,
Elena Chernousova
Computer Research and Modeling, 2018, Volume 10, Issue 6, Pages 737–753
DOI: https://doi.org/10.20537/2076-7633-2018-10-6-737-753, arXiv:1809.00382
September 2018
[5] On the upper bound for the mathematical expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
Eduard Gorbunov, Evgeniya Vorontsova, Alexander Gasnikov
Mathematical Notes, 2019, Volume 106, Issue 1, Pages 13–23
DOI: https://doi.org/10.4213/mzm12041, arXiv:1804.03722
April 2018
[4] An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov
European Journal of Operational Research, Volume 290, Issue 2, 16 April 2021, Pages 601-621
DOI: https://doi.org/10.1016/j.ejor.2020.08.027, arXiv:1804.02394
April 2018
[3] An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Eduard Gorbunov, Pavel Dvurechensky, Alexander Gasnikov
SIAM Journal on Optimization, Vol. 32, Iss. 2 (2022)
DOI: https://doi.org/10.1137/19M1259225, arXiv:1802.09022
February 2018
[2] Stochastic Spectral and Conjugate Descent Methods
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
NeurIPS 2019
arXiv:1802.03703
February 2018
[1] Accelerated Directional Search with non-Euclidean prox-structure
Evgeniya Vorontsova, Alexander Gasnikov, Eduard Gorbunov
Automation and Remote Control, April 2019, Volume 80, Issue 4, pp 693–707
DOI: https://doi.org/10.1134/S0005117919040076, arXiv:1710.00162
October 2017