Talks

[29] High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
ICML 2024 Oral, Vienna, Austria, 25 July, 2024
[slides]

[28] Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
EURO 2024, Copenhagen, Denmark, 3 July, 2024
[slides]

[27] Last-Iterate Convergence of Extragradient-Based Methods
EUROPT 2024, Lund, Sweden, 26 June, 2024
[slides]

[26] Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
Invited talk at INSAIT, Sofia, Bulgaria, 24 June, 2024
[slides]

[25] Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
PODL 2024, Nantes, France, 21 June, 2024
[slides]

[24] Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
NETYS 2024, online, 29 May, 2024
(Keynote talk)
[slides]

[23] Variance Reduction for Byzantine-Robust Distributed Optimization
Federated Learning One-World Seminar, online, 7 February, 2024
[slides] [video]

[22] "Clipped Methods for Stochastic Optimization with Heavy-Tailed Noise"
TES Conference on Mathematical Optimization for Machine Learning, Berlin, Germany, 15 September, 2023
[slides]

[21] "Algorithms for Stochastic Optimization with Heavy-Tailed Noise and Connections with the Training of Large Language Models"
Oberseminar (research seminar) at the LT Group, University of Hamburg, Germany, 6 June, 2023
[slides]

[20] "Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity"
PEP talks, UCLouvain, Belgium, 13 February, 2023  
[slides] [video]

[19] "Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top"
MBZUAI Workshop on Collaborative Learning: From Theory to Practice, Abu Dhabi, UAE, 8 October, 2022 
[slides]

[18] "Methods with Clipping for Stochastic Optimization and Variational Inequalities with Heavy-Tailed Noise"
All-Russian Optimization Seminar, online, 9 September, 2022 (in Russian)
[slides] [video]

[17] "Distributed Methods with Absolute Compression and Error Compensation"
MOTOR 2022, Petrozavodsk, Russia, 3 July, 2022
[slides]

[16] "Secure Distributed Training at Scale"
Lagrange Workshop on Federated Learning, online, 25 April, 2022 
[slides]

[15] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
Rising Stars in AI Symposium 2022, KAUST, Saudi Arabia, 13 March, 2022
[slides] [video]

[14] "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"
Vector Institute Endless Summer School session "NeurIPS 2021 Highlights", online, 16 February, 2022 
[slides]

[13] "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"  
MLO EPFL internal seminar, online, 20 December, 2021
[slides]

[12] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
MTL MLOpt internal seminar, online, 1 December, 2021 
[slides]

[11] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
All-Russian Optimization Seminar, online, 17 November, 2021 (in Russian)
[slides] [video]

[10] "Secure Distributed Training at Scale"
Federated Learning One-World Seminar, online, 3 November, 2021
[slides] [video]

[9] "MARINA: Faster Non-Convex Distributed Learning with Compression"
Federated Learning One-World Seminar, online, 10 March, 2021 
[slides] [video]

[8] "Linearly Converging Error Compensated SGD"
NeurIPS New Year AfterParty at Yandex, 19 January, 2021 
[slides] [video]

[7] "Linearly Converging Error Compensated SGD"
Federated Learning One-World Seminar and Russian Optimization Seminar, online, 7 October, 2020
[slides] [video]

[6] "On the convergence of SGD-like methods for convex and non-convex optimization problems"
Russian Optimization Seminar, online, 8 July, 2020 (in Russian)
[slides] [video]

[5] "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
SIERRA, INRIA, Paris, France, 18 October, 2019
[slides]

[4] 23rd International Symposium on Mathematical Programming
Section "New methods for stochastic optimization and variational inequalities"
Talk "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Bordeaux, France, 6 July, 2018
[slides]

[3] Workshop "Optimization at Work"
Talk "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Moscow, Russia, 14 April, 2018
[slides] [video]

[2] 60th Scientific Conference of MIPT
Section of information transmission problems, data analysis and optimization
Talk "About accelerated Directional Search with non-Euclidean prox-structure"
Moscow, Russia, 25 November, 2017
[slides]

[1] Workshop "Optimization at Work"
Talk "Accelerated Directional Search with non-Euclidean prox-structure"
Moscow, Russia, 27 October, 2017
[slides]

Posters

[28] ICML 2024
Poster: "High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise"
Vienna, Austria, 25 July, 2024

[27] AISTATS 2024
Poster "Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates"
Valencia, Spain, 3 May, 2024

[26] AISTATS 2024
Poster "Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems"
Valencia, Spain, 3 May, 2024

[25] NeurIPS 2023
Poster "Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions"
New Orleans, USA, 10-16 December, 2023

[24] NeurIPS 2023
Poster "Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance"
New Orleans, USA, 10-16 December, 2023

[23] NeurIPS 2023
Poster "Byzantine-Tolerant Methods for Distributed Variational Inequalities"
New Orleans, USA, 10-16 December, 2023

[22] ICML 2023
Poster "High-Probability Bounds for Stochastic Optimization and Variational Inequalities:
the Case of Unbounded Variance"
Honolulu, USA, 27 July, 2023

[21] ICML 2023
Poster "Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity:
the Case of Negative Comonotonicity"
Honolulu, USA, 25 July, 2023

[20] ICLR 2023
Poster "Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions
and Communication Compression as a Cherry on the Top"
Kigali, Rwanda, 2 May, 2023

[19] AISTATS 2023 
Poster "Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods"
Valencia, Spain, 27 April, 2023

[18] NeurIPS 2022
Poster "Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise"
New Orleans, USA, 28 November - 9 December, 2022

[17] NeurIPS 2022 
Poster "Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities"
New Orleans, USA, 28 November - 9 December, 2022

[16] ICML 2022
Poster "Secure Distributed Training at Scale"
Baltimore, USA, 21 July, 2022

[15] ICML 2022
Poster "3PC: Three Point Compressors for Communication-Efficient Distributed Training
and a Better Theory for Lazy Aggregation"
Baltimore, USA, 21 July, 2022

[14] AISTATS 2022
Virtual poster "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
Online, 29 March, 2022

[13] AISTATS 2022 
Virtual poster "Stochastic Extragradient: General Analysis and Improved Rates"
Online, 28 March, 2022

[12] NeurIPS 2021 
Virtual poster "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"
Online, 10 December, 2021

[11] ICML 2021
Virtual poster "MARINA: Faster Non-Convex Distributed Learning with Compression"
Online, 21 July, 2021

[10] AISTATS 2021
Virtual poster "Local SGD: Unified Theory and New Efficient Methods"
Online, 13-15 April, 2021

[9] NeurIPS 2020
Virtual poster "Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping"
[video]
Online, 6-12 December, 2020

[8] NeurIPS 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video]
Online, 6-12 December, 2020

[7] AISTATS 2020
Virtual poster "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
[video]
Online, 26-28 August, 2020

[6] Machine Learning Summer School 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video] [slides]
Online, 8 July, 2020

[5] ICLR 2020 
Virtual poster "A Stochastic Derivative Free Optimization Method with Momentum"
Online, 27 April, 2020

[4] NeurIPS 2019 workshop "Optimization Foundations for Reinforcement Learning"
Poster "A Stochastic Derivative Free Optimization Method with Momentum"
Based on the joint work with Adel Bibi, Ozan Sener, El Houcine Bergou and Peter Richtárik
Vancouver, Canada, 14 December, 2019

[3] NeurIPS 2019 workshop "Beyond First Order Methods in ML"
Poster "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Based on the joint work with Pavel Dvurechensky and Alexander Gasnikov
Vancouver, Canada, 13 December, 2019

[2] Traditional Youth School "Control, Information and Optimization" organized by Boris Polyak and Elena Gryazina
Poster "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Voronovo, Russia, 10-15 June, 2018
My work was also selected for an oral presentation, which won third prize in the best-talk competition among participants.
[slides of the talk]

[1] KAUST Research Workshop on Optimization and Big Data 
Poster "Stochastic Spectral Descent Methods"
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
KAUST, Thuwal, Saudi Arabia, 5-7 February, 2018
