Talks

[18] "Methods with Clipping for Stochastic Optimization and Variational Inequalities with Heavy-Tailed Noise"
All-Russian Optimization Seminar, online, 9 September, 2022 (in Russian)
[slides] [video]

[17] "Distributed Methods with Absolute Compression and Error Compensation"
MOTOR 2022, Petrozavodsk, Russia, 3 July, 2022
[slides]

[16] "Secure Distributed Training at Scale"
Lagrange Workshop on Federated Learning, online, 25 April, 2022 
[slides]

[15] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
Rising Stars in AI Symposium 2022, KAUST, Saudi Arabia, 13 March, 2022
[slides] [video]

[14] "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"
Vector Institute Endless Summer School session "NeurIPS 2021 Highlights", online, 16 February, 2022 
[slides]

[13] "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"  
MLO EPFL internal seminar, online, 20 December, 2021
[slides]

[12] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
MTL MLOpt internal seminar, online, 1 December, 2021 
[slides]

[11] "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
All-Russian Optimization Seminar, online, 17 November, 2021 (in Russian)
[slides] [video]

[10] "Secure Distributed Training at Scale"
Federated Learning One-World Seminar, online, 3 November, 2021
[slides] [video]

[9] "MARINA: Faster Non-Convex Distributed Learning with Compression"
Federated Learning One-World Seminar, online, 10 March, 2021 
[slides] [video]

[8] "Linearly Converging Error Compensated SGD"
NeurIPS New Year AfterParty at Yandex, 19 January, 2021 
[slides] [video]

[7] "Linearly Converging Error Compensated SGD"
Federated Learning One-World Seminar and Russian Optimization Seminar, online, 7 October, 2020
[slides] [video]

[6] "On the convergence of SGD-like methods for convex and non-convex optimization problems"
Russian Optimization Seminar, online, 8 July, 2020 (in Russian)
[slides] [video]

[5] "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
SIERRA, INRIA, Paris, France, 18 October, 2019
[slides]

[4] 23rd International Symposium on Mathematical Programming
Section "New methods for stochastic optimization and variational inequalities"
Talk "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Bordeaux, 6 July, 2018
[slides]

[3] Workshop "Optimization at Work"
Talk "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Moscow, Russia, 14 April, 2018
[slides] [video]

[2] 60th Scientific Conference of MIPT
Section of information transmission problems, data analysis and optimization
Talk "On Accelerated Directional Search with Non-Euclidean Prox-Structure"
Moscow, Russia, 25 November, 2017
[slides]

[1] Workshop "Optimization at Work"
Talk "Accelerated Directional Search with Non-Euclidean Prox-Structure"
Moscow, Russia, 27 October, 2017
[slides]

Posters

[16] ICML 2022
Poster "Secure Distributed Training at Scale"
Baltimore, USA, 21 July, 2022

[15] ICML 2022
Poster "3PC: Three Point Compressors for Communication-Efficient Distributed Training
and a Better Theory for Lazy Aggregation"
Baltimore, USA, 21 July, 2022

[14] AISTATS 2022
Virtual poster "Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities
and Connections With Cocoercivity"
Online, 29 March, 2022

[13] AISTATS 2022 
Virtual poster "Stochastic Extragradient: General Analysis and Improved Rates"
Online, 28 March, 2022

[12] NeurIPS 2021 
Virtual poster "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices"
Online, 10 December, 2021

[11] ICML 2021
Virtual poster "MARINA: Faster Non-Convex Distributed Learning with Compression"
Online, 21 July, 2021

[10] AISTATS 2021
Virtual poster "Local SGD: Unified Theory and New Efficient Methods"
Online, 13-15 April, 2021

[9] NeurIPS 2020
Virtual poster "Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping"
[video]
Online, 6-12 December, 2020

[8] NeurIPS 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video]
Online, 6-12 December, 2020

[7] AISTATS 2020
Virtual poster "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
[video]
Online, 26-28 August, 2020

[6] Machine Learning Summer School 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video] [slides]
Online, 8 July, 2020

[5] ICLR 2020 
Virtual poster "A Stochastic Derivative Free Optimization Method with Momentum"
Online, 27 April, 2020

[4] NeurIPS2019 workshop "Optimization Foundations for Reinforcement Learning"
Poster "A Stochastic Derivative Free Optimization Method with Momentum"
Based on the joint work with Adel Bibi, Ozan Sener, El Houcine Bergou and Peter Richtárik
Vancouver, Canada, 14 December, 2019

[3] NeurIPS2019 workshop "Beyond First Order Methods in ML"
Poster "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Based on the joint work with Pavel Dvurechensky and Alexander Gasnikov
Vancouver, Canada, 13 December, 2019

[2] Traditional Youth School "Control, Information and Optimization" organized by Boris Polyak and Elena Gryazina
Poster "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Voronovo, Russia, 10-15 June, 2018
My work was also selected for a talk, which won third prize in the best-talk competition among participants.
[slides of the talk]

[1] KAUST Research Workshop on Optimization and Big Data 
Poster "Stochastic Spectral Descent Methods"
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
KAUST, Thuwal, KSA, 5-7 February, 2018
