Talks

[9] "MARINA: Faster Non-Convex Distributed Learning with Compression"
Federated Learning One-World Seminar, online, 10 March, 2021 
[slides] [video]

[8] "Linearly Converging Error Compensated SGD"
NeurIPS New Year AfterParty at Yandex, 19 January, 2021 
[slides] [video]

[7] "Linearly Converging Error Compensated SGD"
Federated Learning One-World Seminar and Russian Optimization Seminar, online, 7 October, 2020
[slides] [video]

[6] "On the convergence of SGD-like methods for convex and non-convex optimization problems"
Russian Optimization Seminar, online, 8 July, 2020 (in Russian)
[slides] [video]

[5] "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
SIERRA, INRIA, Paris, France, 18 October, 2019
[slides]

[4] 23rd International Symposium on Mathematical Programming
Section "New methods for stochastic optimization and variational inequalities"
Talk "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Bordeaux, France, 6 July, 2018
[slides]

[3] Workshop "Optimization at Work"
Talk "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Moscow, Russia, 14 April, 2018
[slides] [video]

[2] 60th Scientific Conference of MIPT
Section "Information transmission problems, data analysis and optimization"
Talk "About accelerated Directional Search with non-Euclidean prox-structure"
Moscow, Russia, 25 November, 2017
[slides]

[1] Workshop "Optimization at Work"
Talk "Accelerated Directional Search with non-Euclidean prox-structure"
Moscow, Russia, 27 October, 2017
[slides]

Posters

[11] ICML 2021
Virtual poster "MARINA: Faster Non-Convex Distributed Learning with Compression"
Online, 21 July, 2021

[10] AISTATS 2021
Virtual poster "Local SGD: Unified Theory and New Efficient Methods"
Online, 13-15 April, 2021

[9] NeurIPS 2020
Virtual poster "Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping"
[video]
Online, 6-12 December, 2020

[8] NeurIPS 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video]
Online, 6-12 December, 2020

[7] AISTATS 2020
Virtual poster "A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent"
[video]
Online, 26-28 August, 2020

[6] Machine Learning Summer School 2020
Virtual poster "Linearly Converging Error Compensated SGD"
[video] [slides]
Online, 8 July, 2020

[5] ICLR 2020 
Virtual poster "A Stochastic Derivative Free Optimization Method with Momentum"
Online, 27 April, 2020

[4] NeurIPS2019 workshop "Optimization Foundations for Reinforcement Learning"
Poster "A Stochastic Derivative Free Optimization Method with Momentum"
Based on joint work with Adel Bibi, Ozan Sener, El Houcine Bergou, and Peter Richtárik
Vancouver, Canada, 14 December, 2019

[3] NeurIPS2019 workshop "Beyond First Order Methods in ML"
Poster "An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization"
Based on joint work with Pavel Dvurechensky and Alexander Gasnikov
Vancouver, Canada, 13 December, 2019

[2] Traditional Youth School "Control, Information and Optimization" organized by Boris Polyak and Elena Gryazina
Poster "An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization"
Voronovo, Russia, 10-15 June, 2018
My work was also selected for a talk, which won third prize in the competition for best talks among the participants.
[slides of the talk]

[1] KAUST Research Workshop on Optimization and Big Data 
Poster "Stochastic Spectral Descent Methods"
Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov, Peter Richtárik
KAUST, Thuwal, Saudi Arabia, 5-7 February, 2018
