Asymptotic study of stochastic adaptive algorithm in non-convex landscape

Document record

Collection: Archives ouvertes
License: info:eu-repo/semantics/OpenAccess


Cite this document

Sébastien Gadat et al., « Asymptotic study of stochastic adaptive algorithm in non-convex landscape », HAL-SHS : économie et finance, ID : 10670/1.8pbwz7



Abstract (En)

This paper studies asymptotic properties of adaptive algorithms widely used in optimization and machine learning, among them Adagrad and RMSProp, which are involved in most black-box deep learning algorithms. We adopt a non-convex landscape optimization point of view, consider a one-time-scale parametrization, and cover the situations where these algorithms are used with or without mini-batches. Taking the point of view of stochastic algorithms, we establish the almost sure convergence of these methods, when run with a decreasing step-size, towards the set of critical points of the target function. Under a mild extra assumption on the noise, we also obtain convergence towards the set of minimizers of the function. Along the way, we obtain a "convergence rate" for the methods, in the vein of the work of [GL13].
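To illustrate the setting the abstract describes, the sketch below shows the standard Adagrad and RMSProp update rules run with a decreasing step-size on a noisy gradient oracle for a simple non-convex function. This is a minimal illustration, not the paper's method: the step-size schedule, noise level, and test function are assumptions chosen for the example.

```python
import numpy as np

def adagrad_step(x, grad, state, gamma, eps=1e-8):
    # Adagrad: accumulate squared gradients, scale the step by their root.
    state = state + grad ** 2
    return x - gamma * grad / (np.sqrt(state) + eps), state

def rmsprop_step(x, grad, state, gamma, beta=0.9, eps=1e-8):
    # RMSProp: exponential moving average of squared gradients instead.
    state = beta * state + (1.0 - beta) * grad ** 2
    return x - gamma * grad / (np.sqrt(state) + eps), state

# Non-convex target f(x) = x^4/4 - x^2/2, with critical points {-1, 0, 1}
# and minimizers {-1, 1}; its gradient is x^3 - x.
f_grad = lambda x: x ** 3 - x

rng = np.random.default_rng(0)
x, state = 2.0, 0.0
for n in range(1, 5001):
    g = f_grad(x) + 0.01 * rng.standard_normal()  # noisy gradient oracle
    gamma_n = 0.5 / n ** 0.6                      # decreasing step-size
    x, state = adagrad_step(x, g, state, gamma_n)

print(x)  # the iterate settles near a minimizer of f
```

With a decreasing step-size and bounded noise, the iterate stabilizes near one of the critical points (here the minimizer x = 1), which is the qualitative behavior the paper's almost sure convergence results make rigorous.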
