no code implementations • 4 Jun 2021 • Leonid Berlyand, Robert Creese, Pierre-Emmanuel Jabin
We introduce two-scale loss functions for use in various gradient descent algorithms applied to classification problems via deep neural networks.
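As a rough illustration of the idea of penalizing training points on two different scales, here is a minimal NumPy sketch; the loss form, the threshold, and the fine-scale weight are hypothetical choices made for illustration and are not the loss function defined in the paper.

    import numpy as np

    def softmax(z):
        # Numerically stable softmax over the last axis.
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def two_scale_loss(logits, labels, threshold=0.5, fine_weight=10.0):
        # Hypothetical two-scale form: samples whose true-class probability
        # falls below `threshold` ("poorly classified") are penalized on a
        # finer scale (larger weight) than well-classified samples.
        p = softmax(logits)
        p_true = p[np.arange(len(labels)), labels]
        per_sample = -np.log(p_true + 1e-12)
        weights = np.where(p_true < threshold, fine_weight, 1.0)
        return float(np.mean(weights * per_sample))

    # Toy usage: 3 samples, 4 classes.
    logits = np.array([[2.0, 0.1, -1.0, 0.3],
                       [0.2, 1.5, 0.0, -0.5],
                       [-0.3, 0.1, 0.0, 2.2]])
    labels = np.array([0, 1, 3])
    print(two_scale_loss(logits, labels))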
no code implementations • 10 Feb 2020 • Leonid Berlyand, Pierre-Emmanuel Jabin, C. Alex Safsten
Our main result consists of two novel conditions on the classifier, either of which ensures stability of training; that is, we derive tight bounds on accuracy as the loss decreases.
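For intuition about how a decreasing loss can force accuracy upward, the sketch below checks an elementary and much weaker bound than the paper's result: under softmax cross-entropy, every misclassified sample has true-class probability at most 1/2 and so contributes at least log 2 to the loss, giving accuracy >= 1 - mean loss / log 2.

    import numpy as np

    def accuracy_and_elementary_bound(logits, labels):
        # Elementary illustration (not the paper's bound): a misclassified
        # sample has true-class probability <= 1/2, hence per-sample loss
        # >= log(2), so error rate <= mean_loss / log(2).
        z = logits - logits.max(axis=-1, keepdims=True)
        p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
        p_true = p[np.arange(len(labels)), labels]
        mean_loss = -np.log(p_true).mean()
        accuracy = (p.argmax(axis=-1) == labels).mean()
        bound = 1.0 - mean_loss / np.log(2.0)
        return accuracy, bound  # accuracy >= bound always holds

    logits = np.array([[3.0, 0.0], [0.2, 0.1], [-1.0, 2.0]])
    labels = np.array([0, 1, 1])
    acc, bnd = accuracy_and_elementary_bound(logits, labels)
    print(acc, bnd)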
no code implementations • 31 Oct 2019 • Wojciech Czaja, Dong Dong, Pierre-Emmanuel Jabin, Franck Olivier Ndjakou Njeunje
We present a new feature extraction method for complex and large datasets, based on the concept of transport operators on graphs.
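The construction below is only a hypothetical, diffusion-maps-style sketch of extracting features from a graph operator; the Gaussian kernel, the random-walk normalization, and all parameters are assumptions for illustration and not the transport operators defined in the paper.

    import numpy as np

    def graph_operator_features(X, n_features=2, sigma=1.0, steps=3):
        # Build a Gaussian-kernel graph on the data points, normalize it into
        # a row-stochastic (random-walk) operator, and use its leading
        # non-trivial eigenvectors, scaled by the eigenvalues raised to
        # `steps`, as low-dimensional features.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2.0 * sigma ** 2))
        P = W / W.sum(axis=1, keepdims=True)   # row-stochastic operator on the graph
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        idx = order[1:1 + n_features]          # skip the trivial constant eigenvector
        return (vals.real[idx] ** steps) * vecs.real[:, idx]

    X = np.random.default_rng(0).normal(size=(50, 5))
    feats = graph_operator_features(X, n_features=2)
    print(feats.shape)  # (50, 2)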