no code implementations • 23 Nov 2021 • Lionel Tondji, Sergii Kashubin, Moustapha Cisse
Variance reduction (VR) techniques have contributed significantly to accelerating learning with massive datasets in the smooth and strongly convex setting (Schmidt et al., 2017; Johnson & Zhang, 2013; Roux et al., 2012).
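A classic VR method in this family is SVRG (Johnson & Zhang, 2013): each outer loop computes a full gradient at a snapshot point, and inner stochastic steps correct the sampled gradient with that snapshot. A minimal sketch (the function names and the least-squares-style usage are ours, not from the paper):

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.1, outer=20, inner=None, rng=None):
    """Minimal SVRG sketch. `grad_i(w, i)` returns the gradient of the
    i-th component function; each outer loop takes a snapshot and uses
    the variance-reduced estimator g_i(w) - g_i(w_snap) + full_grad."""
    rng = rng or np.random.default_rng(0)
    inner = inner or n
    w = w0.copy()
    for _ in range(outer):
        w_snap = w.copy()
        # full gradient at the snapshot (one pass over all n components)
        full = sum(grad_i(w_snap, i) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full)
    return w

# toy problem: minimize (1/n) * sum_i 0.5 * (w - c_i)^2, minimizer = mean(c)
c = np.array([1.0, 2.0, 3.0])
w = svrg(lambda w, i: w - c[i], np.array([0.0]), n=3, inner=3)
```

The correction term makes the per-step gradient estimate unbiased with vanishing variance near the optimum, which is what yields the linear rates in the strongly convex setting.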
no code implementations • 26 Jul 2021 • Wojciech Sirko, Sergii Kashubin, Marvin Ritter, Abigail Annkah, Yasser Salah Eddine Bouchareb, Yann Dauphin, Daniel Keysers, Maxim Neumann, Moustapha Cisse, John Quinn
Identifying the locations and footprints of buildings is vital for many practical and scientific purposes.
no code implementations • 24 Jun 2020 • Forest Yang, Moustapha Cisse, Sanmi Koyejo
In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
2 code implementations • 13 Feb 2018 • Yossi Adi, Carsten Baum, Moustapha Cisse, Benny Pinkas, Joseph Keshet
Unfortunately, once the models are sold they can be easily copied and redistributed.
no code implementations • 10 Jan 2018 • Felix Kreuk, Yossi Adi, Moustapha Cisse, Joseph Keshet
We also present two black-box attacks: in the first, adversarial examples generated with a system trained on YOHO are used to attack a system trained on NTIMIT; in the second, adversarial examples generated with a system trained on Mel-spectrum features are used to attack a system trained on MFCC features.
no code implementations • ECCV 2018 • Pierre Stock, Moustapha Cisse
ConvNets and ImageNet have driven the recent success of deep learning for image classification.
2 code implementations • NeurIPS 2017 • Edouard Grave, Moustapha Cisse, Armand Joulin
Recently, continuous cache models were proposed as extensions to recurrent neural network language models, to adapt their predictions to local changes in the data distribution.
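The core idea of the neural cache is to store recent hidden states and re-score words from the local context by how well their stored states match the current one. A rough sketch of that blending step (function and argument names are ours; the exact parameterization in the paper may differ):

```python
import numpy as np

def cache_probs(p_lm, hidden, history_hiddens, history_words, vocab_size,
                lam=0.1, theta=1.0):
    """Blend a base LM distribution with a continuous cache built from
    recent hidden states.

    p_lm: (V,) base model distribution over the next word.
    hidden: (d,) current hidden state.
    history_hiddens: (T, d) hidden states from the recent context.
    history_words: (T,) word ids emitted at those positions.
    """
    # score each cached position by similarity to the current state
    scores = np.exp(theta * history_hiddens @ hidden)
    cache = np.zeros(vocab_size)
    np.add.at(cache, history_words, scores)  # accumulate scores per word id
    cache /= cache.sum()
    # linear interpolation between the static LM and the cache
    return (1 - lam) * p_lm + lam * cache

# toy usage: word 2 was seen in a context matching the current state
p = cache_probs(np.full(5, 0.2), np.array([1.0, 0.0]),
                np.array([[1.0, 0.0], [0.0, 1.0]]),
                np.array([2, 3]), vocab_size=5, lam=0.5)
```

Because the cache is built from hidden states rather than raw counts, it adapts to local distribution shifts without any retraining.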
1 code implementation • ICLR 2018 • Chuan Guo, Mayank Rana, Moustapha Cisse, Laurens van der Maaten
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system.
71 code implementations • ICLR 2018 • Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
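The mixup recipe itself is a few lines: sample a mixing weight from a Beta distribution and train on convex combinations of random pairs of inputs and their one-hot labels. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return a mixed batch: lam * (x, y) + (1 - lam) * permuted (x, y).

    x: (batch, ...) inputs; y: (batch, classes) one-hot labels.
    lam ~ Beta(alpha, alpha), as in the mixup training scheme.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))    # random pairing within the batch
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y + (1 - lam) * y[perm]
    return x_mixed, y_mixed

# toy usage: 4 examples, 4 classes
x = np.arange(8, dtype=float).reshape(4, 2)
y = np.eye(4)
xm, ym = mixup_batch(x, y)
```

Training on these soft targets encourages linear behavior between examples, which is the mechanism behind the regularization effects listed above.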
no code implementations • 17 Jul 2017 • Moustapha Cisse, Yossi Adi, Natalia Neverova, Joseph Keshet
Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines.
1 code implementation • ICML 2017 • Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, Nicolas Usunier
We introduce Parseval networks, a form of deep neural networks in which the Lipschitz constant of linear, convolutional and aggregation layers is constrained to be smaller than 1.
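The constraint is maintained by keeping weight matrices approximately row-orthonormal (Parseval tight frames), so the layer cannot expand its input. A sketch of the retraction step W ← (1+β)W − βWWᵀW used to enforce this, applied repeatedly here for illustration (in training it would interleave with gradient updates; exact hyperparameters are assumptions):

```python
import numpy as np

def parseval_retraction(W, beta=0.001, steps=1):
    """Push a weight matrix toward the Parseval (row-orthonormal) manifold,
    bounding the layer's Lipschitz constant by 1. Each step applies the
    retraction W <- (1 + beta) * W - beta * W W^T W."""
    for _ in range(steps):
        W = (1 + beta) * W - beta * W @ W.T @ W
    return W

# repeated application drives W W^T toward the identity
W = np.random.default_rng(0).normal(size=(3, 5)) * 0.1
W = parseval_retraction(W, beta=0.5, steps=200)
```

The update rescales each singular value s via s ← (1+β)s − βs³, whose stable fixed point is s = 1, i.e. an orthonormal row set.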