no code implementations • 25 May 2023 • Michael Kounavis, Ousmane Dia, Ilqar Ramazanli
We conclude that influence functions can be made practical, even for large-scale machine learning systems, and that influence values can be taken into account by algorithms that selectively remove training points as part of the learning process.
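As a rough illustration of how influence-guided data removal can work, here is a minimal PyTorch sketch using a first-order gradient-alignment proxy for influence (a TracIn-style approximation, not the paper's estimator); `model`, `loss_fn`, `train_examples`, and `val_batch` are hypothetical placeholders.

```python
import torch

def influence_scores(model, loss_fn, train_examples, val_batch):
    """First-order proxy for influence: the dot product between each
    training example's gradient and the validation-loss gradient.
    Positive alignment suggests the example helps; negative, that it hurts."""
    # Gradient of the validation loss w.r.t. model parameters.
    val_loss = loss_fn(model(val_batch[0]), val_batch[1])
    val_grads = torch.autograd.grad(val_loss, list(model.parameters()))

    scores = []
    for x, y in train_examples:
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        scores.append(sum((g * v).sum() for g, v in zip(grads, val_grads)).item())
    return scores

def filter_training_set(model, loss_fn, train_examples, val_batch, drop_frac=0.05):
    """Selectively remove the fraction of training points whose gradients
    are most misaligned with the validation objective (lowest scores)."""
    scores = influence_scores(model, loss_fn, train_examples, val_batch)
    keep = sorted(range(len(scores)), key=lambda i: scores[i])[int(drop_frac * len(scores)):]
    return [train_examples[i] for i in keep]
```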
1 code implementation • 8 Mar 2022 • Yifei Ming, Yiyou Sun, Ousmane Dia, Yixuan Li
Out-of-distribution (OOD) detection is a critical task for reliable machine learning.
no code implementations • 6 Apr 2021 • Sanjay Kariyappa, Ousmane Dia, Moinuddin K Qureshi
To this end, we propose Adaptive Noise Injection (ANI), which uses a lightweight DNN on the client side to inject noise into each input before transmitting it to the service provider to perform inference.
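A minimal sketch of the client-side mechanism described above, in PyTorch; the architecture and `noise_scale` are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class NoiseInjector(nn.Module):
    """Light-weight client-side network that produces an input-dependent
    noise pattern, added to the input before it leaves the client."""
    def __init__(self, channels=3, noise_scale=0.1):
        super().__init__()
        self.noise_scale = noise_scale
        self.net = nn.Sequential(               # deliberately small: two conv layers
            nn.Conv2d(channels, 8, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, channels, 3, padding=1),
            nn.Tanh(),                          # bound the noise to [-1, 1]
        )

    def forward(self, x):
        return x + self.noise_scale * self.net(x)

# Client-side usage: obfuscate the input, then send it to the server.
injector = NoiseInjector()
image = torch.rand(1, 3, 32, 32)
obfuscated = injector(image)   # transmitted to the service provider for inference
```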
1 code implementation • 13 Nov 2019 • Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
Recent research has found that many families of machine learning models are vulnerable to adversarial examples: inputs that are specifically designed to cause the target model to produce erroneous outputs.
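One of the classic constructions such surveys cover is the fast gradient sign method (FGSM); a minimal PyTorch sketch, where `model` and the `epsilon` budget are illustrative placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Fast gradient sign method: perturb the input in the direction that
    maximally increases the loss, within an L-infinity ball of radius epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step yields the adversarial example.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range
```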
1 code implementation • 25 Sep 2019 • Chiheb Trabelsi, Olexa Bilaniuk, Ousmane Dia, Ying Zhang, Mirco Ravanelli, Jonathan Binas, Negar Rostamzadeh, Christopher J Pal
Using the Wall Street Journal dataset, we compare our phase-aware loss to several others that operate in both the time and frequency domains, and demonstrate the effectiveness of our proposed signal extraction method and loss.
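As a rough illustration of what "phase-aware" can mean, here is a minimal sketch of a spectral loss that penalizes phase as well as magnitude error by comparing complex STFTs; it is not the paper's exact loss, and the FFT parameters are assumptions.

```python
import torch

def phase_aware_spectral_loss(est_wave, ref_wave, n_fft=512, hop=128):
    """Compare estimate and reference in the complex STFT domain, so errors
    in phase are penalized alongside errors in magnitude."""
    window = torch.hann_window(n_fft)
    est = torch.stft(est_wave, n_fft, hop, window=window, return_complex=True)
    ref = torch.stft(ref_wave, n_fft, hop, window=window, return_complex=True)
    mag_term = (est.abs() - ref.abs()).abs().mean()   # magnitude error only
    complex_term = (est - ref).abs().mean()           # also carries phase error
    return mag_term + complex_term

# Usage sketch: est = model(mixture); loss = phase_aware_spectral_loss(est, clean)
```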
no code implementations • NeurIPS Deep Inverse Workshop 2019 • Chiheb Trabelsi, Olexa Bilaniuk, Ousmane Dia, Ying Zhang, Mirco Ravanelli, Jonathan Binas, Negar Rostamzadeh, Christopher J Pal
Building on recent advances, we propose a new deep complex-valued method for signal retrieval and extraction in the frequency domain.
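The core building block of complex-valued networks can be sketched with two real layers implementing complex multiplication; the layer below is a minimal illustration, not the paper's architecture, and the frame size is an assumption.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex-valued affine layer built from two real layers:
    (A + iB)(x + iy) = (Ax - By) + i(Bx + Ay)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.real = nn.Linear(in_features, out_features)
        self.imag = nn.Linear(in_features, out_features)

    def forward(self, z):   # z is a complex tensor
        x, y = z.real, z.imag
        return torch.complex(self.real(x) - self.imag(y),
                             self.imag(x) + self.real(y))

# A frequency-domain input, e.g. one STFT frame of a mixed signal.
frame = torch.randn(1, 257, dtype=torch.cfloat)
out = ComplexLinear(257, 128)(frame)
```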
2 code implementations • NeurIPS 2018 • Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn
Learning to infer a Bayesian posterior from a few-shot dataset is an important step towards robust meta-learning due to the model uncertainty inherent in the problem.
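One common way to approximate such a posterior with a small ensemble is Stein variational gradient descent (SVGD); the sketch below is illustrative rather than necessarily the paper's procedure, and the particle count, kernel bandwidth, and toy target are assumptions.

```python
import torch

def rbf_kernel(particles, bandwidth=1.0):
    """RBF kernel matrix and its gradient w.r.t. the first argument."""
    diff = particles.unsqueeze(1) - particles.unsqueeze(0)   # [n, n, d]
    sq_dist = (diff ** 2).sum(-1)
    k = torch.exp(-sq_dist / (2 * bandwidth ** 2))
    grad_k = -(k.unsqueeze(-1) * diff) / bandwidth ** 2      # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd_step(particles, log_prob_fn, step_size=0.1):
    """One SVGD update: particles move toward high posterior density while
    a repulsive kernel term keeps them spread out, so the ensemble
    approximates the posterior rather than collapsing to a point."""
    particles = particles.detach().requires_grad_(True)
    log_p = log_prob_fn(particles).sum()
    score = torch.autograd.grad(log_p, particles)[0]          # [n, d]
    k, grad_k = rbf_kernel(particles.detach())
    phi = (k @ score + grad_k.sum(0)) / particles.shape[0]
    return (particles + step_size * phi).detach()

# Toy usage: approximate a standard Gaussian posterior with 10 particles.
particles = torch.randn(10, 2)
for _ in range(100):
    particles = svgd_step(particles, lambda p: -0.5 * (p ** 2).sum(-1))
```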