1 code implementation • 27 Jan 2021 • Siddhant Garg, Goutham Ramakrishnan, Varun Thumbe
Large datasets in NLP suffer from noisy labels due to erroneous automatic and human annotation procedures.
no code implementations • 11 Jun 2020 • Goutham Ramakrishnan, Aws Albarghouthi
Deep neural networks are vulnerable to a range of adversaries.
no code implementations • 8 May 2020 • Siddhant Garg, Goutham Ramakrishnan
The last few decades have seen significant breakthroughs in the fields of deep learning and quantum computing.
2 code implementations • EMNLP 2020 • Siddhant Garg, Goutham Ramakrishnan
Modern text classification models are susceptible to adversarial examples: perturbed versions of the original text that are indiscernible to humans but get misclassified by the model.
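As a rough illustration of such attacks (not this paper's method), here is a word-substitution sketch; the classifier and synonym table are toy placeholders:

```python
# Word-substitution sketch of a text adversarial attack; the classifier
# and synonym table below are hypothetical stand-ins, not the paper's method.

SYNONYMS = {
    "great": ["fine", "decent"],
    "awful": ["poor", "bad"],
}

def predict_label(text):
    """Stand-in for a trained sentiment classifier (1 = positive, 0 = negative)."""
    return 1 if "great" in text else 0

def attack(sentence):
    """Greedily swap words for synonyms until the predicted label flips."""
    original_label = predict_label(sentence)
    words = sentence.split()
    for i, word in enumerate(words):
        for candidate in SYNONYMS.get(word, []):
            perturbed = " ".join(words[:i] + [candidate] + words[i + 1:])
            if predict_label(perturbed) != original_label:
                return perturbed  # still readable to a human, but the label changed
    return None  # no single-word substitution fooled the classifier

print(attack("the movie was great"))  # e.g. "the movie was fine"
```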
1 code implementation • 7 Feb 2020 • Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, Thomas Reps
Deep neural networks are vulnerable to adversarial examples: small input perturbations that result in incorrect predictions.
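For intuition, a minimal FGSM-style sketch of such a perturbation on a generic differentiable classifier; the model, data, and epsilon below are placeholders rather than this paper's setting:

```python
# Minimal FGSM-style sketch: nudge the input by epsilon in the direction
# that increases the loss. Model and data are toy placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus an epsilon-sized step in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a random linear classifier and random inputs.
model = nn.Linear(10, 3)
x = torch.randn(4, 10)
y = torch.randint(0, 3, (4,))
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # every coordinate moved by at most epsilon
```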
1 code implementation • 30 Sep 2019 • Goutham Ramakrishnan, Yun Chan Lee, Aws Albarghouthi
When a model makes a consequential decision, e.g., denying someone a loan, it also needs to generate actionable, realistic feedback on what the person can do to favorably change the decision.
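A toy sketch of this idea, using a hypothetical loan-approval rule and a hand-picked set of candidate actions (not the paper's synthesis approach):

```python
# Illustrative sketch of "actionable feedback": search a few candidate actions
# and report one that flips a hypothetical loan-denial decision.

def approve_loan(features):
    """Toy decision rule standing in for a trained model."""
    return features["income"] > 50_000 and features["open_debts"] <= 2

candidate_actions = [
    {"income": +5_000},   # e.g., document additional income
    {"open_debts": -1},   # e.g., close one line of credit
]

def suggest_action(features):
    """Return the first candidate action that turns a denial into an approval."""
    for action in candidate_actions:
        changed = dict(features)
        for key, delta in action.items():
            changed[key] += delta
        if approve_loan(changed):
            return action
    return None

print(suggest_action({"income": 48_000, "open_debts": 2}))  # {'income': 5000}
```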
1 code implementation • 10 Jan 2019 • Goutham Ramakrishnan, Deepak Anand, Amit Sethi
Normalizing unwanted color variations due to differences in staining processes and scanner responses has been shown to aid machine learning in computational pathology.
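As a rough illustration of what such normalization can look like, a simple per-channel statistics-matching sketch (Reinhard-style, operating in RGB for brevity; not this paper's method):

```python
# Hedged sketch of color normalization: shift and scale each channel of an
# image so its mean/std match a reference image's statistics.

import numpy as np

def normalize_to_reference(image, reference):
    """Match per-channel mean and standard deviation of `image` to `reference`."""
    image = image.astype(np.float64)
    reference = reference.astype(np.float64)
    out = np.empty_like(image)
    for c in range(3):
        src_mean, src_std = image[..., c].mean(), image[..., c].std()
        ref_mean, ref_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (image[..., c] - src_mean) / (src_std + 1e-8) * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage with random patches standing in for slide images.
src = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
ref = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(normalize_to_reference(src, ref).shape)  # (64, 64, 3)
```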