1 code implementation • 2 Oct 2024 • Sina Mavali, Jonas Ricker, David Pape, Yash Sharma, Asja Fischer, Lea Schönherr
While generative AI (GenAI) offers countless possibilities for creative and productive tasks, artificially generated media can be misused for fraud, manipulation, scams, misinformation campaigns, and more.
1 code implementation • 4 Apr 2024 • Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation.
no code implementations • 12 Mar 2024 • Yash Sharma, Basil Abraham, Preethi Jyothi
An important and difficult task in code-switched speech recognition is language identification, since many words in the two languages can sound similar, especially in certain accents.
1 code implementation • 10 Jan 2024 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
We develop a nonparametric identifiability theory that formalizes this principle and shows that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
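The sparsity regularization above can be sketched as an L1 penalty on the entries of a learned graph's weight matrix, which drives most dependencies to zero (the function name, weights, and the choice of a plain L1 penalty are illustrative, not the paper's exact estimator):

```python
def l1_sparsity_penalty(graph_weights, lam=0.1):
    """L1 penalty encouraging a sparse learned dependency graph.

    graph_weights: nested list, entry [i][j] weighting the influence
    of latent factor j on latent factor i (illustrative encoding).
    """
    return lam * sum(abs(w) for row in graph_weights for w in row)

# A graph with only two nonzero edges incurs a small penalty.
print(l1_sparsity_penalty([[0.5, 0.0], [0.0, 2.0]], lam=0.1))  # 0.25
```

During training, this term would be added to the reconstruction or likelihood objective so that gradient descent prunes spurious edges.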
no code implementations • 30 Nov 2023 • Daniel McDuff, Mike Schaekermann, Tao Tu, Anil Palepu, Amy Wang, Jake Garrison, Karan Singhal, Yash Sharma, Shekoofeh Azizi, Kavita Kulkarni, Le Hou, Yong Cheng, Yun Liu, S Sara Mahdavi, Sushant Prakash, Anupam Pathak, Christopher Semturs, Shwetak Patel, Dale R Webster, Ewa Dominowska, Juraj Gottweis, Joelle Barral, Katherine Chou, Greg S Corrado, Yossi Matias, Jake Sunshine, Alan Karthikesalingam, Vivek Natarajan
Comparing the two assisted study arms, the DDx quality score was higher for clinicians assisted by our LLM (top-10 accuracy 51.7%) compared to clinicians without its assistance (36.1%) (McNemar's Test: 45.7, p < 0.01) and clinicians with search (44.4%) (4.75, p = 0.03).
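For reference, McNemar's test compares paired correct/incorrect outcomes across two conditions; a minimal sketch of the uncorrected chi-squared statistic, with made-up discordant-pair counts (not the study's data):

```python
def mcnemar_statistic(b: int, c: int) -> float:
    """Chi-squared statistic for McNemar's test (no continuity correction).

    b: cases the first condition got right and the second got wrong;
    c: cases the second condition got right and the first got wrong.
    Only these discordant pairs enter the statistic.
    """
    return (b - c) ** 2 / (b + c)

print(mcnemar_statistic(30, 10))  # 10.0
```

The resulting statistic is compared against a chi-squared distribution with one degree of freedom to obtain the p-value.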
no code implementations • 15 Nov 2023 • Ian Berlot-Attwell, Kumar Krishna Agrawal, A. Michael Carrell, Yash Sharma, Naomi Saphra
Although modern neural networks often generalize to new combinations of familiar concepts, the conditions that enable such compositionality have long been an open question.
no code implementations • 24 Aug 2023 • Shan Guleria, Benjamin Schwartz, Yash Sharma, Philip Fernandes, James Jablonski, Sodiq Adewole, Sanjana Srivastava, Fisher Rhoads, Michael Porter, Michelle Yeghyayan, Dylan Hyatt, Andrew Copland, Lubaina Ehsan, Donald Brown, Sana Syed
Discussion: Each of these five challenges is addressed, in part, by one of our AI-based models.
no code implementations • 23 May 2023 • Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel
Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects.
no code implementations • 29 Jul 2022 • Nazanin Moradinasab, Yash Sharma, Laura S. Shankman, Gary K. Owens, Donald E. Brown
In this study, we used a weakly supervised learning approach to train the HoVer-Net segmentation model using point annotations to detect nuclei in fluorescent images.
no code implementations • 8 Jul 2022 • Yash Sharma, Yi Zhu, Chris Russell, Thomas Brox
While self-supervised learning has enabled effective representation learning in the absence of labels, for vision, video remains a relatively untapped source of supervision.
1 code implementation • 29 Jun 2022 • Yash Sharma, Sana Syed, Donald E. Brown
Nuclei vary substantially in structure and appearance across different cancer types, leading to a drop in performance of deep learning models when trained on one cancer type and tested on another.
1 code implementation • 26 Apr 2022 • Yash Sharma, Nick Coronato, Donald E. Brown
Exercise testing has been available for more than a half-century and is a remarkably versatile tool for obtaining diagnostic and prognostic information about patients with a range of diseases, especially cardiovascular and pulmonary disease.
1 code implementation • NeurIPS 2021 • Yilun Du, Shuang Li, Yash Sharma, Joshua B. Tenenbaum, Igor Mordatch
In this work, we propose COMET, which discovers and represents concepts as separate energy functions, enabling us to represent both global concepts and objects under a unified framework.
1 code implementation • 21 Jul 2021 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.
no code implementations • 13 Jun 2021 • Yash Sharma, Lubaina Ehsan, Sana Syed, Donald E. Brown
In this work, we compare the performance of features extracted from networks trained on ImageNet and histopathology data.
1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.
Ranked #1 on Image Classification on Causal3DIdent
1 code implementation • 19 Mar 2021 • Yash Sharma, Aman Shrivastava, Lubaina Ehsan, Christopher A. Moskaluk, Sana Syed, Donald E. Brown
We regularized the clustering mechanism by introducing a KL-divergence loss between the attention weights of patches in a cluster and the uniform distribution.
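That regularizer can be sketched as the KL divergence from the attention weights over patches in a cluster to the uniform distribution (the helper below is a simplified illustration, not the paper's implementation):

```python
import math

def kl_to_uniform(weights):
    """KL(p || U): divergence from attention weights p to the uniform
    distribution over len(p) patches. Zero when attention is uniform,
    growing as attention concentrates on a few patches."""
    n = len(weights)
    return sum(p * math.log(p * n) for p in weights if p > 0)

print(kl_to_uniform([0.25, 0.25, 0.25, 0.25]))  # 0.0
```

Adding this term to the loss discourages the attention mechanism from collapsing onto a single patch within each cluster.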
1 code implementation • 17 Feb 2021 • Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
Contrastive learning has recently seen tremendous success in self-supervised learning.
Ranked #1 on Disentanglement on KITTI-Masks
no code implementations • ICLR 2021 • Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf
Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution.
no code implementations • 12 Oct 2020 • Yash Sharma, Basil Abraham, Karan Taneja, Preethi Jyothi
Building Automatic Speech Recognition (ASR) systems for code-switched speech has recently gained renewed attention due to the widespread use of speech technologies in multilingual communities worldwide.
Automatic Speech Recognition (ASR) +3
1 code implementation • ICLR 2021 • David Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan Paiton
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.
Ranked #1 on Disentanglement on Natural Sprites
no code implementations • 13 Jul 2020 • Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf
Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalize well and are robust to changes in the input distribution.
1 code implementation • 12 Jun 2020 • Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker
Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.
no code implementations • 23 May 2020 • Simra Shahid, Tanmay Singh, Yash Sharma, Kapil Sharma
With the increasing number of cybersecurity threats, it becomes more difficult for researchers to skim through the security reports for malware analysis.
1 code implementation • 4 Sep 2019 • Aman Shrivastava, Will Adorno, Yash Sharma, Lubaina Ehsan, S. Asad Ali, Sean R. Moore, Beatrice C. Amadi, Paul Kelly, Sana Syed, Donald E. Brown
We propose a Self-Attentive Adversarial Stain Normalization (SAASN) approach for the normalization of multiple stain appearances to a common domain.
no code implementations • 28 Feb 2019 • Yash Sharma, Gavin Weiguang Ding, Marcus Brubaker
Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains.
1 code implementation • ICLR 2020 • Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, Ruitong Huang
We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.
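For deep networks such margins must be computed via optimization, but the definition is easiest to see in the linear case, where the signed distance from an input to the decision boundary w·x + b = 0 has a closed form (this sketch is illustrative, not the paper's algorithm):

```python
import math

def linear_margin(w, b, x):
    """Signed distance from input x to the decision boundary of a
    linear classifier w.x + b = 0; the margin in the linear case."""
    dot = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return dot / norm

print(linear_margin([3.0, 4.0], 0.0, [1.0, 2.0]))  # 2.2
```

Margin maximization then amounts to pushing this distance up for correctly classified inputs, which is what makes the viewpoint natural for adversarial robustness.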
1 code implementation • 29 Sep 2018 • Yash Sharma, Tien-Dung Le, Moustafa Alzantot
Our team participated in the CAAD 2018 competition, and won 1st place in both attack subtracks, non-targeted and targeted adversarial attacks, and 3rd place in defense.
3 code implementations • 28 May 2018 • Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava
Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
5 code implementations • EMNLP 2018 • Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang
Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify.
no code implementations • 27 Mar 2018 • Yash Sharma, Pin-Yu Chen
Feature Squeezing is a recently proposed defense method which reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample.
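A common squeezer in this setting is bit-depth reduction, which maps many nearby inputs onto the same coarsened sample; a minimal sketch (the helper name and pixel values are illustrative):

```python
def squeeze_bit_depth(pixels, bits):
    """Reduce color depth of pixel values in [0, 1]: inputs that differ
    only in low-order bits collapse to a single squeezed sample,
    shrinking the search space available to an adversary."""
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]

print(squeeze_bit_depth([0.12, 0.13, 0.87], 1))  # [0.0, 0.0, 1.0]
```

A detector can then compare a model's predictions on the original and squeezed inputs; large disagreement suggests an adversarial perturbation.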
1 code implementation • 19 Feb 2018 • Yingzhen Li, John Bradshaw, Yash Sharma
There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed.
no code implementations • 30 Oct 2017 • Yash Sharma, Pin-Yu Chen
The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model.
6 code implementations • 13 Sep 2017 • Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh
Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.
5 code implementations • 14 Aug 2017 • Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh
However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.
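The core idea can be sketched as a symmetric finite-difference estimate of one coordinate of the gradient, using only black-box loss evaluations (this simplified helper is illustrative; the full ZOO attack adds coordinate selection, dimension reduction, and other tricks):

```python
def zoo_gradient_estimate(f, x, i, h=1e-4):
    """Symmetric-difference estimate of the i-th partial derivative of a
    black-box loss f at x, using only function queries (no backprop)."""
    x_plus = list(x)
    x_plus[i] += h
    x_minus = list(x)
    x_minus[i] -= h
    return (f(x_plus) - f(x_minus)) / (2 * h)

# Sanity check on a known loss: f(x) = x0^2 + 3*x1, so df/dx1 = 3 everywhere.
g = zoo_gradient_estimate(lambda v: v[0] ** 2 + 3 * v[1], [1.0, 2.0], 1)
print(round(g, 6))  # 3.0
```

Repeating this per coordinate yields a gradient estimate that can drive standard attack optimizers directly against the target model, which is why no substitute model is needed.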
13 code implementations • 3 Oct 2016 • Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, Patrick McDaniel
An adversarial example library for constructing attacks, building defenses, and benchmarking both