Search Results for author: Yash Sharma

Found 25 papers, 17 papers with code

Encoding Cardiopulmonary Exercise Testing Time Series as Images for Classification using Convolutional Neural Network

1 code implementation • 26 Apr 2022 • Yash Sharma, Nick Coronato, Donald E. Brown

Exercise testing has been available for more than a half-century and is a remarkably versatile tool for diagnostic and prognostic information of patients for a range of diseases, especially cardiovascular and pulmonary.

Time Series

Unsupervised Learning of Compositional Energy Concepts

1 code implementation • NeurIPS 2021 • Yilun Du, Shuang Li, Yash Sharma, Joshua B. Tenenbaum, Igor Mordatch

In this work, we propose COMET, which discovers and represents concepts as separate energy functions, enabling us to represent both global concepts as well as objects under a unified framework.

Disentanglement Unsupervised Image Decomposition
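The compositional idea in COMET, representing each concept as its own energy function and combining concepts by summing their energies, can be illustrated with a toy sketch. The quadratic energies and the grid-search minimizer below are illustrative stand-ins, not the paper's learned neural energy functions.

```python
def compose_energies(energy_fns, x):
    """Total energy of x under a set of concept energy functions.

    Concepts combine by summation, so low total energy means x
    satisfies all concepts at once.
    """
    return sum(e(x) for e in energy_fns)

# Two toy "concepts": prefer x near 2 and prefer x near 4. Their sum is
# minimized at the compromise x = 3, found here by a crude grid search.
concepts = [lambda x: (x - 2) ** 2, lambda x: (x - 4) ** 2]
best_x = min((i / 100 for i in range(0, 601)),
             key=lambda x: compose_energies(concepts, x))
```

In the paper the minimization is done in a learned latent/image space rather than over a scalar grid; the sketch only shows why summing per-concept energies yields a joint solution.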

Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA

1 code implementation • 21 Jul 2021 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien

This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.

HistoTransfer: Understanding Transfer Learning for Histopathology

no code implementations • 13 Jun 2021 • Yash Sharma, Lubaina Ehsan, Sana Syed, Donald E. Brown

In this work, we compare the performance of features extracted from networks trained on ImageNet and histopathology data.

Multi-Task Learning

Cluster-to-Conquer: A Framework for End-to-End Multi-Instance Learning for Whole Slide Image Classification

1 code implementation • 19 Mar 2021 • Yash Sharma, Aman Shrivastava, Lubaina Ehsan, Christopher A. Moskaluk, Sana Syed, Donald E. Brown

We regularized the clustering mechanism by introducing a KL-divergence loss between the attention weights of patches in a cluster and the uniform distribution.

Image Classification Multiple Instance Learning +1
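The KL-divergence regularizer described above, pulling a cluster's patch-attention weights toward the uniform distribution so patches contribute evenly, can be sketched as follows. The function name and its standalone use are illustrative assumptions; in the framework this term is added to the training loss.

```python
import math

def kl_to_uniform(attn_weights):
    """KL(attn || uniform) for attention weights summing to 1 over the
    n patches of a cluster. Zero when attention is uniform; grows as
    attention concentrates on a few patches."""
    n = len(attn_weights)
    u = 1.0 / n  # uniform probability over the n patches
    return sum(p * math.log(p / u) for p in attn_weights if p > 0)

# Uniform attention incurs no penalty; peaked attention is penalized.
uniform_loss = kl_to_uniform([0.25, 0.25, 0.25, 0.25])
peaked_loss = kl_to_uniform([0.97, 0.01, 0.01, 0.01])
```

Minimizing this term alongside the classification loss discourages the model from letting one patch dominate a cluster's representation.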

Spatially Structured Recurrent Modules

no code implementations • ICLR 2021 • Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf

Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution.

Starcraft II Video Prediction

Improving Low Resource Code-switched ASR using Augmented Code-switched TTS

no code implementations • 12 Oct 2020 • Yash Sharma, Basil Abraham, Karan Taneja, Preethi Jyothi

Building Automatic Speech Recognition (ASR) systems for code-switched speech has recently gained renewed attention due to the widespread use of speech technologies in multilingual communities worldwide.

Automatic Speech Recognition Data Augmentation

S2RMs: Spatially Structured Recurrent Modules

no code implementations • 13 Jul 2020 • Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf

Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalize well and are robust to changes in the input distribution.

Starcraft II Video Prediction

Benchmarking Unsupervised Object Representations for Video Sequences

1 code implementation • 12 Jun 2020 • Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker

Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.

Multi-Object Tracking Object Detection +1

Devising Malware Characteristics using Transformers

no code implementations • 23 May 2020 • Simra Shahid, Tanmay Singh, Yash Sharma, Kapil Sharma

With the increasing number of cybersecurity threats, it becomes more difficult for researchers to skim through the security reports for malware analysis.

Malware Analysis

Self-Attentive Adversarial Stain Normalization

1 code implementation • 4 Sep 2019 • Aman Shrivastava, Will Adorno, Yash Sharma, Lubaina Ehsan, S. Asad Ali, Sean R. Moore, Beatrice C. Amadi, Paul Kelly, Sana Syed, Donald E. Brown

We propose a Self-Attentive Adversarial Stain Normalization (SAASN) approach for the normalization of multiple stain appearances to a common domain.

Translation whole slide images

On the Effectiveness of Low Frequency Perturbations

no code implementations • 28 Feb 2019 • Yash Sharma, Gavin Weiguang Ding, Marcus Brubaker

Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains.

Adversarial Attack Adversarial Robustness

MMA Training: Direct Input Space Margin Maximization through Adversarial Training

1 code implementation • ICLR 2020 • Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, Ruitong Huang

We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.

Adversarial Defense Adversarial Robustness
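The margin definition above, the distance from an input to the classifier's decision boundary, has a closed form in the linear case that makes the idea concrete. The sketch below is only that special case; MMA itself works with deep networks, where the distance must be approximated (e.g. via adversarial perturbations), and the names here are illustrative.

```python
import math

def linear_margin(w, b, x):
    """Distance from x to the decision boundary of the linear classifier
    sign(w.x + b), i.e. |w.x + b| / ||w||_2."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(score) / norm

# Point (3, 0) relative to the boundary x0 = 1 (w = (1, 0), b = -1):
# the margin is the distance 2 along the first coordinate.
m = linear_margin([1.0, 0.0], -1.0, [3.0, 0.0])
```

Margin maximization then means training so that this distance is as large as possible for correctly classified inputs.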

CAAD 2018: Generating Transferable Adversarial Examples

1 code implementation • 29 Sep 2018 • Yash Sharma, Tien-Dung Le, Moustafa Alzantot

Our team participated in the CAAD 2018 competition, and won 1st place in both attack subtracks, non-targeted and targeted adversarial attacks, and 3rd place in defense.

Adversarial Attack Adversarial Defense +1

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

3 code implementations • 28 May 2018 • Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.

Adversarial Attack Adversarial Robustness

Generating Natural Language Adversarial Examples

5 code implementations • EMNLP 2018 • Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang

Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify.

Natural Language Inference Sentiment Analysis

Bypassing Feature Squeezing by Increasing Adversary Strength

no code implementations • 27 Mar 2018 • Yash Sharma, Pin-Yu Chen

Feature Squeezing is a recently proposed defense method which reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample.

Are Generative Classifiers More Robust to Adversarial Attacks?

1 code implementation • 19 Feb 2018 • Yingzhen Li, John Bradshaw, Yash Sharma

There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed.

Adversarial Defense Adversarial Robustness

Attacking the Madry Defense Model with $L_1$-based Adversarial Examples

no code implementations • 30 Oct 2017 • Yash Sharma, Pin-Yu Chen

The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model.

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

6 code implementations • 13 Sep 2017 • Pin-Yu Chen, Yash Sharma, Huan Zhang, Jin-Feng Yi, Cho-Jui Hsieh

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify.

Adversarial Attack Adversarial Robustness
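The elastic-net regularizer that gives EAD its name combines an L1 and a squared L2 penalty on the adversarial perturbation, encouraging perturbations that are both sparse and small. The sketch below shows only that penalty term; the attack loss it is traded off against, and the value of the weighting coefficient, are omitted or illustrative.

```python
def elastic_net_penalty(delta, beta=1e-2):
    """Elastic-net regularizer beta * ||delta||_1 + ||delta||_2^2 on a
    perturbation delta, as combined in EAD's attack objective."""
    l1 = sum(abs(d) for d in delta)
    l2_sq = sum(d * d for d in delta)
    return beta * l1 + l2_sq

# A small, sparse perturbation: L1 = 0.3, squared L2 = 0.05.
p = elastic_net_penalty([0.1, -0.2, 0.0], beta=0.01)
```

With beta = 0 the objective reduces to the L2-based attack it generalizes; larger beta pushes the attack toward sparser, L1-style perturbations.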

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

5 code implementations • 14 Aug 2017 • Pin-Yu Chen, Huan Zhang, Yash Sharma, Jin-Feng Yi, Cho-Jui Hsieh

However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples.

Adversarial Attack Adversarial Defense +3
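The core of the zeroth-order estimation described above is a coordinate-wise finite difference: the attacker queries the model's loss but never its gradients. Below is a minimal sketch of a symmetric-difference estimator for one coordinate; the function names and single-coordinate interface are illustrative (ZOO adds coordinate selection, dimension reduction, and other query-saving tricks on top).

```python
def zoo_gradient_estimate(f, x, i, h=1e-4):
    """Estimate df/dx_i at x using only two queries to f, via the
    symmetric difference (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    x_plus = list(x)
    x_plus[i] += h
    x_minus = list(x)
    x_minus[i] -= h
    return (f(x_plus) - f(x_minus)) / (2 * h)

# Example: f(x) = x0^2 + 3*x1, so at (2, 1) the true partials are 4 and 3.
f = lambda x: x[0] ** 2 + 3 * x[1]
g0 = zoo_gradient_estimate(f, [2.0, 1.0], 0)
g1 = zoo_gradient_estimate(f, [2.0, 1.0], 1)
```

Repeating this per coordinate yields a full gradient estimate that can drive a standard gradient-based attack against a black-box model.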
