Search Results for author: Arshdeep Sekhon

Found 14 papers, 9 papers with code

Improving Interpretability via Explicit Word Interaction Graph Layer

1 code implementation 3 Feb 2023 Arshdeep Sekhon, Hanjie Chen, Aman Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi

Recent NLP literature has seen growing interest in improving model interpretability.

White-box Testing of NLP models with Mask Neuron Coverage

no code implementations Findings (NAACL) 2022 Arshdeep Sekhon, Yangfeng Ji, Matthew B. Dwyer, Yanjun Qi

Recent literature has seen growing interest in using black-box strategies like CheckList for testing the behavior of NLP models.

Data Augmentation · Fault Detection

ST-MAML: A Stochastic-Task based Method for Task-Heterogeneous Meta-Learning

no code implementations 27 Sep 2021 Zhe Wang, Jake Grigsby, Arshdeep Sekhon, Yanjun Qi

This paper proposes a novel method, ST-MAML, that empowers model-agnostic meta-learning (MAML) to learn from multiple task distributions.

Few-Shot Image Classification · Meta-Learning
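For context, here is a minimal sketch of the standard MAML inner/outer loop that ST-MAML builds on; it shows plain MAML only, not the stochastic task encoding, and the toy sine-regression tasks, network size, and step sizes are illustrative assumptions.

```python
# Minimal second-order MAML loop (plain MAML, not ST-MAML's task encoding).
# The toy sine-regression task family and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def sample_task(n=20):
    """Hypothetical task family: y = a * sin(x + b) with random a, b."""
    a, b = 1 + 4 * torch.rand(1), 3 * torch.rand(1)
    x = 10 * torch.rand(n, 1) - 5
    return x, a * torch.sin(x + b)

# A tiny regression net kept as an explicit parameter list so we can take
# "functional" gradient steps without cloning a module.
params = [torch.randn(1, 40) * 0.1, torch.zeros(40),
          torch.randn(40, 1) * 0.1, torch.zeros(1)]
for p in params:
    p.requires_grad_(True)

def net(x, p):
    return torch.relu(x @ p[0] + p[1]) @ p[2] + p[3]

meta_opt, inner_lr = torch.optim.Adam(params, lr=1e-3), 1e-2

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                                   # tasks per meta-batch
        x, y = sample_task()
        sx, sy, qx, qy = x[:10], y[:10], x[10:], y[10:]  # support / query split
        inner_loss = F.mse_loss(net(sx, params), sy)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        F.mse_loss(net(qx, adapted), qy).backward()      # meta-loss on the query set
    meta_opt.step()
```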

Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing

1 code implementation EMNLP (BlackboxNLP) 2021 Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi

Via a small number of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (and therefore share similar interpretations).

Language Modelling
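The snippet above describes perturbations built from a handful of word-level swaps. Below is a toy sketch of that general idea, swapping a small fraction of tokens for embedding-space neighbours; the vocabulary, random embeddings, and swap rate are placeholders rather than the paper's setup.

```python
# Toy embedding-neighbour word swaps; the vocabulary and random vectors are
# stand-ins, not the paper's perturbation procedure.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["movie", "film", "great", "excellent", "boring", "dull", "the", "was"]
emb = {w: rng.normal(size=50) for w in vocab}   # placeholder word vectors

def nearest_neighbor(word):
    """Return the most cosine-similar other word in the toy vocabulary."""
    v = emb[word]
    scores = {w: v @ u / (np.linalg.norm(v) * np.linalg.norm(u))
              for w, u in emb.items() if w != word}
    return max(scores, key=scores.get)

def perturb(tokens, swap_frac=0.15):
    """Swap a small fraction of tokens for their nearest embedding neighbours."""
    n_swaps = max(1, int(swap_frac * len(tokens)))
    idx = rng.choice(len(tokens), size=n_swaps, replace=False)
    out = list(tokens)
    for i in idx:
        if out[i] in emb:
            out[i] = nearest_neighbor(out[i])
    return out

print(perturb("the movie was great".split()))
# A fragility check would then compare the model's saliency map on the seed and
# perturbed sentences, e.g. via rank correlation of per-token attributions.
```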

Evolving Image Compositions for Feature Representation Learning

no code implementations 16 Jun 2021 Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez

This paper proposes PatchMix, a data augmentation method that creates new samples by composing patches from pairs of images in a grid-like pattern.

Data Augmentation · Representation Learning · +1
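A minimal sketch of the grid-wise patch composition described above, in the spirit of PatchMix; the grid size, random binary mask, and mixup-style label weight are assumptions, not the paper's exact scheme.

```python
# Compose two images patch-by-patch on a grid; grid size and the random
# per-cell mask are illustrative assumptions.
import numpy as np

def patchmix(img_a, img_b, grid=4, rng=None):
    """Mix two HxWxC images by choosing, per grid cell, which image it comes from."""
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    ph, pw = h // grid, w // grid
    mask = rng.integers(0, 2, size=(grid, grid)).astype(bool)  # True -> take cell from img_b
    out = img_a.copy()
    for i in range(grid):
        for j in range(grid):
            if mask[i, j]:
                out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = img_b[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
    lam = 1.0 - mask.mean()        # fraction of cells kept from img_a
    return out, lam                # lam can weight the two labels, mixup-style

a = np.zeros((32, 32, 3), dtype=np.float32)
b = np.ones((32, 32, 3), dtype=np.float32)
mixed, lam = patchmix(a, b, grid=4)
print(mixed.shape, lam)
```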

Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG

no code implementations 3 Mar 2021 Arshdeep Sekhon, Zhe Wang, Yanjun Qi

Understanding relationships between feature variables is one important way in which humans make decisions.

Beyond Data Samples: Aligning Differential Networks Estimation with Scientific Knowledge

1 code implementation 24 Apr 2020 Arshdeep Sekhon, Zhe Wang, Yanjun Qi

Learning the differential statistical dependency network between two contexts is essential for many real-life applications, mostly in the high-dimensional, low-sample regime.

Structured Prediction
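For intuition about the problem setup, here is a naive baseline for a differential dependency network: fit a sparse precision matrix per context with graphical lasso and threshold the difference. This is only an illustration, not the knowledge-aware estimator proposed in the paper; the simulated data, regularisation strength, and threshold are assumptions.

```python
# Naive differential-network baseline: per-context graphical lasso, then take
# the difference of the estimated precision matrices. Not the paper's estimator.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 10, 200                                   # high-dimensional in practice; small here
X_control = rng.normal(size=(n, p))
X_case = X_control.copy()
X_case[:, 1] += 0.8 * X_case[:, 0]               # inject an extra edge (0, 1) in the "case" context

theta_control = GraphicalLasso(alpha=0.1).fit(X_control).precision_
theta_case = GraphicalLasso(alpha=0.1).fit(X_case).precision_

delta = theta_case - theta_control               # crude differential-network estimate
edges = np.argwhere(np.triu(np.abs(delta) > 0.2, k=1))
print("changed edges:", edges.tolist())
```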

Neural Message Passing for Multi-Label Classification

1 code implementation ICLR 2019 Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi

We propose Label Message Passing (LaMP) Neural Networks to efficiently model the joint prediction of multiple labels.

Classification · General Classification · +1
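A rough sketch of the general label-message-passing idea: learned label embeddings attend to the encoded input and to each other, and each label node is then scored with a sigmoid. The dimensions, layer counts, and attention layout below are assumptions and do not reproduce LaMP's exact architecture.

```python
# Label nodes read from input features and exchange messages via attention;
# sizes and the single-layer layout are illustrative, not LaMP's design.
import torch
import torch.nn as nn

class LabelMessagePassing(nn.Module):
    def __init__(self, n_labels, d_model=64, n_heads=4):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, d_model)      # one node per label
        self.feat2label = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.label2label = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, feats):                  # feats: (batch, seq_len, d_model)
        b = feats.size(0)
        labels = self.label_emb.weight.unsqueeze(0).expand(b, -1, -1)
        labels, _ = self.feat2label(labels, feats, feats)      # read from input features
        labels, _ = self.label2label(labels, labels, labels)   # pass messages between labels
        return torch.sigmoid(self.score(labels)).squeeze(-1)   # (batch, n_labels)

model = LabelMessagePassing(n_labels=5)
probs = model(torch.randn(2, 10, 64))
print(probs.shape)   # torch.Size([2, 5])
```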

DeepDiff: Deep-learning for predicting Differential gene expression from histone modifications

1 code implementation 10 Jul 2018 Arshdeep Sekhon, Ritambhara Singh, Yanjun Qi

In this paper, we develop a novel attention-based deep learning architecture, DeepDiff, that provides a unified and end-to-end solution to model and to interpret how dependencies among histone modifications control the differential patterns of gene regulation.
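To illustrate the kind of attention weighting such models expose, here is a generic attention-pooling module over binned histone-modification signals; it is not DeepDiff's architecture, and the numbers of marks, bins, and hidden units are illustrative.

```python
# Generic attention pooling over binned histone-modification signals; the
# attention weights give a per-bin importance that can be inspected. This is a
# sketch, not DeepDiff's actual model.
import torch
import torch.nn as nn

class HMAttention(nn.Module):
    def __init__(self, n_marks=5, n_bins=100, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_marks, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # one score per genomic bin
        self.out = nn.Linear(2 * hidden, 1)      # differential-expression score

    def forward(self, x):                        # x: (batch, n_bins, n_marks)
        h, _ = self.rnn(x)                       # (batch, n_bins, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over bins
        context = (w * h).sum(dim=1)             # weighted summary of the gene region
        return self.out(context).squeeze(-1), w  # prediction + interpretable weights

model = HMAttention()
pred, weights = model(torch.randn(8, 100, 5))
print(pred.shape, weights.shape)   # torch.Size([8]) torch.Size([8, 100, 1])
```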

A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models

2 code implementations ICML 2018 Beilun Wang, Arshdeep Sekhon, Yanjun Qi

We consider the problem of including additional knowledge in estimating sparse Gaussian graphical models (sGGMs) from aggregated samples, arising often in bioinformatics and neuroimaging applications.

Computational Efficiency · Structured Prediction

Fast and Scalable Learning of Sparse Changes in High-Dimensional Gaussian Graphical Model Structure

2 code implementations 30 Oct 2017 Beilun Wang, Arshdeep Sekhon, Yanjun Qi

We focus on the problem of estimating the change in the dependency structures of two $p$-dimensional Gaussian Graphical models (GGMs).

Attend and Predict: Understanding Gene Regulation by Selective Attention on Chromatin

2 code implementations NeurIPS 2017 Ritambhara Singh, Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi

This paper presents AttentiveChrome, an attention-based deep learning approach that uses a unified architecture to model and to interpret dependencies among chromatin factors controlling gene regulation.

GaKCo: a Fast GApped k-mer string Kernel using COunting

1 code implementation 24 Apr 2017 Ritambhara Singh, Arshdeep Sekhon, Kamran Kowsari, Jack Lanchantin, Beilun Wang, Yanjun Qi

This is because the current gk-SK uses a trie-based algorithm to calculate the co-occurrence of mismatched substrings, resulting in a time cost proportional to $O(\Sigma^{M})$.
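For intuition about the counting-based alternative, here is a naive dictionary count of gapped k-mers and the resulting kernel value for two toy sequences; this is not GaKCo's algorithm and makes no claim about its complexity, and the choices of k, the number of masked positions, and the sequences are arbitrary.

```python
# Naive dictionary-based gapped k-mer counting and kernel value; just conveys
# the counting idea, not GaKCo's counting algorithm or its runtime.
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, k=5, m=2):
    """Count length-k substrings with every choice of m positions masked as gaps."""
    counts = Counter()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        for gaps in combinations(range(k), m):
            masked = "".join("*" if j in gaps else c for j, c in enumerate(kmer))
            counts[masked] += 1
    return counts

s1, s2 = "ACGTACGTGGCA", "ACGTTCGTGGAA"      # toy DNA sequences
c1, c2 = gapped_kmer_counts(s1), gapped_kmer_counts(s2)
kernel = sum(c1[f] * c2[f] for f in c1)      # inner product of gapped k-mer counts
print(kernel)
```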
