1 code implementation • 3 Feb 2023 • Arshdeep Sekhon, Hanjie Chen, Aman Shrivastava, Zhe Wang, Yangfeng Ji, Yanjun Qi
Recent NLP literature has seen growing interest in improving model interpretability.
no code implementations • Findings (NAACL) 2022 • Arshdeep Sekhon, Yangfeng Ji, Matthew B. Dwyer, Yanjun Qi
Recent literature has seen growing interest in using black-box strategies like CheckList for testing the behavior of NLP models.
no code implementations • 27 Sep 2021 • Zhe Wang, Jake Grigsby, Arshdeep Sekhon, Yanjun Qi
This paper proposes a novel method, ST-MAML, that empowers model-agnostic meta-learning (MAML) to learn from multiple task distributions.
1 code implementation • EMNLP (BlackboxNLP) 2021 • Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi
Via a small number of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (and therefore share similar interpretations).
no code implementations • 16 Jun 2021 • Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez
This paper proposes PatchMix, a data augmentation method that creates new samples by composing patches from pairs of images in a grid-like pattern.
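The grid-based patch composition described above can be sketched roughly as follows (a minimal illustration only; the function name, grid size, and per-cell mixing rule are assumptions, not the paper's exact procedure):

```python
import numpy as np

def patchmix(img_a, img_b, grid=4, mix_ratio=0.5, rng=None):
    """Hypothetical sketch: split both images into a grid x grid layout
    and, for each cell, take the patch from img_b with probability
    mix_ratio, otherwise keep the patch from img_a."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    out = img_a.copy()
    # Boolean mask: which grid cells come from img_b
    mask = rng.random((grid, grid)) < mix_ratio
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    for i in range(grid):
        for j in range(grid):
            if mask[i, j]:
                out[ys[i]:ys[i+1], xs[j]:xs[j+1]] = \
                    img_b[ys[i]:ys[i+1], xs[j]:xs[j+1]]
    return out
```

The mixing proportion could also drive a soft label for the composed sample, as in other mixing-based augmentations, though the exact labeling scheme here is not stated in this snippet.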
no code implementations • 3 Mar 2021 • Arshdeep Sekhon, Zhe Wang, Yanjun Qi
Understanding relationships between feature variables is an important way humans make decisions.
1 code implementation • 24 Apr 2020 • Arshdeep Sekhon, Zhe Wang, Yanjun Qi
Learning the differential statistical dependency network between two contexts is essential for many real-life applications, mostly in the high dimensional low sample regime.
1 code implementation • ICLR 2019 • Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi
We propose Label Message Passing (LaMP) Neural Networks to efficiently model the joint prediction of multiple labels.
1 code implementation • 10 Jul 2018 • Arshdeep Sekhon, Ritambhara Singh, Yanjun Qi
In this paper, we develop a novel attention-based deep learning architecture, DeepDiff, that provides a unified and end-to-end solution to model and to interpret how dependencies among histone modifications control the differential patterns of gene regulation.
2 code implementations • ICML 2018 • Beilun Wang, Arshdeep Sekhon, Yanjun Qi
We consider the problem of including additional knowledge in estimating sparse Gaussian graphical models (sGGMs) from aggregated samples, arising often in bioinformatics and neuroimaging applications.
no code implementations • ICLR 2018 • Jack Lanchantin, Arshdeep Sekhon, Ritambhara Singh, Yanjun Qi
In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN) to mimic the TF binding mechanisms.
2 code implementations • 30 Oct 2017 • Beilun Wang, Arshdeep Sekhon, Yanjun Qi
We focus on the problem of estimating the change in the dependency structures of two $p$-dimensional Gaussian Graphical models (GGMs).
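In the standard setup for this problem (a sketch of the usual formulation; the specific estimator in this paper is not spelled out in the snippet), each context $c$ has a precision matrix encoding conditional dependencies, and the target is their sparse difference:

```latex
\Omega_c = \Sigma_c^{-1}, \quad c \in \{1, 2\},
\qquad
\Delta = \Omega_1 - \Omega_2 .
```

Estimating $\Delta$ directly (e.g., under an elementwise $\ell_1$ penalty on $\Delta$) avoids recovering each dense $\Omega_c$ separately, which is the key advantage in the high-dimensional, low-sample regime.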
2 code implementations • NeurIPS 2017 • Ritambhara Singh, Jack Lanchantin, Arshdeep Sekhon, Yanjun Qi
This paper presents an attention-based deep learning approach, AttentiveChrome, that uses a unified architecture to model and interpret dependencies among chromatin factors controlling gene regulation.
1 code implementation • 24 Apr 2017 • Ritambhara Singh, Arshdeep Sekhon, Kamran Kowsari, Jack Lanchantin, Beilun Wang, Yanjun Qi
This is because the current gk-SK uses a trie-based algorithm to calculate the co-occurrence of mismatched substrings, resulting in a time cost of $O(\Sigma^{M})$.