no code implementations • ACL 2022 • Shiv Shankar
Information integration from different modalities is an active area of research.
no code implementations • 28 Sep 2021 • Shiv Shankar
Information integration from different modalities is an active area of research.
no code implementations • 30 Aug 2021 • Shiv Shankar
Learning distributions over graph-structured data is a challenging task with many applications in biology and chemistry.
no code implementations • 3 Jul 2021 • Shiv Shankar, Daniel Sheldon
Field observations form the basis of many scientific studies, especially in ecological and social sciences.
no code implementations • 25 Jan 2021 • Yash Chandak, Shiv Shankar, Philip S. Thomas
Many sequential decision-making systems leverage data collected using prior policies to propose a new policy.
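This is the off-policy setting: data gathered under a behavior policy is reweighted to reason about a candidate policy. As a hedged illustration of that setup (not the estimator proposed in this paper), a minimal importance-sampling value estimate might look like the sketch below; `pi_new`, `pi_old`, and the trajectory format are assumptions made for the example.

```python
import numpy as np

def importance_sampling_estimate(trajectories, pi_new, pi_old):
    """Ordinary importance-sampling estimate of a new policy's value
    from trajectories collected under a prior (behavior) policy.

    Each trajectory is a list of (state, action, reward) tuples;
    pi_new(a, s) and pi_old(a, s) return action probabilities.
    A generic sketch of the off-policy setup, not this paper's method.
    """
    returns = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            weight *= pi_new(a, s) / pi_old(a, s)  # likelihood ratio
            ret += r
        returns.append(weight * ret)
    return np.mean(returns)
```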
no code implementations • 31 Dec 2020 • Shiv Shankar, Daniel Sheldon, Tao Sun, John Pickering, Thomas G. Dietterich
Half-sibling regression removes the effect of shared observation noise; however, it will also remove intrinsic variability if the variables are dependent, and therefore does not apply to many situations, including modeling of species counts that are controlled by common causes.
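A toy simulation makes the limitation concrete: when two observed variables share both the confounding noise and a true common cause, regressing one on the other and keeping the residual strips out the signal along with the noise. All names below are illustrative, and this is not the correction developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
cause = rng.normal(size=n)   # true common cause (e.g. a shared ecological driver)
noise = rng.normal(size=n)   # shared observation noise (e.g. observer effort)
x = cause + noise + 0.1 * rng.normal(size=n)
y = cause + noise + 0.1 * rng.normal(size=n)

# Regressing y on x and keeping the residual removes the shared noise,
# but because x and y also share the true cause, the intrinsic
# variability is removed as well.
beta = np.cov(x, y)[0, 1] / np.var(x)
residual = y - beta * x
print(np.corrcoef(residual, cause)[0, 1])  # near 0: the signal was removed too
```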
no code implementations • 31 Dec 2020 • Shiv Shankar, Don Towsley
The development of Graph Neural Networks (GNNs) has led to great progress in machine learning on graph-structured data.
no code implementations • 9 Jul 2020 • Vihari Piratla, Shiv Shankar
It is believed that, by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features that are shared between the original and augmented counterparts.
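One hedged reading of that intuition is a consistency term that ties the features of each input to those of its augmented copy. This is a generic sketch of the idea, not the specific objective studied in the paper; `model` and `augment` are placeholders.

```python
import torch.nn.functional as F

def consistency_loss(model, x, augment, alpha=1.0):
    """Process augmented inputs 'in tandem' with originals: a penalty
    pulling the features of each input and its augmentation together.
    A generic sketch of the intuition, not the paper's method.
    """
    feats = model(x)               # features of the original inputs
    feats_aug = model(augment(x))  # features of the augmented copies
    return alpha * F.mse_loss(feats, feats_aug)
```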
1 code implementation • ICML 2020 • Yash Chandak, Georgios Theocharous, Shiv Shankar, Martha White, Sridhar Mahadevan, Philip S. Thomas
Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary.
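Written out, the stationarity assumption says the dynamics and rewards carry no time index (notation below is assumed for this note, not taken from the paper):

```latex
% Stationarity: neither the transition kernel nor the expected reward
% depends on the time step t (the paper studies what happens when they do).
\Pr(S_{t+1} = s' \mid S_t = s, A_t = a) = P(s' \mid s, a),
\qquad
\mathbb{E}[R_t \mid S_t = s, A_t = a] = R(s, a)
\quad \text{for all } t.
```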
1 code implementation • 6 Sep 2019 • MohamadAli Torkamani, Shiv Shankar, Amirmohammad Rooshenas, Phillip Wallis
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
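As a contrast to those fixed choices, a learned activation can expose a few trainable shape parameters per feature. The module below is only an illustrative stand-in for the richer learned activation functions studied in this line of work, not the construction from the paper.

```python
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """A simple per-feature parametric activation, as a contrast to
    fixed choices like ReLU or sigmoid. Illustrative stand-in only."""
    def __init__(self, num_features):
        super().__init__()
        self.a = nn.Parameter(torch.ones(num_features))
        self.b = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # Learnable blend of saturating and linear behavior per feature.
        return self.a * torch.tanh(x) + self.b * x
```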
no code implementations • 19 May 2019 • MohamadAli Torkamani, Phillip Wallis, Shiv Shankar, Amirmohammad Rooshenas
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
no code implementations • ICLR 2019 • Shiv Shankar, Sunita Sarawagi
Modern neural architectures critically rely on attention for mapping structured inputs to sequences.
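For reference, the standard soft-attention computation those architectures rely on is small enough to write out; this is textbook scaled dot-product attention, not anything specific to the paper.

```python
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Standard soft attention: every output position attends to all
    input positions. Shapes: q (T, d), k (S, d), v (S, d)."""
    scores = q @ k.T / (q.shape[-1] ** 0.5)  # (T, S) alignment scores
    weights = F.softmax(scores, dim=-1)      # attention distribution
    return weights @ v                       # weighted sum of values
```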
1 code implementation • EMNLP 2018 • Shiv Shankar, Siddhant Garg, Sunita Sarawagi
In this paper, we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence-to-sequence learning.
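A minimal sketch of that idea, under assumed shapes and names (`attn_logits` and `output_logits_per_pos` are placeholders): keep only the k most-attended source positions and mix the per-position output distributions they induce. This is a simplification of the marginal, not the exact formulation in the paper.

```python
import torch.nn.functional as F

def topk_joint_output(attn_logits, output_logits_per_pos, k=3):
    """Approximate p(y | x) = sum_a p(a | x) p(y | a, x) by keeping
    only the top-k attended positions.

    attn_logits: (S,) scores over source positions.
    output_logits_per_pos: (S, V) output logits per attended position.
    """
    attn = F.softmax(attn_logits, dim=-1)                 # p(a | x)
    topw, topi = attn.topk(k)
    topw = topw / topw.sum()                              # renormalize the beam
    out = F.softmax(output_logits_per_pos[topi], dim=-1)  # p(y | a, x)
    return (topw.unsqueeze(-1) * out).sum(dim=0)          # approx. p(y | x)
```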
1 code implementation • ICLR 2018 • Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, Sunita Sarawagi
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains.
Ranked #58 on Domain Generalization on PACS
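A simplified sketch of the cross-gradient idea: each classifier trains on inputs perturbed along the gradient of the other's loss, so label-preserving domain variation is synthesized on the fly. Details such as the loss-mixing coefficients are omitted, and the helper names are assumptions.

```python
import torch

def crossgrad_inputs(x, y, d, label_net, domain_net,
                     label_loss, domain_loss, eps=1e-2):
    """Build cross-gradient augmented inputs: perturb x along the
    gradient of the domain loss to train the label classifier, and
    along the gradient of the label loss to train the domain
    classifier. A simplified sketch of CROSSGRAD, not the full recipe.
    """
    x_d = x.detach().clone().requires_grad_(True)
    domain_loss(domain_net(x_d), d).backward()
    x_for_label = x + eps * x_d.grad          # domain-perturbed copy

    x_l = x.detach().clone().requires_grad_(True)
    label_loss(label_net(x_l), y).backward()
    x_for_domain = x + eps * x_l.grad         # label-perturbed copy

    return x_for_label.detach(), x_for_domain.detach()
```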
no code implementations • 5 Jul 2017 • Shiv Shankar, Sunita Sarawagi
In this paper, we establish their potential for adapting a batch-trained neural network, online, to domain-relevant labeled data at deployment time.
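Independent of the specific model studied, the deployment-time setting can be pictured as a per-example update loop. This generic sketch is an assumption about the setup, not the paper's method.

```python
def online_adapt(model, optimizer, loss_fn, stream):
    """Adapt a batch-trained model online: as labeled deployment data
    arrives, take one small gradient step per example. Generic sketch
    of the online-adaptation setting only."""
    model.train()
    for x, y in stream:  # stream yields one labeled example at a time
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```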