no code implementations • ACL 2022 • Shiv Shankar
Information integration from different modalities is an active area of research.
no code implementations • 26 Mar 2025 • Shiv Shankar, Tomas Geffner
Flow matching models typically use linear interpolants to define the forward/noise addition process.
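For reference, the linear interpolant mentioned here is the standard flow-matching construction (this sketch illustrates the baseline the abstract refers to, not the paper's proposal):

```python
import numpy as np

def linear_interpolant(x0, x1, t):
    """Standard flow-matching interpolant: x_t = (1 - t) * x0 + t * x1.

    x0: noise sample, x1: data sample, t in [0, 1].
    The regression target for the velocity field is dx_t/dt = x1 - x0.
    """
    return (1.0 - t) * x0 + t * x1

# Example: one training pair for conditional flow matching.
rng = np.random.default_rng(0)
x1 = rng.normal(size=8)      # data sample
x0 = rng.normal(size=8)      # noise sample
t = rng.uniform()
x_t = linear_interpolant(x0, x1, t)
velocity_target = x1 - x0    # what the network is trained to predict at (x_t, t)
```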
no code implementations • 16 Apr 2024 • Shiv Shankar, Ritwik Sinha, Yash Chandak, Saayan Mitra, Madalina Fiterau
A/B tests often need to be conducted on subjects who may have social connections.
no code implementations • 5 Dec 2023 • Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill
Indirect experiments provide a valuable framework for estimating treatment effects in situations where conducting randomized controlled trials (RCTs) is impractical or unethical.
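Indirect experiments typically randomize an instrument (e.g., an encouragement) rather than the treatment itself. A minimal textbook illustration, not this paper's estimator, is the Wald instrumental-variable estimate:

```python
import numpy as np

def wald_iv_estimate(z, a, y):
    """Wald instrumental-variable estimate of a treatment effect.

    z: binary randomized instrument (e.g., encouragement to take treatment),
    a: observed treatment uptake, y: outcome.
    Effect = Cov(y, z) / Cov(a, z), valid under the standard IV
    assumptions (relevance, exclusion, monotonicity).
    """
    z, a, y = map(np.asarray, (z, a, y))
    return np.cov(y, z)[0, 1] / np.cov(a, z)[0, 1]
```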
no code implementations • 6 Feb 2023 • Yash Chandak, Shiv Shankar, Venkata Gandikota, Philip S. Thomas, Arya Mazumdar
We propose a first-order method for convex optimization, where instead of being restricted to the gradient from a single parameter, gradients from multiple parameters can be used during each step of gradient descent.
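A minimal sketch of the multi-parameter idea; the aggregation rule used here (plain averaging of gradients queried at perturbed points) is a hypothetical stand-in for the paper's method:

```python
import numpy as np

def multi_point_gradient_step(grad, x, step_size=0.1, num_points=4,
                              radius=0.01, rng=None):
    """One descent step that aggregates gradients queried at several
    parameter vectors near x, instead of only at x itself.

    grad: callable returning the gradient at a given parameter vector.
    The simple average below is an illustrative aggregation rule.
    """
    if rng is None:
        rng = np.random.default_rng()
    points = [x] + [x + radius * rng.normal(size=x.shape)
                    for _ in range(num_points - 1)]
    g = np.mean([grad(p) for p in points], axis=0)
    return x - step_size * g

# Example on the quadratic f(x) = ||x||^2, whose gradient is 2x.
x = np.ones(3)
for _ in range(50):
    x = multi_point_gradient_step(lambda p: 2 * p, x)
```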
1 code implementation • 24 Jan 2023 • Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno Castro da Silva, Emma Brunskill, Philip S. Thomas
Methods for sequential decision-making are often built upon a foundational assumption that the underlying decision process is stationary.
no code implementations • 21 Nov 2022 • Shiv Shankar, Vihari Piratla
Most deep learning research has focused on developing new models and training procedures.
no code implementations • 3 Nov 2022 • Shiv Shankar, Ritwik Sinha, Saayan Mitra, Viswanathan Swaminathan, Sridhar Mahadevan, Moumita Sinha
We propose a two-stage experimental design, where the two brands only need to agree on high-level aggregate parameters of the experiment to test the alternate experiences.
no code implementations • 1 Sep 2022 • Shiv Shankar, Laure Thompson, Madalina Fiterau
In this work, we present an iterative representation refinement approach, called Progressive Fusion, which mitigates the issues with late fusion representations.
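A rough sketch of the backward-connection idea described above (the module names and the fixed two-pass refinement loop are assumptions for illustration, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ProgressiveFusionSketch(nn.Module):
    """Late fusion with a backward connection: the fused representation is
    fed back to the unimodal encoders and the fusion is recomputed."""

    def __init__(self, dim=32, num_modalities=2, refine_steps=2):
        super().__init__()
        # Each encoder sees its own modality plus the current fused summary.
        self.encoders = nn.ModuleList(
            [nn.Linear(2 * dim, dim) for _ in range(num_modalities)]
        )
        self.fuse = nn.Linear(num_modalities * dim, dim)
        self.refine_steps = refine_steps
        self.dim = dim

    def forward(self, inputs):  # inputs: list of (batch, dim) tensors
        fused = torch.zeros(inputs[0].shape[0], self.dim)
        for _ in range(self.refine_steps):
            feats = [enc(torch.cat([x, fused], dim=-1))
                     for enc, x in zip(self.encoders, inputs)]
            fused = self.fuse(torch.cat(feats, dim=-1))
        return fused
```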
no code implementations • 28 Sep 2021 • Shiv Shankar
Information integration from different modalities is an active area of research.
no code implementations • 30 Aug 2021 • Shiv Shankar
Learning distributions over graph-structured data is a challenging task with many applications in biology and chemistry.
no code implementations • 3 Jul 2021 • Shiv Shankar, Daniel Sheldon
Field observations form the basis of many scientific studies, especially in ecological and social sciences.
no code implementations • 25 Jan 2021 • Yash Chandak, Shiv Shankar, Philip S. Thomas
Many sequential decision-making systems leverage data collected using prior policies to propose a new policy.
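The off-policy setup behind this snippet can be illustrated with a per-trajectory importance-sampling estimate (a textbook baseline, not this paper's contribution):

```python
import numpy as np

def importance_sampling_estimate(trajectories, pi_new, pi_old):
    """Estimate the new policy's expected return from data logged by a prior policy.

    trajectories: list of episodes, each a list of (state, action, reward).
    pi_new / pi_old: callables giving action probabilities under each policy.
    """
    returns = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            weight *= pi_new(s, a) / pi_old(s, a)  # likelihood ratio
            ret += r
        returns.append(weight * ret)
    return np.mean(returns)
```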
no code implementations • 31 Dec 2020 • Shiv Shankar, Daniel Sheldon, Tao Sun, John Pickering, Thomas G. Dietterich
However, it removes intrinsic variability when the variables are dependent, and therefore does not apply in many situations, including the modeling of species counts that are driven by common causes.
no code implementations • 31 Dec 2020 • Shiv Shankar, Don Towsley
The development of Graph Neural Networks (GNNs) has led to great progress in machine learning on graph-structured data.
no code implementations • 9 Jul 2020 • Vihari Piratla, Shiv Shankar
It is believed that by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features which are shared between the original and augmented counterparts.
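That shared-feature intuition is often operationalized with a consistency term between original and augmented representations; a generic sketch (assuming `encoder` and `classifier` modules, and not this paper's analysis):

```python
import torch.nn.functional as F

def augmentation_consistency_loss(encoder, classifier, x, x_aug, y, lam=0.1):
    """Cross-entropy on both views plus a penalty pulling their features together."""
    f, f_aug = encoder(x), encoder(x_aug)
    task_loss = (F.cross_entropy(classifier(f), y)
                 + F.cross_entropy(classifier(f_aug), y))
    consistency = F.mse_loss(f, f_aug)  # encourage features shared across views
    return task_loss + lam * consistency
```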
1 code implementation • ICML 2020 • Yash Chandak, Georgios Theocharous, Shiv Shankar, Martha White, Sridhar Mahadevan, Philip S. Thomas
Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary.
1 code implementation • 6 Sep 2019 • MohamadAli Torkamani, Shiv Shankar, Amirmohammad Rooshenas, Phillip Wallis
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
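For contrast with such fixed activations, here is a minimal learnable activation (an illustrative PReLU-style parametric form, not the units proposed in the paper):

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Per-channel activation with a trainable negative slope,
    in contrast to fixed choices like ReLU or sigmoid."""

    def __init__(self, num_channels):
        super().__init__()
        self.slope = nn.Parameter(torch.full((num_channels,), 0.25))

    def forward(self, x):  # x: (batch, num_channels)
        return torch.where(x >= 0, x, self.slope * x)
```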
no code implementations • 19 May 2019 • MohamadAli Torkamani, Phillip Wallis, Shiv Shankar, Amirmohammad Rooshenas
Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
no code implementations • ICLR 2019 • Shiv Shankar, Sunita Sarawagi
Modern neural architectures critically rely on attention for mapping structured inputs to sequences.
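The attention computation being referred to is the standard scaled dot-product form, shown here for reference:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```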
1 code implementation • EMNLP 2018 • Shiv Shankar, Siddhant Garg, Sunita Sarawagi
In this paper, we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence-to-sequence learning.
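A highly simplified single-step sketch of what a beam over the joint attention-output distribution could look like; the factorization and names here are assumptions for illustration, not the paper's algorithm:

```python
import heapq
import numpy as np

def joint_attention_output_beam(p_attn, p_tok_given_attn, beam_size=5):
    """Keep the top-scoring (attention position, output token) pairs under
    p(position) * p(token | position), instead of marginalizing over
    attention before predicting the token.

    p_attn: (num_positions,) attention distribution.
    p_tok_given_attn: (num_positions, vocab) token distribution per position.
    """
    joint = p_attn[:, None] * p_tok_given_attn  # (num_positions, vocab)
    flat = [(-joint[i, t], i, t)
            for i in range(joint.shape[0]) for t in range(joint.shape[1])]
    return [(i, t, -s) for s, i, t in heapq.nsmallest(beam_size, flat)]
```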
1 code implementation • ICLR 2018 • Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, Sunita Sarawagi
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains.
Ranked #92 on Domain Generalization on PACS
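A condensed sketch of the cross-gradient augmentation idea behind CROSSGRAD (the step size and the single-direction variant shown are simplifications):

```python
import torch
import torch.nn.functional as F

def crossgrad_augment(x, y_domain, domain_classifier, eps=0.5):
    """Perturb an input along the gradient of the domain classifier's loss,
    yielding a domain-shifted example for training the label classifier."""
    x = x.clone().requires_grad_(True)
    loss_d = F.cross_entropy(domain_classifier(x), y_domain)
    grad_x, = torch.autograd.grad(loss_d, x)
    return (x + eps * grad_x).detach()
```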
no code implementations • 5 Jul 2017 • Shiv Shankar, Sunita Sarawagi
In this paper, we establish their potential for adapting a batch-trained neural network online to domain-relevant labeled data at deployment time.