Search Results for author: Shiv Shankar

Found 15 papers, 4 with code

Adversarial Stein Training for Graph Energy Models

no code implementations 30 Aug 2021 Shiv Shankar

Learning distributions over graph-structured data is a challenging task with many applications in biology and chemistry.

Graph Generation

Sibling Regression for Generalized Linear Models

no code implementations 3 Jul 2021 Shiv Shankar, Daniel Sheldon

Field observations form the basis of many scientific studies, especially in ecological and social sciences.

High-Confidence Off-Policy (or Counterfactual) Variance Estimation

no code implementations 25 Jan 2021 Yash Chandak, Shiv Shankar, Philip S. Thomas

Many sequential decision-making systems leverage data collected using prior policies to propose a new policy.

Decision Making
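
The quantity named in the title can be illustrated with a short numpy sketch (a simplified plug-in estimate, not the paper's high-confidence method): the variance of returns under a target policy, estimated from behavior-policy trajectories via per-trajectory importance weights. All names below are illustrative.

```python
import numpy as np

def off_policy_variance(returns, behavior_probs, target_probs):
    """Plug-in estimate of the variance of returns under a target policy,
    computed from trajectories collected by a behavior policy.

    returns:        (n,) return G_i of each trajectory
    behavior_probs: (n, T) per-step action probabilities under the behavior policy
    target_probs:   (n, T) per-step action probabilities under the target policy
    """
    # Per-trajectory importance weight: product of per-step likelihood ratios.
    rho = np.prod(target_probs / behavior_probs, axis=1)
    # Importance-sampled estimates of the first and second moments of the return.
    first_moment = np.mean(rho * returns)
    second_moment = np.mean(rho * returns**2)
    # Var[G] = E[G^2] - (E[G])^2 under the target policy.
    return second_moment - first_moment**2
```

In finite samples this plug-in estimate can be noisy or even negative; per the title, the paper's contribution is a high-confidence estimate of this variance.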

Three-quarter Sibling Regression for Denoising Observational Data

no code implementations 31 Dec 2020 Shiv Shankar, Daniel Sheldon, Tao Sun, John Pickering, Thomas G. Dietterich

However, sibling regression will remove intrinsic variability if the variables are dependent, and therefore does not apply to many situations, including the modeling of species counts that are controlled by common causes.

Denoising
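
For context on the limitation the excerpt describes, here is a minimal numpy sketch of plain sibling regression (not the paper's three-quarter variant): regress one variable on a sibling that shares the same systematic noise, and keep the residual as the denoised signal.

```python
import numpy as np

def sibling_regression(y, sibling):
    """Plain sibling regression: remove the variation in y that is linearly
    predictable from a sibling measurement sharing the same noise source.
    (The paper's three-quarter variant modifies this to preserve intrinsic
    variability when the variables are dependent.)
    """
    X = np.column_stack([np.ones_like(sibling), sibling])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit of y on sibling
    # Subtract the component of y explained by the sibling (the shared noise).
    return y - X @ coef
```

When y and the sibling also share signal, this residual removes real variability along with the noise, which is exactly the failure mode the paper addresses.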

Bosonic Random Walk Networks for Graph Learning

no code implementations 31 Dec 2020 Shiv Shankar, Don Towsley

The development of Graph Neural Networks (GNNs) has led to great progress in machine learning on graph-structured data.

Graph Learning

Untapped Potential of Data Augmentation: A Domain Generalization Viewpoint

no code implementations 9 Jul 2020 Vihari Piratla, Shiv Shankar

It is believed that by processing augmented inputs in tandem with the original ones, the model learns a more robust set of features which are shared between the original and augmented counterparts.

Data Augmentation • Domain Generalization
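
The "in tandem" training the excerpt describes can be sketched in a few lines of PyTorch. This is an illustrative training step under my own assumptions, not the paper's procedure: the `lam` weight and the KL consistency term are hypothetical choices.

```python
import torch
import torch.nn.functional as F

def tandem_step(model, x, x_aug, labels, lam=1.0):
    """One training step that processes original and augmented inputs
    in tandem: a supervised loss on both views, plus a consistency term
    that pulls their predictive distributions together.
    """
    logits = model(x)
    logits_aug = model(x_aug)
    # Supervised loss on both the original and the augmented view.
    sup = F.cross_entropy(logits, labels) + F.cross_entropy(logits_aug, labels)
    # Consistency: KL divergence between the two predictive distributions,
    # encouraging features shared between original and augmented inputs.
    consistency = F.kl_div(F.log_softmax(logits_aug, dim=1),
                           F.softmax(logits, dim=1),
                           reduction="batchmean")
    return sup + lam * consistency
```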

Optimizing for the Future in Non-Stationary MDPs

1 code implementation ICML 2020 Yash Chandak, Georgios Theocharous, Shiv Shankar, Martha White, Sridhar Mahadevan, Philip S. Thomas

Most reinforcement learning methods are based upon the key assumption that the transition dynamics and reward functions are fixed, that is, the underlying Markov decision process is stationary.
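
One way to illustrate the non-stationary setting is to forecast future performance from past per-episode estimates instead of assuming they are exchangeable. The numpy sketch below is a simplified stand-in, assuming performance drifts smoothly over time; the paper's method goes further and optimizes the policy against such a forecast.

```python
import numpy as np

def forecast_performance(past_returns, degree=2):
    """Fit a polynomial trend to past per-episode performance estimates
    and extrapolate one episode ahead. Under a stationary MDP the trend
    would be flat; under drift, the extrapolation anticipates the change.
    """
    t = np.arange(len(past_returns))
    coefs = np.polyfit(t, past_returns, deg=degree)
    # Evaluate the fitted trend at the next (future) time step.
    return np.polyval(coefs, len(past_returns))
```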

Differential Equation Units: Learning Functional Forms of Activation Functions from Data

1 code implementation 6 Sep 2019 MohamadAli Torkamani, Shiv Shankar, Amirmohammad Rooshenas, Phillip Wallis

Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.
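
The general idea of learning the activation function from data can be sketched as a module with its own trainable shape parameters. Note this is a deliberately simplified stand-in: the paper's differential equation units parameterize the activation as the solution of a learnable ODE, whereas the blend below is just an illustrative parametric form.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """An activation whose functional form is learned jointly with the
    network weights, instead of being fixed to a sigmoid or ReLU."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))  # slope of the linear part
        self.b = nn.Parameter(torch.tensor(1.0))  # weight of the saturating part
        self.c = nn.Parameter(torch.tensor(1.0))  # sharpness of the saturation

    def forward(self, x):
        # Learnable blend of a linear term and a tanh saturation.
        return self.a * x + self.b * torch.tanh(self.c * x)
```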

Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions

no code implementations 19 May 2019 MohamadAli Torkamani, Phillip Wallis, Shiv Shankar, Amirmohammad Rooshenas

Most deep neural networks use simple, fixed activation functions, such as sigmoids or rectified linear units, regardless of domain or network structure.

Surprisingly Easy Hard-Attention for Sequence to Sequence Learning

1 code implementation EMNLP 2018 Shiv Shankar, Siddhant Garg, Sunita Sarawagi

In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning.

Hard Attention • Image Captioning • +2
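
The beam approximation the excerpt describes can be sketched for a single decoding step: instead of marginalizing the output distribution over all attention positions, keep only the top-k positions and renormalize. This PyTorch sketch is my own reading of the idea; the tensor layout and names are assumptions.

```python
import torch
import torch.nn.functional as F

def beam_hard_attention(attn_logits, per_pos_logits, k=3):
    """Approximate the marginal output distribution of a hard-attention
    decoder step using only the top-k attention positions (a beam).

    attn_logits:    (batch, src_len) attention scores
    per_pos_logits: (batch, src_len, vocab) output logits conditioned on
                    attending to each source position
    """
    attn = F.softmax(attn_logits, dim=1)
    # Keep the k most probable attention positions and renormalize.
    top_p, top_idx = attn.topk(k, dim=1)                     # (batch, k)
    top_p = top_p / top_p.sum(dim=1, keepdim=True)
    # Output distribution conditioned on each kept position.
    idx = top_idx.unsqueeze(-1).expand(-1, -1, per_pos_logits.size(-1))
    cond = F.softmax(per_pos_logits.gather(1, idx), dim=-1)  # (batch, k, vocab)
    # Marginalize: p(y) ~ sum_j p(attend=j) * p(y | attend=j).
    return (top_p.unsqueeze(-1) * cond).sum(dim=1)           # (batch, vocab)
```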

Labeled Memory Networks for Online Model Adaptation

no code implementations 5 Jul 2017 Shiv Shankar, Sunita Sarawagi

In this paper, we establish the potential of labeled memory networks for online adaptation of a batch-trained neural network to domain-relevant labeled data at deployment time.

Few-Shot Learning
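
The general mechanism of a labeled memory can be sketched in numpy: store (embedding, label) pairs as they arrive at deployment time and classify new inputs by similarity-weighted votes over the stored labels. This illustrates the adaptation idea only, not the paper's exact architecture; all names are hypothetical.

```python
import numpy as np

class LabeledMemory:
    """A minimal labeled key-value memory for deployment-time adaptation."""
    def __init__(self, n_classes):
        self.keys, self.labels = [], []
        self.n_classes = n_classes

    def write(self, embedding, label):
        # Store a unit-normalized embedding with its label.
        self.keys.append(embedding / np.linalg.norm(embedding))
        self.labels.append(label)

    def read(self, embedding):
        q = embedding / np.linalg.norm(embedding)
        sims = np.stack(self.keys) @ q                  # cosine similarities
        weights = np.exp(sims) / np.exp(sims).sum()     # softmax over memory slots
        # Similarity-weighted vote over the stored labels.
        probs = np.zeros(self.n_classes)
        for w, y in zip(weights, self.labels):
            probs[y] += w
        return probs
```

A base network's predictions can then be blended with the memory's vote, letting the deployed model track new labeled data without retraining.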
