Search Results for author: Dale Schuurmans

Found 67 papers, 15 papers with code

Combiner: Full Attention Transformer with Sparse Computation Cost

no code implementations 12 Jul 2021 Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai

However, the key limitation of transformers is their quadratic memory and time complexity $\mathcal{O}(L^2)$ with respect to the sequence length in attention layers, which restricts application in extremely long sequences.

Image Generation, Language Modelling
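
The quadratic cost is easy to see in a minimal single-head attention computation (a generic sketch for illustration, not the Combiner method itself): the score matrix alone has $L \times L$ entries.

```python
import numpy as np

# Toy single-head attention: the score matrix is L x L, so memory and
# time grow quadratically with the sequence length L.
L, d = 128, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))

scores = Q @ K.T / np.sqrt(d)  # shape (L, L) -> O(L^2) cost
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V  # shape (L, d)

assert scores.shape == (L, L)
assert out.shape == (L, d)
```

Doubling `L` quadruples the size of `scores`, which is exactly the bottleneck sparse-attention methods such as Combiner aim to avoid.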

On the Sample Complexity of Batch Reinforcement Learning with Policy-Induced Data

no code implementations 18 Jun 2021 Chenjun Xiao, Ilbin Lee, Bo Dai, Dale Schuurmans, Csaba Szepesvari

We study the fundamental question of the sample complexity of learning a good policy in finite Markov decision processes (MDPs) when the data available for learning is obtained by following a logging policy that must be chosen without knowledge of the underlying MDP.

Leveraging Non-uniformity in First-order Non-convex Optimization

no code implementations 13 May 2021 Jincheng Mei, Yue Gao, Bo Dai, Csaba Szepesvari, Dale Schuurmans

Classical global convergence results for first-order methods rely on uniform smoothness and the Łojasiewicz inequality.

Joint Attention for Multi-Agent Coordination and Social Learning

no code implementations 15 Apr 2021 Dennis Lee, Natasha Jaques, Chase Kew, Jiaxing Wu, Douglas Eck, Dale Schuurmans, Aleksandra Faust

We then train agents to minimize the difference between the attention weights that they apply to the environment at each timestep, and the attention of other agents.

On the Optimality of Batch Policy Optimization Algorithms

no code implementations 6 Apr 2021 Chenjun Xiao, Yifan Wu, Tor Lattimore, Bo Dai, Jincheng Mei, Lihong Li, Csaba Szepesvari, Dale Schuurmans

First, we introduce a class of confidence-adjusted index algorithms that unifies optimistic and pessimistic principles in a common framework, which enables a general analysis.

Value Prediction

Optimization Issues in KL-Constrained Approximate Policy Iteration

no code implementations 11 Feb 2021 Nevena Lazić, Botao Hao, Yasin Abbasi-Yadkori, Dale Schuurmans, Csaba Szepesvári

We compare the use of KL divergence as a constraint vs. as a regularizer, and point out several optimization issues with the widely-used constrained approach.
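The two formulations contrasted there can be written down abstractly (a generic illustration, not the authors' algorithm): the constrained form requires KL(π_new ‖ π_old) ≤ ε, while the regularized form subtracts λ · KL from the objective; `eps` and `lam` below are assumed hyperparameter names.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two categorical distributions."""
    return float(np.sum(p * np.log(p / q)))

pi_old = np.array([0.5, 0.3, 0.2])
pi_new = np.array([0.4, 0.4, 0.2])
expected_reward = 1.0  # placeholder surrogate-objective value

# Constrained form: maximize the objective subject to KL <= eps.
eps = 0.1
feasible = kl(pi_new, pi_old) <= eps

# Regularized form: fold the KL term into the objective with weight lam.
lam = 0.5
regularized_objective = expected_reward - lam * kl(pi_new, pi_old)

assert kl(pi_old, pi_old) == 0.0
assert regularized_objective < expected_reward
```

The constrained form needs a feasibility check (and typically a line search or dual update) at every step, which is one source of the optimization issues the paper examines.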

Offline Policy Selection under Uncertainty

1 code implementation 12 Dec 2020 Mengjiao Yang, Bo Dai, Ofir Nachum, George Tucker, Dale Schuurmans

More importantly, we show how the belief distribution estimated by BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric, and we empirically demonstrate that this selection procedure significantly outperforms existing approaches, such as ranking policies according to mean or high-confidence lower bound value estimates.

Escaping the Gravitational Pull of Softmax

no code implementations NeurIPS 2020 Jincheng Mei, Chenjun Xiao, Bo Dai, Lihong Li, Csaba Szepesvari, Dale Schuurmans

Both findings are based on an analysis of convergence rates using the Non-uniform Łojasiewicz (NŁ) inequalities.

Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration

no code implementations NeurIPS 2020 Hanjun Dai, Rishabh Singh, Bo Dai, Charles Sutton, Dale Schuurmans

In this paper we propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data, where parameter gradients are estimated using a learned sampler that mimics local search.

Language Modelling

CoinDICE: Off-Policy Confidence Interval Estimation

no code implementations NeurIPS 2020 Bo Dai, Ofir Nachum, Yinlam Chow, Lihong Li, Csaba Szepesvári, Dale Schuurmans

We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning, where the goal is to estimate a confidence interval on a target policy's value, given only access to a static experience dataset collected by unknown behavior policies.

Attention that does not Explain Away

no code implementations 29 Sep 2020 Nan Ding, Xinjie Fan, Zhenzhong Lan, Dale Schuurmans, Radu Soricut

Models based on the Transformer architecture have achieved better accuracy than the ones based on competing architectures for a large set of tasks.

EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL

no code implementations 21 Jul 2020 Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, Shixiang Shane Gu

In this work, we closely investigate an important simplification of BCQ -- a prior approach for offline RL -- which removes a heuristic design choice and naturally restricts extracted policies to remain exactly within the support of a given behavior policy.

Decision Making, Offline RL +1

Off-Policy Evaluation via the Regularized Lagrangian

no code implementations NeurIPS 2020 Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, Dale Schuurmans

The recently proposed distribution correction estimation (DICE) family of estimators has advanced the state of the art in off-policy evaluation from behavior-agnostic data.

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

no code implementations ICML 2020 Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans

This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.

Model Compression
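
Layerwise imitation can be sketched as minimizing, per layer, the distance between the thin student's intermediate output and the wide teacher's (a simplified numpy sketch; the layer shapes and the MSE loss here are illustrative assumptions, not the paper's exact training recipe).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 32))  # a batch of inputs

# Hypothetical single-layer outputs: a wide teacher and a thin student
# projected to the same width so the outputs can be compared directly.
W_teacher = rng.standard_normal((32, 64)) * 0.1
W_student = rng.standard_normal((32, 64)) * 0.1

h_teacher = np.tanh(x @ W_teacher)
h_student = np.tanh(x @ W_student)

# Layerwise imitation loss: force the student's intermediate output
# to mimic the teacher's, layer by layer.
imitation_loss = np.mean((h_student - h_teacher) ** 2)
assert imitation_loss >= 0.0
```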

Scalable Deep Generative Modeling for Sparse Graphs

1 code implementation ICML 2020 Hanjun Dai, Azade Nazi, Yujia Li, Bo Dai, Dale Schuurmans

Based on this, we develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix, and importantly reduces the graph generation time complexity to $O((n + m)\log n)$.

Graph Generation

A maximum-entropy approach to off-policy evaluation in average-reward MDPs

no code implementations NeurIPS 2020 Nevena Lazic, Dong Yin, Mehrdad Farajtabar, Nir Levine, Dilan Gorur, Chris Harris, Dale Schuurmans

This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).

On the Global Convergence Rates of Softmax Policy Gradient Methods

no code implementations ICML 2020 Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, Dale Schuurmans

First, we show that with the true gradient, policy gradient with a softmax parametrization converges at a $O(1/t)$ rate, with constants depending on the problem and initialization.

Policy Gradient Methods
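
On a toy bandit, the softmax parametrization and its exact policy gradient look like this (a minimal sketch; the step size 0.4 and reward values are assumptions for illustration, and the $O(1/t)$ rate itself is the paper's result, not demonstrated here):

```python
import numpy as np

r = np.array([1.0, 0.5, 0.0])  # per-action rewards of a toy bandit
theta = np.zeros(3)            # softmax logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

values = []
eta = 0.4  # assumed step size
for _ in range(200):
    pi = softmax(theta)
    v = pi @ r  # expected reward under pi
    # Exact policy gradient for the softmax parametrization:
    # d v / d theta_a = pi_a * (r_a - v)
    theta += eta * pi * (r - v)
    values.append(v)

# Gradient ascent with the true gradient improves the expected reward.
assert all(b >= a for a, b in zip(values, values[1:]))
```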

Energy-Based Processes for Exchangeable Data

1 code implementation ICML 2020 Mengjiao Yang, Bo Dai, Hanjun Dai, Dale Schuurmans

Recently there has been growing interest in modeling sets with exchangeability such as point clouds.

Denoising, Point Cloud Generation

Variational Inference for Deep Probabilistic Canonical Correlation Analysis

no code implementations 9 Mar 2020 Mahdi Karami, Dale Schuurmans

In this paper, we propose a deep probabilistic multi-view model that is composed of a linear multi-view layer based on probabilistic canonical correlation analysis (CCA) description in the latent space together with deep generative networks as observation models.

Multi-View Learning, Variational Inference

Batch Stationary Distribution Estimation

1 code implementation ICML 2020 Junfeng Wen, Bo Dai, Lihong Li, Dale Schuurmans

We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
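For a small chain with a known transition matrix, the target quantity can be computed directly by power iteration (a sketch of the estimation target only, not the batch method in the paper):

```python
import numpy as np

# Row-stochastic transition matrix of a small ergodic Markov chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

mu = np.full(3, 1/3)  # start from the uniform distribution
for _ in range(1000):
    mu = mu @ P       # advance the distribution one step

# mu is (numerically) the stationary distribution: mu = mu P.
assert np.allclose(mu, mu @ P)
assert np.isclose(mu.sum(), 1.0)
```

The batch setting is harder precisely because `P` is unknown and only sampled transitions are available.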

GenDICE: Generalized Offline Estimation of Stationary Values

2 code implementations ICLR 2020 Ruiyi Zhang, Bo Dai, Lihong Li, Dale Schuurmans

An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain.

Learning to Combat Compounding-Error in Model-Based Reinforcement Learning

no code implementations 24 Dec 2019 Chenjun Xiao, Yifan Wu, Chen Ma, Dale Schuurmans, Martin Müller

Despite its potential to improve sample complexity versus model-free approaches, model-based reinforcement learning can fail catastrophically if the model is inaccurate.

Model-based Reinforcement Learning

AlgaeDICE: Policy Gradient from Arbitrary Experience

no code implementations 4 Dec 2019 Ofir Nachum, Bo Dai, Ilya Kostrikov, Yin-Lam Chow, Lihong Li, Dale Schuurmans

In many real-world applications of reinforcement learning (RL), interactions with the environment are limited due to cost or feasibility.

Surrogate Objectives for Batch Policy Optimization in One-step Decision Making

no code implementations NeurIPS 2019 Minmin Chen, Ramki Gummadi, Chris Harris, Dale Schuurmans

We investigate batch policy optimization for cost-sensitive classification and contextual bandits---two related tasks that obviate exploration but require generalizing from observed rewards to action selections in unseen contexts.

Decision Making, Multi-Armed Bandits

Maximum Entropy Monte-Carlo Planning

no code implementations NeurIPS 2019 Chenjun Xiao, Ruitong Huang, Jincheng Mei, Dale Schuurmans, Martin Müller

We then extend this approach to general sequential decision making by developing a general MCTS algorithm, Maximum Entropy for Tree Search (MENTS).

Atari Games, Decision Making

Invertible Convolutional Flow

1 code implementation NeurIPS 2019 Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, Daniel Duckworth

We show that these transforms allow more effective normalizing flow models to be developed for generative image models.

Domain Aggregation Networks for Multi-Source Domain Adaptation

no code implementations ICML 2020 Junfeng Wen, Russell Greiner, Dale Schuurmans

In many real-world applications, we want to exploit multiple source datasets of similar tasks to learn a model for a different but related target dataset -- e.g., recognizing characters of a new font using a set of different fonts.

Domain Adaptation, Sentiment Analysis

Advantage Amplification in Slowly Evolving Latent-State Environments

no code implementations 29 May 2019 Martin Mladenov, Ofer Meshi, Jayden Ooi, Dale Schuurmans, Craig Boutilier

Latent-state environments with long horizons, such as those faced by recommender systems, pose significant challenges for reinforcement learning (RL).

Recommendation Systems

Exponential Family Estimation via Adversarial Dynamics Embedding

1 code implementation NeurIPS 2019 Bo Dai, Zhen Liu, Hanjun Dai, Niao He, Arthur Gretton, Le Song, Dale Schuurmans

We present an efficient algorithm for maximum likelihood estimation (MLE) of exponential family models, with a general parametrization of the energy function that includes neural networks.

Learning to Generalize from Sparse and Underspecified Rewards

1 code implementation 19 Feb 2019 Rishabh Agarwal, Chen Liang, Dale Schuurmans, Mohammad Norouzi

The parameters of the auxiliary reward function are optimized with respect to the validation performance of a trained policy.

Semantic Parsing

The Value Function Polytope in Reinforcement Learning

no code implementations 31 Jan 2019 Robert Dadashi, Adrien Ali Taïga, Nicolas Le Roux, Dale Schuurmans, Marc G. Bellemare

We establish geometric and topological properties of the space of value functions in finite state-action Markov decision processes.

Non-delusional Q-learning and value-iteration

no code implementations NeurIPS 2018 Tyler Lu, Dale Schuurmans, Craig Boutilier

We identify a fundamental source of error in Q-learning and other forms of dynamic programming with function approximation.


Understanding the impact of entropy on policy optimization

1 code implementation 27 Nov 2018 Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, Dale Schuurmans

Entropy regularization is commonly used to improve policy optimization in reinforcement learning.
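The entropy bonus enters the objective as an additive term (a generic sketch on a toy bandit; the temperature name `tau` and the reward values are assumptions, not the paper's experimental setup):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

r = np.array([1.0, 0.5, 0.0])  # toy per-action rewards
tau = 0.1                      # entropy temperature (assumed name)

# For this entropy-regularized objective, the optimal policy is a
# softmax over the rewards at temperature tau (a standard result).
pi = softmax(r / tau)
entropy = -np.sum(pi * np.log(pi))

# Entropy-regularized objective: expected reward plus tau * entropy.
objective = pi @ r + tau * entropy

# The regularized optimum beats, e.g., the uniform policy on this objective.
uniform = np.full(3, 1/3)
alt = uniform @ r + tau * (-np.sum(uniform * np.log(uniform)))
assert objective >= alt
```

Raising `tau` flattens the optimal policy toward uniform, which is the smoothing effect on the optimization landscape that the paper studies.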

Kernel Exponential Family Estimation via Doubly Dual Embedding

1 code implementation 6 Nov 2018 Bo Dai, Hanjun Dai, Arthur Gretton, Le Song, Dale Schuurmans, Niao He

We investigate penalized maximum log-likelihood estimation for exponential family distributions whose natural parameter resides in a reproducing kernel Hilbert space.

Variational Rejection Sampling

no code implementations 5 Apr 2018 Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon

Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, due to high variance in the gradient estimates.

Latent Variable Models, Variational Inference

Smoothed Action Value Functions for Learning Gaussian Policies

no code implementations ICML 2018 Ofir Nachum, Mohammad Norouzi, George Tucker, Dale Schuurmans

State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning.

Continuous Control, Q-Learning

Multi-view Matrix Factorization for Linear Dynamical System Estimation

no code implementations NeurIPS 2017 Mahdi Karami, Martha White, Dale Schuurmans, Csaba Szepesvari

In this paper, we instead reconsider likelihood maximization and develop an optimization based strategy for recovering the latent states and transition parameters.

Global Optimization

Safe Exploration for Identifying Linear Systems via Robust Optimization

no code implementations 30 Nov 2017 Tyler Lu, Martin Zinkevich, Craig Boutilier, Binz Roy, Dale Schuurmans

Motivated by the cooling of Google's data centers, we study how one can safely identify the parameters of a system model with a desired accuracy and confidence level.

Safe Exploration

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control

1 code implementation ICLR 2018 Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.

Continuous Control

Bridging the Gap Between Value and Policy Based Reinforcement Learning

1 code implementation NeurIPS 2017 Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization.


Deep Learning Games

no code implementations NeurIPS 2016 Dale Schuurmans, Martin A. Zinkevich

We investigate a reduction of supervised learning to game playing that reveals new connections and learning methods.

Improving Policy Gradient by Exploring Under-appreciated Rewards

no code implementations 28 Nov 2016 Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

We propose a more directed exploration strategy that promotes exploration of under-appreciated reward regions.

Stochastic Neural Networks with Monotonic Activation Functions

no code implementations 1 Jan 2016 Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner

We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.

Embedding Inference for Structured Multilabel Prediction

no code implementations NeurIPS 2015 Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Nan Ding, Dale Schuurmans

A key bottleneck in structured output prediction is the need for inference during training and testing, usually requiring some form of dynamic programming.

Semi-Supervised Zero-Shot Classification With Label Representation Learning

no code implementations ICCV 2015 Xin Li, Yuhong Guo, Dale Schuurmans

Most existing zero-shot learning methods require a user to first provide a set of semantic visual attributes for each class as side information before applying a two-step prediction procedure that introduces an intermediate attribute prediction problem.

Classification, General Classification +3

Learning with a Strong Adversary

1 code implementation 10 Nov 2015 Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvari

The robustness of neural networks to intended perturbations has recently attracted significant attention.

General Classification

Convex Deep Learning via Normalized Kernels

no code implementations NeurIPS 2014 Özlem Aslan, Xinhua Zhang, Dale Schuurmans

Deep learning has been a long standing pursuit in machine learning, which until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training.

Generalized Conditional Gradient for Sparse Estimation

no code implementations 17 Oct 2014 Yao-Liang Yu, Xinhua Zhang, Dale Schuurmans

Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis, however it also creates significant challenges for efficient algorithm design.

Dictionary Learning, Matrix Completion +1

Adaptive Monte Carlo via Bandit Allocation

no code implementations 13 May 2014 James Neufeld, András György, Dale Schuurmans, Csaba Szepesvári

We consider the problem of sequentially choosing between a set of unbiased Monte Carlo estimators to minimize the mean-squared-error (MSE) of a final combined estimate.
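The allocation problem can be illustrated with two unbiased estimators of the same mean and a greedy variance-based rule (a simplified sketch under assumed noise levels, not the paper's bandit algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.0
# Two unbiased estimators of the same quantity, differing in variance.
draw = [lambda: true_mean + rng.normal(0, 0.1),
        lambda: true_mean + rng.normal(0, 1.0)]

samples = [[draw[k]() for _ in range(20)] for k in range(2)]  # warm-up
for _ in range(500):
    # Greedy rule: sample the estimator with the lower empirical variance,
    # since its samples shrink the MSE of the combined estimate fastest.
    k = int(np.argmin([np.var(s, ddof=1) for s in samples]))
    samples[k].append(draw[k]())

# Nearly all extra samples go to the low-variance estimator.
assert len(samples[0]) > len(samples[1])
```

A proper bandit allocation would also keep exploring the apparently worse estimator, since empirical variances are themselves noisy.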

Polar Operators for Structured Sparse Estimation

no code implementations NeurIPS 2013 Xinhua Zhang, Yao-Liang Yu, Dale Schuurmans

Structured sparse estimation has become an important technique in many areas of data analysis.

Convex Two-Layer Modeling

no code implementations NeurIPS 2013 Özlem Aslan, Hao Cheng, Xinhua Zhang, Dale Schuurmans

Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction.

Convex Relaxations of Bregman Divergence Clustering

no code implementations 26 Sep 2013 Hao Cheng, Xinhua Zhang, Dale Schuurmans

Although many convex relaxations of clustering have been proposed in the past decade, current formulations remain restricted to spherical Gaussian or discriminative models and are susceptible to imbalanced clusters.

A Polynomial-time Form of Robust Regression

no code implementations NeurIPS 2012 Yao-Liang Yu, Özlem Aslan, Dale Schuurmans

Despite the variety of robust regression methods that have been developed, current regression formulations are either NP-hard, or allow unbounded response to even a single leverage point.

Convex Multi-view Subspace Learning

no code implementations NeurIPS 2012 Martha White, Xinhua Zhang, Dale Schuurmans, Yao-Liang Yu

Subspace learning seeks a low dimensional representation of data that enables accurate reconstruction.

Relaxed Clipping: A Global Training Method for Robust Regression and Classification

no code implementations NeurIPS 2010 Min Yang, Linli Xu, Martha White, Dale Schuurmans, Yao-Liang Yu

We present a generic procedure that can be applied to standard loss functions and demonstrate improved robustness in regression and classification problems.

Classification, General Classification

A General Projection Property for Distribution Families

no code implementations NeurIPS 2009 Yao-Liang Yu, Yuxi Li, Dale Schuurmans, Csaba Szepesvári

We prove that linear projections between distribution families with fixed first and second moments are surjective, regardless of dimension.

Discriminative Batch Mode Active Learning

no code implementations NeurIPS 2007 Yuhong Guo, Dale Schuurmans

Most previous studies in active learning have focused on selecting one unlabeled instance at one time while retraining in each iteration.

Active Learning
