Search Results for author: Aldo Pacchiano

Found 64 papers, 9 papers with code

Provable Interactive Learning with Hindsight Instruction Feedback

no code implementations14 Apr 2024 Dipendra Misra, Aldo Pacchiano, Robert E. Schapire

We study interactive learning in a setting where the agent has to generate a response (e.g., an action or trajectory) given a context and an instruction.

Multiple-policy Evaluation via Density Estimation

no code implementations29 Mar 2024 Yilei Chen, Aldo Pacchiano, Ioannis Ch. Paschalidis

Up to low-order and logarithmic terms, $\mathrm{CAESAR}$ achieves a sample complexity $\tilde{O}\left(\frac{H^4}{\epsilon^2}\sum_{h=1}^H\max_{k\in[K]}\sum_{s, a}\frac{(d_h^{\pi^k}(s, a))^2}{\mu^*_h(s, a)}\right)$, where $d^{\pi}$ is the visitation distribution of policy $\pi$, $\mu^*$ is the optimal sampling distribution, and $H$ is the horizon.

Density Estimation

Active Preference Optimization for Sample Efficient RLHF

1 code implementation16 Feb 2024 Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury

Reinforcement Learning from Human Feedback (RLHF) is pivotal in aligning Large Language Models (LLMs) with human preferences.

Active Learning

Contextual Bandits with Stage-wise Constraints

no code implementations15 Jan 2024 Aldo Pacchiano, Mohammad Ghavamzadeh, Peter Bartlett

In the setting that the constraint is in expectation, we further specialize our results to multi-armed bandits and propose a computationally efficient algorithm for this setting with regret analysis.

Multi-Armed Bandits

Experiment Planning with Function Approximation

no code implementations NeurIPS 2023 Aldo Pacchiano, Jonathan N. Lee, Emma Brunskill

We study the problem of experiment planning with function approximation in contextual bandit problems.

Model Selection

Anytime Model Selection in Linear Bandits

1 code implementation NeurIPS 2023 Parnian Kassraie, Nicolas Emmenegger, Andreas Krause, Aldo Pacchiano

This allows us to develop ALEXP, which has an exponentially improved ($\log M$) dependence on $M$ for its regret.

Model Selection
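
The $\log M$ dependence is characteristic of exponential-weights aggregation over the $M$ candidate models. Below is a generic sketch of that aggregation step only, under made-up losses; ALEXP itself combines this kind of aggregation with exploration in the linear-bandit setting, so treat this as an illustration of where the $\log M$ comes from, not the paper's algorithm:

```python
import numpy as np

def exp_weights_picks(loss_matrix, lr=0.5):
    """Exponential weights over M candidate models: each model's weight is
    proportional to exp(-lr * its cumulative loss), so regret against the
    best model scales with log(M) rather than M."""
    T, M = loss_matrix.shape
    cum = np.zeros(M)
    picks = []
    for t in range(T):
        w = np.exp(-lr * (cum - cum.min()))  # subtract min for numerical stability
        w /= w.sum()
        picks.append(int(np.argmax(w)))      # most-weighted model this round
        cum += loss_matrix[t]
    return picks

# Model 2 incurs the smallest per-round loss, so weight concentrates on it.
losses = np.tile([0.9, 0.9, 0.1], (50, 1))
print(exp_weights_picks(losses)[-1])  # 2
```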

Data-Driven Online Model Selection With Regret Guarantees

no code implementations5 Jun 2023 Aldo Pacchiano, Christoph Dann, Claudio Gentile

We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner.

Decision Making Model Selection

Improving Offline RL by Blending Heuristics

no code implementations1 Jun 2023 Sinong Geng, Aldo Pacchiano, Andrey Kolobov, Ching-An Cheng

We propose Heuristic Blending (HUBL), a simple performance-improving technique for a broad class of offline RL algorithms based on value bootstrapping.

D4RL Offline RL

Estimating Optimal Policy Value in General Linear Contextual Bandits

no code implementations19 Feb 2023 Jonathan N. Lee, Weihao Kong, Aldo Pacchiano, Vidya Muthukumar, Emma Brunskill

Whether this is possible for more realistic context distributions has remained an open and important question for tasks such as model selection.

Model Selection Multi-Armed Bandits

Transfer RL via the Undo Maps Formalism

no code implementations26 Nov 2022 Abhi Gupta, Ted Moskovitz, David Alvarez-Melis, Aldo Pacchiano

Transferring knowledge across domains is one of the most fundamental problems in machine learning, but doing so effectively in the context of reinforcement learning remains largely an open problem.

Imitation Learning Transfer Learning

Leveraging Offline Data in Online Reinforcement Learning

no code implementations9 Nov 2022 Andrew Wagenmaker, Aldo Pacchiano

Practical scenarios often motivate an intermediate setting: if we have some set of offline data and, in addition, may also interact with the environment, how can we best use the offline data to minimize the number of online interactions necessary to learn an $\epsilon$-optimal policy?

Offline RL reinforcement-learning +1

Unpacking Reward Shaping: Understanding the Benefits of Reward Engineering on Sample Complexity

no code implementations18 Oct 2022 Abhishek Gupta, Aldo Pacchiano, Yuexiang Zhai, Sham M. Kakade, Sergey Levine

Reinforcement learning provides an automated framework for learning behaviors from high-level reward specifications, but in practice the choice of reward function can be crucial for good results: while in principle the reward only needs to specify what the task is, in reality practitioners often need to design more detailed rewards that provide the agent with some hints about how the task should be completed.

reinforcement-learning Reinforcement Learning (RL)

Neural Design for Genetic Perturbation Experiments

no code implementations26 Jul 2022 Aldo Pacchiano, Drausin Wulsin, Robert A. Barton, Luis Voloch

The problem of how to genetically modify cells in order to maximize a certain cellular phenotype has taken center stage in drug development over the last few years (with, for example, genetically edited CAR-T, CAR-NK, and CAR-NKT cells entering cancer clinical trials).

Best of Both Worlds Model Selection

no code implementations29 Jun 2022 Aldo Pacchiano, Christoph Dann, Claudio Gentile

We study the problem of model selection in bandit scenarios in the presence of nested policy classes, with the goal of obtaining simultaneous adversarial and stochastic ("best of both worlds") high-probability regret guarantees.

Model Selection

Joint Representation Training in Sequential Tasks with Shared Structure

no code implementations24 Jun 2022 Aldo Pacchiano, Ofir Nachum, Nilesh Tripuraneni, Peter Bartlett

In contrast with previous work that has studied multi-task RL in other function approximation models, we show that, given a bilinear optimization oracle and finite state-action spaces, there exists a computationally efficient algorithm for multitask MatrixRL via a reduction to quadratic programming.

Multi-Armed Bandits Reinforcement Learning (RL)

Online Nonsubmodular Minimization with Delayed Costs: From Full Information to Bandit Feedback

no code implementations15 May 2022 Tianyi Lin, Aldo Pacchiano, Yaodong Yu, Michael I. Jordan

Motivated by applications to online learning in sparse estimation and Bayesian optimization, we consider the problem of online unconstrained nonsubmodular minimization with delayed costs in both full information and bandit feedback settings.

Bayesian Optimization

Meta Learning MDPs with Linear Transition Models

no code implementations21 Jan 2022 Robert Müller, Aldo Pacchiano

We study meta-learning in Markov Decision Processes (MDP) with linear transition models in the undiscounted episodic setting.

Meta-Learning

Neural Pseudo-Label Optimism for the Bank Loan Problem

no code implementations NeurIPS 2021 Aldo Pacchiano, Shaun Singh, Edward Chou, Alexander C. Berg, Jakob Foerster

The lender only observes whether a customer will repay a loan if the loan is issued to begin with, and thus modeled decisions affect what data is available to the lender for future decisions.

Decision Making Pseudo Label

Dueling RL: Reinforcement Learning with Trajectory Preferences

no code implementations8 Nov 2021 Aldo Pacchiano, Aadirupa Saha, Jonathan Lee

We consider the problem of preference-based reinforcement learning (PbRL), where, unlike traditional reinforcement learning, an agent receives feedback only as a 1-bit (0/1) preference over a trajectory pair, rather than absolute rewards.

reinforcement-learning Reinforcement Learning (RL)

Towards an Understanding of Default Policies in Multitask Policy Optimization

no code implementations4 Nov 2021 Ted Moskovitz, Michael Arbel, Jack Parker-Holder, Aldo Pacchiano

Much of the recent success of deep reinforcement learning has been driven by regularized policy optimization (RPO) algorithms with strong performance across multiple domains.

Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity

no code implementations15 Jun 2021 Dhruv Malik, Aldo Pacchiano, Vishwak Srinivasan, Yuanzhi Li

Reinforcement learning (RL) is empirically successful in complex nonlinear Markov decision processes (MDPs) with continuous state spaces.

Atari Games reinforcement-learning +1

Parallelizing Contextual Bandits

no code implementations21 May 2021 Jeffrey Chan, Aldo Pacchiano, Nilesh Tripuraneni, Yun S. Song, Peter Bartlett, Michael I. Jordan

Standard approaches to decision-making under uncertainty focus on sequential exploration of the space of decisions.

Decision Making Decision Making Under Uncertainty +1

Near Optimal Policy Optimization via REPS

no code implementations NeurIPS 2021 Aldo Pacchiano, Jonathan Lee, Peter Bartlett, Ofir Nachum

Since its introduction a decade ago, \emph{relative entropy policy search} (REPS) has demonstrated successful policy learning on a number of simulated and real-world robotic domains, not to mention providing algorithmic components used by many recently proposed reinforcement learning (RL) algorithms.

Reinforcement Learning (RL)

ES-ENAS: Efficient Evolutionary Optimization for Large Hybrid Search Spaces

2 code implementations19 Jan 2021 Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Qiuyi Zhang, Daiyi Peng, Deepali Jain, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Yuxiang Yang

In this paper, we approach the problem of optimizing blackbox functions over large hybrid search spaces consisting of both combinatorial and continuous parameters.

Combinatorial Optimization Continuous Control +4

Fairness with Continuous Optimal Transport

no code implementations6 Jan 2021 Silvia Chiappa, Aldo Pacchiano

Whilst optimal transport (OT) is increasingly being recognized as a powerful and flexible approach for dealing with fairness issues, current OT fairness methods are confined to the use of discrete OT.

Fairness

Regret Bound Balancing and Elimination for Model Selection in Bandits and RL

no code implementations24 Dec 2020 Aldo Pacchiano, Christoph Dann, Claudio Gentile, Peter Bartlett

Finally, unlike recent efforts in model selection for linear stochastic bandits, our approach is versatile enough to also cover cases where the context information is generated by an adversarial environment, rather than a stochastic one.

Model Selection

Online Model Selection for Reinforcement Learning with Function Approximation

no code implementations19 Nov 2020 Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill

Towards this end, we consider the problem of model selection in RL with function approximation, given a set of candidate RL algorithms with known regret guarantees.

Model Selection reinforcement-learning +1

Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian

no code implementations NeurIPS 2020 Jack Parker-Holder, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alex Peysakhovich, Aldo Pacchiano, Jakob Foerster

In the era of ever decreasing loss functions, SGD and its various offspring have become the go-to optimization tool in machine learning and are a key component of the success of deep neural networks (DNNs).

BIG-bench Machine Learning

Accelerated Message Passing for Entropy-Regularized MAP Inference

no code implementations ICML 2020 Jonathan N. Lee, Aldo Pacchiano, Peter Bartlett, Michael I. Jordan

Maximum a posteriori (MAP) inference in discrete-valued Markov random fields is a fundamental problem in machine learning that involves identifying the most likely configuration of random variables given a distribution.

Towards Tractable Optimism in Model-Based Reinforcement Learning

no code implementations21 Jun 2020 Aldo Pacchiano, Philip J. Ball, Jack Parker-Holder, Krzysztof Choromanski, Stephen Roberts

The principle of optimism in the face of uncertainty is prevalent throughout sequential decision making problems such as multi-armed bandits and reinforcement learning (RL).

Continuous Control Decision Making +4

Stochastic Bandits with Linear Constraints

no code implementations17 Jun 2020 Aldo Pacchiano, Mohammad Ghavamzadeh, Peter Bartlett, Heinrich Jiang

We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB), and prove an $\widetilde{\mathcal{O}}(\frac{d\sqrt{T}}{\tau-c_0})$ bound on its $T$-round regret, where the denominator is the difference between the constraint threshold and the cost of a known feasible action.

Multi-Armed Bandits

Regret Balancing for Bandit and RL Model Selection

no code implementations9 Jun 2020 Yasin Abbasi-Yadkori, Aldo Pacchiano, My Phan

Given a set of base learning algorithms, an effective model selection strategy adapts to the best learning algorithm in an online fashion.

Model Selection
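
The balancing idea can be sketched in a few lines: at each round, play the base learner whose putative regret bound is currently smallest, so that all candidate bounds grow at the same rate. The stub learner and bound interfaces below are assumptions for illustration, and this omits the elimination test the full algorithm uses against misspecified learners:

```python
import numpy as np

def regret_balancing(learners, bounds, horizon, rng):
    """Play the base learner whose claimed regret bound, evaluated at its
    own play count, is smallest. `bounds[i](n)` is learner i's putative
    regret after n of its plays; `learners[i](rng)` returns one round's
    reward from learner i."""
    plays = np.zeros(len(learners), dtype=int)
    reward = np.zeros(len(learners))
    for _ in range(horizon):
        i = int(np.argmin([b(max(plays[j], 1)) for j, b in enumerate(bounds)]))
        reward[i] += learners[i](rng)
        plays[i] += 1
    return plays, reward

# A learner claiming a sqrt(n) bound ends up played more than one claiming
# n^(2/3), since balancing equalizes the growth of the two bounds.
rng = np.random.default_rng(0)
learners = [lambda r: float(r.random() < 0.6), lambda r: float(r.random() < 0.5)]
bounds = [lambda n: np.sqrt(n), lambda n: n ** (2 / 3)]
plays, _ = regret_balancing(learners, bounds, 1000, rng)
```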

Learning the Truth From Only One Side of the Story

no code implementations8 Jun 2020 Heinrich Jiang, Qijia Jiang, Aldo Pacchiano

Learning under one-sided feedback (i.e., where we only observe the labels for examples we predicted positively on) is a fundamental problem in machine learning -- applications include lending and recommendation systems.

Recommendation Systems

Stochastic Flows and Geometric Optimization on the Orthogonal Group

no code implementations ICML 2020 Krzysztof Choromanski, David Cheikhi, Jared Davis, Valerii Likhosherstov, Achille Nazaret, Achraf Bahamou, Xingyou Song, Mrugank Akarte, Jack Parker-Holder, Jacob Bergquist, Yuan Gao, Aldo Pacchiano, Tamas Sarlos, Adrian Weller, Vikas Sindhwani

We present a new class of stochastic, geometrically-driven optimization algorithms on the orthogonal group $O(d)$ and naturally reductive homogeneous manifolds obtained from the action of the rotation group $SO(d)$.

Metric Learning Stochastic Optimization

Robustness Guarantees for Mode Estimation with an Application to Bandits

no code implementations5 Mar 2020 Aldo Pacchiano, Heinrich Jiang, Michael I. Jordan

Mode estimation is a classical problem in statistics with a wide range of applications in machine learning.

Multi-Armed Bandits
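
For intuition, a generic histogram-based mode estimator is a few lines; this is a textbook baseline for the problem being analyzed, not the paper's estimator or its robustness analysis:

```python
import numpy as np

def histogram_mode(samples, bins=20):
    """Crude mode estimate: the midpoint of the densest histogram bin.
    Unlike the sample mean, this targets the most likely value."""
    counts, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)
print(histogram_mode(data))  # close to 2.0, the true mode
```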

Model Selection in Contextual Stochastic Bandit Problems

no code implementations NeurIPS 2020 Aldo Pacchiano, My Phan, Yasin Abbasi-Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, Csaba Szepesvari

Our methods rely on a novel and generic smoothing transformation for bandit algorithms that permits us to obtain optimal $O(\sqrt{T})$ model selection guarantees for stochastic contextual bandit problems as long as the optimal base algorithm satisfies a high probability regret guarantee.

Model Selection Multi-Armed Bandits

On Thompson Sampling with Langevin Algorithms

no code implementations ICML 2020 Eric Mazumdar, Aldo Pacchiano, Yi-An Ma, Peter L. Bartlett, Michael I. Jordan

The resulting approximate Thompson sampling algorithm has logarithmic regret and its computational complexity does not scale with the time horizon of the algorithm.

Thompson Sampling
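
The idea is to replace exact posterior sampling inside Thompson sampling with a short run of Langevin dynamics. A minimal sketch for a Gaussian arm mean under a Gaussian prior; the step size, step count, and toy reward model are illustrative assumptions, not the paper's tuned choices:

```python
import numpy as np

def langevin_sample(obs, prior_var=1.0, noise_var=1.0, steps=200,
                    eta=0.05, rng=None):
    """Approximate posterior sample of a Gaussian arm mean via unadjusted
    Langevin dynamics:
      theta <- theta + (eta/2) * grad log p(theta | obs) + sqrt(eta) * N(0, 1).
    Stable here as long as eta < 2 / (1/prior_var + len(obs)/noise_var)."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = 0.0
    for _ in range(steps):
        # log-posterior gradient: N(0, prior_var) prior, Gaussian likelihood
        grad = -theta / prior_var + sum(x - theta for x in obs) / noise_var
        theta += 0.5 * eta * grad + np.sqrt(eta) * rng.standard_normal()
    return theta

# One Thompson-sampling round: sample each arm's mean, pull the argmax.
rng = np.random.default_rng(1)
history = {0: [0.2, 0.1], 1: [0.9, 1.1]}
arm = max(history, key=lambda a: langevin_sample(history[a], rng=rng))
```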

Ready Policy One: World Building Through Active Learning

no code implementations ICML 2020 Philip Ball, Jack Parker-Holder, Aldo Pacchiano, Krzysztof Choromanski, Stephen Roberts

Model-Based Reinforcement Learning (MBRL) offers a promising direction for sample efficient learning, often achieving state of the art results for continuous control tasks.

Active Learning Continuous Control +1

ES-MAML: Simple Hessian-Free Meta Learning

1 code implementation ICLR 2020 Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Yunhao Tang

We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES).

Meta-Learning

Reinforcement Learning with Chromatic Networks

no code implementations25 Sep 2019 Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Deepali Jain, Yuxiang Yang

We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.

Neural Architecture Search reinforcement-learning +1

Behavior-Guided Reinforcement Learning

no code implementations25 Sep 2019 Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Anna Choromanska, Krzysztof Choromanski, Michael I. Jordan

We introduce a new approach for comparing reinforcement learning policies, using Wasserstein distances (WDs) in a newly defined latent behavioral space.

reinforcement-learning Reinforcement Learning (RL)

Wasserstein Fair Classification

1 code implementation28 Jul 2019 Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, Silvia Chiappa

We propose an approach to fair classification that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances.

Classification Fairness +1
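
In one dimension, the Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values; this is the flavor of quantity such a penalty minimizes between per-group classifier outputs. A minimal sketch with made-up scores (not the paper's training procedure):

```python
import numpy as np

def wasserstein_1(x, y):
    """Empirical 1-D Wasserstein-1 distance between equal-size samples:
    mean absolute difference of the sorted values."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert len(x) == len(y), "this simple closed form assumes equal sizes"
    return float(np.mean(np.abs(x - y)))

# Distance between the classifier score distributions of two groups;
# a fairness penalty would drive this toward zero during training.
scores_a = [0.1, 0.4, 0.5, 0.9]
scores_b = [0.2, 0.4, 0.6, 0.8]
print(wasserstein_1(scores_a, scores_b))  # 0.075
```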

Reinforcement Learning with Chromatic Networks for Compact Architecture Search

no code implementations10 Jul 2019 Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Deepali Jain, Yuxiang Yang

We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.

Combinatorial Optimization Neural Architecture Search +2

Learning to Score Behaviors for Guided Policy Optimization

1 code implementation ICML 2020 Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Anna Choromanska, Krzysztof Choromanski, Michael I. Jordan

We introduce a new approach for comparing reinforcement learning policies, using Wasserstein distances (WDs) in a newly defined latent behavioral space.

Efficient Exploration Imitation Learning +2

Structured Monte Carlo Sampling for Nonisotropic Distributions via Determinantal Point Processes

no code implementations29 May 2019 Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang

We propose a new class of structured methods for Monte Carlo (MC) sampling, called DPPMC, designed for high-dimensional nonisotropic distributions where samples are correlated to reduce the variance of the estimator via determinantal point processes.

Point Processes

From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization

1 code implementation NeurIPS 2019 Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang

ASEBO adapts to the geometry of the function and learns, on the fly, optimal sets of sensing directions used to probe it.

Multi-Armed Bandits

Provably Robust Blackbox Optimization for Reinforcement Learning

no code implementations7 Mar 2019 Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani

Interest in derivative-free optimization (DFO) and "evolutionary strategies" (ES) has recently surged in the Reinforcement Learning (RL) community, with growing evidence that they can match state of the art methods for policy optimization problems in Robotics.

reinforcement-learning Reinforcement Learning (RL) +1

Gen-Oja: Simple & Efficient Algorithm for Streaming Generalized Eigenvector Computation

no code implementations NeurIPS 2018 Kush Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L. Bartlett, Michael I. Jordan

In this paper, we study the problems of principal Generalized Eigenvector computation and Canonical Correlation Analysis in the stochastic setting.

Gen-Oja: A Two-time-scale approach for Streaming CCA

no code implementations20 Nov 2018 Kush Bhatia, Aldo Pacchiano, Nicolas Flammarion, Peter L. Bartlett, Michael I. Jordan

In this paper, we study the problems of principal Generalized Eigenvector computation and Canonical Correlation Analysis in the stochastic setting.

Vocal Bursts Valence Prediction

Online learning with kernel losses

no code implementations27 Feb 2018 Aldo Pacchiano, Niladri S. Chatterji, Peter L. Bartlett

We also study the full information setting when the underlying losses are kernel functions and present an adapted exponential weights algorithm and a conditional gradient descent algorithm.

Real time clustering of time series using triangular potentials

no code implementations18 Feb 2015 Aldo Pacchiano, Oliver Williams

Motivated by the problem of computing investment portfolio weightings we investigate various methods of clustering as alternatives to traditional mean-variance approaches.

Clustering Time Series +1
