1 code implementation • 27 May 2024 • Ju-Seung Byun, Andrew Perrault
To enhance training robustness, reinforcement learning (RL) has adopted techniques from supervised learning, such as ensembles and layer normalization.
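A minimal sketch of what these two techniques look like in an RL value network, assuming PyTorch; `QNetwork` and `EnsembleCritic` are illustrative names, not the paper's code:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """One critic; LayerNorm after each hidden layer stabilizes training."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class EnsembleCritic(nn.Module):
    """Ensemble of critics; taking the minimum gives a pessimistic value estimate."""
    def __init__(self, obs_dim, act_dim, n_members=5):
        super().__init__()
        self.members = nn.ModuleList(
            [QNetwork(obs_dim, act_dim) for _ in range(n_members)])

    def forward(self, obs, act):
        qs = torch.stack([m(obs, act) for m in self.members])  # (n, batch, 1)
        return qs.min(dim=0).values
```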
no code implementations • 23 May 2024 • Jingyi Chen, Ju-Seung Byun, Micha Elsner, Andrew Perrault
Recent advancements in generative models have sparked significant interest within the machine learning community.
no code implementations • 11 Jan 2024 • Xi Chen, Zhihui Zhu, Andrew Perrault
We study reinforcement learning in the presence of an unknown reward perturbation.
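As a rough illustration of the setting only (not the paper's method), an environment wrapper can inject an additive perturbation that the learner never observes directly; Gymnasium is assumed:

```python
import gymnasium as gym

class PerturbedReward(gym.Wrapper):
    """The agent sees r + delta(s, a), where delta is unknown to the learner."""
    def __init__(self, env, perturbation):
        super().__init__(env)
        self.perturbation = perturbation  # hidden from the agent

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        reward = reward + self.perturbation(obs, action)
        return obs, reward, terminated, truncated, info
```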
no code implementations • 14 Dec 2023 • Adam Żychowski, Andrew Perrault, Jacek Mańdziuk
It outperformed all competing methods on 13 datasets under the adversarial accuracy metric, and on all 20 considered datasets under minimax regret.
no code implementations • 17 Jul 2023 • Lily Xu, Esther Rolf, Sara Beery, Joseph R. Bennett, Tanya Berger-Wolf, Tanya Birch, Elizabeth Bondi-Kelly, Justin Brashares, Melissa Chapman, Anthony Corso, Andrew Davies, Nikhil Garg, Angela Gaylard, Robert Heilmayr, Hannah Kerner, Konstantin Klemmer, Vipin Kumar, Lester Mackey, Claire Monteleoni, Paul Moorcroft, Jonathan Palmer, Andrew Perrault, David Thau, Milind Tambe
In this white paper, we synthesize key points made during presentations and discussions from the AI-Assisted Decision Making for Conservation workshop, hosted by the Center for Research on Computation and Society at Harvard University on October 20-21, 2022.
no code implementations • 26 May 2023 • Sanket Shah, Andrew Perrault, Bryan Wilder, Milind Tambe
In this paper, we propose solutions to these issues, avoiding the aforementioned assumptions and utilizing the ML model's features to increase the sample efficiency of learning loss functions.
no code implementations • 28 Aug 2022 • Ju-Seung Byun, Andrew Perrault
Distributional reinforcement learning (DRL) has been shown to improve performance by modeling the value distribution, not just the mean.
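A minimal sketch of a distributional value head in the QR-DQN style, assuming PyTorch; the network outputs N quantiles of the return distribution per action instead of a single mean:

```python
import torch
import torch.nn as nn

class QuantileQNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions, n_quantiles=51, hidden=128):
        super().__init__()
        self.n_actions, self.n_quantiles = n_actions, n_quantiles
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions * n_quantiles),
        )

    def forward(self, obs):
        # (batch, n_actions, n_quantiles): a full return distribution per action
        return self.net(obs).view(-1, self.n_actions, self.n_quantiles)

    def q_values(self, obs):
        # Averaging the quantiles recovers the usual mean Q-value.
        return self.forward(obs).mean(dim=-1)
```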
no code implementations • 30 Mar 2022 • Sanket Shah, Kai Wang, Bryan Wilder, Andrew Perrault, Milind Tambe
Decision-Focused Learning (DFL) is a paradigm for tailoring a predictive model to a downstream optimization task that uses its predictions, so that the model performs better on that specific task.
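A hedged sketch of the DFL idea on a toy item-selection problem, assuming PyTorch; the softmax relaxation of the argmax is one illustrative choice for making the decision differentiable, not the paper's method:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # maps item features to a predicted utility
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

features = torch.randn(100, 10, 4)   # 100 instances, 10 items each
true_utils = torch.randn(100, 10)    # ground-truth utilities

for feats, utils in zip(features, true_utils):
    pred = model(feats).squeeze(-1)            # predicted utility per item
    choice = torch.softmax(pred / 0.1, dim=0)  # soft "pick the best item"
    decision_quality = (choice * utils).sum()  # utility actually realized
    loss = -decision_quality                   # train on decision quality, not MSE
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key contrast with standard supervised training is the loss: the gradient flows through the decision itself, so prediction errors that do not change the decision are not penalized.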
1 code implementation • ICLR 2022 • Ju-Seung Byun, Andrew Perrault
We introduce transition policies that smoothly connect lower-level policies by producing a distribution of states and actions that matches what is expected by the next policy.
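One way to make this concrete (a sketch of the idea only, not necessarily the paper's exact training procedure) is to reward the transition policy with a discriminator trained to recognize (state, action) pairs the next sub-policy expects at hand-off time; PyTorch is assumed:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, sa_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sa_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, sa):
        return self.net(sa)  # logit: "does the next policy expect this (s, a)?"

def transition_reward(disc, state, action):
    """Higher when the produced (s, a) matches the next policy's distribution."""
    sa = torch.cat([state, action], dim=-1)
    return torch.log(torch.sigmoid(disc(sa)) + 1e-8)
```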
1 code implementation • 15 Jun 2021 • Lily Xu, Andrew Perrault, Fei Fang, Haipeng Chen, Milind Tambe
We formulate the problem as a game between the defender and nature, who controls the parameter values of the adversarial behavior, and design an algorithm, MIRROR, to find a robust policy.
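A toy sketch of the underlying minimax idea: the defender maximizes worst-case value over nature's parameter set. Here the game is reduced to an enumerable payoff matrix purely for illustration; MIRROR itself operates with reinforcement learning rather than enumeration:

```python
import numpy as np

rng = np.random.default_rng(0)
# payoff[i, j]: defender value for policy i when nature picks parameters j
payoff = rng.normal(size=(5, 4))

worst_case = payoff.min(axis=1)           # nature responds adversarially
robust_policy = int(worst_case.argmax())  # defender maximizes the worst case
print(robust_policy, worst_case[robust_policy])
```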
no code implementations • NeurIPS 2021 • Kai Wang, Sanket Shah, Haipeng Chen, Andrew Perrault, Finale Doshi-Velez, Milind Tambe
In the predict-then-optimize framework, the objective is to train a predictive model that maps environment features to the parameters of an optimization problem and maximizes decision quality when the optimization is subsequently solved.
1 code implementation • NeurIPS 2020 • Aditya Mate, Jackson Killian, Haifeng Xu, Andrew Perrault, Milind Tambe
Our main contributions are as follows: (i) Building on the Whittle index technique for restless multi-armed bandits (RMABs), we derive conditions under which the Collapsing Bandits problem is indexable.
2 code implementations • 14 Sep 2020 • Lily Xu, Elizabeth Bondi, Fei Fang, Andrew Perrault, Kai Wang, Milind Tambe
Conservation efforts in green security domains to protect wildlife and forests are constrained by the limited availability of defenders (i.e., patrollers), who must patrol vast areas to protect against attackers (e.g., poachers or illegal loggers).
no code implementations • 5 Jul 2020 • Aditya Mate, Jackson A. Killian, Haifeng Xu, Andrew Perrault, Milind Tambe
(ii) We exploit the optimality of threshold policies to build fast algorithms for computing the Whittle index, including a closed-form expression.
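A hedged sketch of computing a Whittle index by binary search on the subsidy for the passive action of a small two-action restless arm; the dynamics below are a toy, and the paper's contribution is precisely to avoid this generic search by exploiting threshold-policy structure:

```python
import numpy as np

def value_iteration(P, R, subsidy, gamma=0.95, iters=2000):
    """Q[s, a] for a 2-action arm; the passive action (a=0) earns the subsidy."""
    n = R.shape[0]
    Q = np.zeros((n, 2))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = np.stack([
            R + subsidy + gamma * P[0] @ V,  # passive
            R + gamma * P[1] @ V,            # active
        ], axis=1)
    return Q

def whittle_index(P, R, state, lo=-10.0, hi=10.0, tol=1e-4):
    """Smallest subsidy that makes the passive action optimal in `state`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        Q = value_iteration(P, R, mid)
        if Q[state, 0] >= Q[state, 1]:
            hi = mid  # passive already optimal: try a smaller subsidy
        else:
            lo = mid
    return (lo + hi) / 2

P = np.array([[[0.9, 0.1], [0.4, 0.6]],    # passive transition matrix
              [[0.7, 0.3], [0.1, 0.9]]])   # active transition matrix
R = np.array([0.0, 1.0])                   # reward per state
print(whittle_index(P, R, state=0))
```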
2 code implementations • NeurIPS 2020 • Kai Wang, Bryan Wilder, Andrew Perrault, Milind Tambe
Solving optimization problems with unknown parameters often requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
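A sketch of that standard two-stage pipeline, which the paper contrasts with decision-focused alternatives; scikit-learn is assumed and the selection problem is a toy:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # item features
theta = rng.normal(size=4)
y = X @ theta + 0.1 * rng.normal(size=200)  # unknown item values

# Stage 1: fit the predictor on predictive loss alone;
# the downstream optimization never informs training.
model = LinearRegression().fit(X, y)

# Stage 2: solve the optimization using the predicted values.
X_new = rng.normal(size=(10, 4))
pred_values = model.predict(X_new)
chosen = int(pred_values.argmax())          # e.g., pick the best item
```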
no code implementations • 16 Dec 2019 • Andrew Perrault, Fei Fang, Arunesh Sinha, Milind Tambe
With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems.
no code implementations • 20 Nov 2019 • Sanket Shah, Arunesh Sinha, Pradeep Varakantham, Andrew Perrault, Milind Tambe
To solve the online problem with a hard bound on risk, we formulate it as a Reinforcement Learning (RL) problem with constraints on the action space that enforce the risk bound.
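A minimal sketch of enforcing a hard risk bound through the action space by masking infeasible actions before sampling; PyTorch is assumed, and `risk` is an illustrative per-action risk estimate, not the paper's model:

```python
import torch

def safe_action_distribution(logits, risk, risk_bound):
    """Zero out the probability of any action whose risk exceeds the hard bound."""
    mask = risk <= risk_bound  # feasible actions
    masked_logits = logits.masked_fill(~mask, float("-inf"))
    return torch.distributions.Categorical(logits=masked_logits)

logits = torch.randn(6)
risk = torch.tensor([0.1, 0.5, 0.9, 0.2, 0.7, 0.05])
action = safe_action_distribution(logits, risk, risk_bound=0.3).sample()
```

Because infeasible actions receive zero probability mass, the risk bound holds at every step by construction rather than only in expectation.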
no code implementations • 3 Mar 2019 • Andrew Perrault, Bryan Wilder, Eric Ewing, Aditya Mate, Bistra Dilkina, Milind Tambe
Stackelberg security games are a critical tool for maximizing the utility of limited defense resources to protect important targets from an intelligent adversary.
no code implementations • 13 May 2015 • Andrew Perrault, Joanna Drummond, Fahiem Bacchus
The Stable Matching Problem with Couples (SMP-C) is a ubiquitous real-world extension of the stable matching problem (SMP) involving complementarities.
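For reference, a sketch of Gale-Shapley deferred acceptance for the base SMP (without couples); the couples extension is NP-hard and requires different machinery, which this classic algorithm does not provide:

```python
def gale_shapley(prop_prefs, resp_prefs):
    """prop_prefs[p] / resp_prefs[r]: preference-ordered lists of partners."""
    n = len(prop_prefs)
    next_choice = [0] * n  # index of the next partner each proposer will try
    match = {}             # responder -> current proposer
    rank = [{p: i for i, p in enumerate(prefs)} for prefs in resp_prefs]
    free = list(range(n))
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:  # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)  # rejected: p proposes again later
    return {p: r for r, p in match.items()}
```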