Search Results for author: Minqi Jiang

Found 32 papers, 24 papers with code

Rainbow Teaming: Open-Ended Generation of Diverse Adversarial Prompts

no code implementations • 26 Feb 2024 • Mikayel Samvelyan, Sharath Chandra Raparthy, Andrei Lupu, Eric Hambro, Aram H. Markosyan, Manish Bhatt, Yuning Mao, Minqi Jiang, Jack Parker-Holder, Jakob Foerster, Tim Rocktäschel, Roberta Raileanu

As large language models (LLMs) become increasingly prevalent across many real-world applications, understanding and enhancing their robustness to user inputs is of paramount importance.

Question Answering

Refining Minimax Regret for Unsupervised Environment Design

1 code implementation • 19 Feb 2024 • Michael Beukman, Samuel Coward, Michael Matthews, Mattie Fellows, Minqi Jiang, Michael Dennis, Jakob Foerster

In this work, we introduce Bayesian level-perfect MMR (BLP), a refinement of the minimax regret objective that overcomes this limitation.
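For context, the minimax regret objective that BLP refines (the standard objective in unsupervised environment design; the notation below is illustrative, not taken from the paper) can be written as:

```latex
% Minimax regret over environment parameters \theta and agent policies \pi:
\pi^{*} \in \arg\min_{\pi} \max_{\theta} \; \mathrm{Regret}_{\theta}(\pi),
\qquad
\mathrm{Regret}_{\theta}(\pi) \;=\; \max_{\pi'} V_{\theta}(\pi') \;-\; V_{\theta}(\pi)
```

where $V_{\theta}(\pi)$ denotes the expected return of policy $\pi$ on the environment instance (level) parameterized by $\theta$.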

Learning to Act without Actions

1 code implementation • 17 Dec 2023 • Dominik Schmidt, Minqi Jiang

LAPO takes a first step towards pre-training powerful, generalist policies and world models on the vast amounts of videos readily available on the web.

Reinforcement Learning (RL)

The Generalization Gap in Offline Reinforcement Learning

1 code implementation • 10 Dec 2023 • Ishita Mediratta, Qingfei You, Minqi Jiang, Roberta Raileanu

Our experiments reveal that existing offline learning algorithms struggle to match the performance of online RL on both train and test environments.

Offline RL • reinforcement-learning +1

Learning Curricula in Open-Ended Worlds

1 code implementation • 3 Dec 2023 • Minqi Jiang

Deep reinforcement learning (RL) provides powerful methods for training optimal sequential decision-making agents.

Decision Making • Reinforcement Learning (RL)

minimax: Efficient Baselines for Autocurricula in JAX

1 code implementation • 21 Nov 2023 • Minqi Jiang, Michael Dennis, Edward Grefenstette, Tim Rocktäschel

This compute requirement is a major obstacle to rapid innovation for the field.

Decision Making

ADGym: Design Choices for Deep Anomaly Detection

1 code implementation • NeurIPS 2023 • Minqi Jiang, Chaochuan Hou, Ao Zheng, Songqiao Han, Hailiang Huang, Qingsong Wen, Xiyang Hu, Yue Zhao

Deep learning (DL) techniques have recently found success in anomaly detection (AD) across various fields such as finance, medical services, and cloud computing.

Anomaly Detection • Cloud Computing

Stabilizing Unsupervised Environment Design with a Learned Adversary

1 code implementation • 21 Aug 2023 • Ishita Mediratta, Minqi Jiang, Jack Parker-Holder, Michael Dennis, Eugene Vinitsky, Tim Rocktäschel

As a result, we make it possible for PAIRED to match or exceed state-of-the-art methods, producing robust agents in several established challenging procedurally-generated environments, including a partially-observed maze navigation task and a continuous-control car racing environment.

Car Racing • Reinforcement Learning (RL)

Anomaly Detection with Score Distribution Discrimination

1 code implementation • 26 Jun 2023 • Minqi Jiang, Songqiao Han, Hailiang Huang

In this paper, we propose to optimize the anomaly scoring function from the view of score distribution, thus better retaining the diversity and more fine-grained information of input data, especially when the unlabeled data contains anomaly noises in more practical AD scenarios.

Anomaly Detection

Reward-Free Curricula for Training Robust World Models

1 code implementation • 15 Jun 2023 • Marc Rigter, Minqi Jiang, Ingmar Posner

We consider robustness in terms of minimax regret over all environment instantiations and show that the minimax regret can be connected to minimising the maximum error in the world model across environment instances.

A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs

2 code implementations • 5 Jun 2023 • Mikael Henaff, Minqi Jiang, Roberta Raileanu

This results in an algorithm which sets a new state of the art across 16 tasks from the MiniHack suite used in prior work, and also performs robustly on Habitat and Montezuma's Revenge.

Montezuma's Revenge

MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning

no code implementations • 6 Mar 2023 • Mikayel Samvelyan, Akbir Khan, Michael Dennis, Minqi Jiang, Jack Parker-Holder, Jakob Foerster, Roberta Raileanu, Tim Rocktäschel

Open-ended learning methods that automatically generate a curriculum of increasingly challenging tasks serve as a promising avenue toward generally capable reinforcement learning agents.

Continuous Control • Multi-agent Reinforcement Learning +2

Weakly Supervised Anomaly Detection: A Survey

2 code implementations • 9 Feb 2023 • Minqi Jiang, Chaochuan Hou, Ao Zheng, Xiyang Hu, Songqiao Han, Hailiang Huang, Xiangnan He, Philip S. Yu, Yue Zhao

Anomaly detection (AD) is a crucial task in machine learning with various applications, such as detecting emerging diseases, identifying financial frauds, and detecting fake news.

Supervised Anomaly Detection • Time Series +2

General Intelligence Requires Rethinking Exploration

no code implementations • 15 Nov 2022 • Minqi Jiang, Tim Rocktäschel, Edward Grefenstette

We are at the cusp of a transition from "learning from data" to "learning what data to learn from" as a central focus of artificial intelligence (AI) research.

reinforcement-learning • Reinforcement Learning (RL)

Exploration via Elliptical Episodic Bonuses

2 code implementations • 11 Oct 2022 • Mikael Henaff, Roberta Raileanu, Minqi Jiang, Tim Rocktäschel

In recent years, a number of reinforcement learning (RL) methods have been proposed to explore complex environments which differ across episodes.

Reinforcement Learning (RL)
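The elliptical episodic bonus can be sketched as follows (an illustrative reconstruction, assuming the bonus takes the form b(s_t) = φ(s_t)ᵀ C⁻¹ φ(s_t), where C is a regularized sum of outer products of feature embeddings seen earlier in the episode; the function name and regularization parameter are ours):

```python
import numpy as np

def elliptical_bonus(features, lam=0.1):
    """Compute per-step elliptical episodic bonuses for one episode.

    features: (T, d) array of feature embeddings phi(s_1), ..., phi(s_T).
    lam: ridge regularizer initializing the episodic covariance matrix.
    Returns a list of T bonuses; a feature direction visited repeatedly
    within the episode receives a shrinking bonus.
    """
    d = features.shape[1]
    C = lam * np.eye(d)                       # regularized episodic covariance
    bonuses = []
    for phi in features:
        # Bonus is the Mahalanobis-style quadratic form phi^T C^{-1} phi.
        bonuses.append(float(phi @ np.linalg.solve(C, phi)))
        C += np.outer(phi, phi)               # update C after scoring the step
    return bonuses
```

Because C is reset at the start of each episode, the bonus rewards novelty within the current episode rather than across the whole of training, which is what makes it suitable for environments that differ across episodes.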

GriddlyJS: A Web IDE for Reinforcement Learning

no code implementations • 13 Jul 2022 • Christopher Bamford, Minqi Jiang, Mikayel Samvelyan, Tim Rocktäschel

Progress in reinforcement learning (RL) research is often driven by the design of new, challenging environments -- a costly undertaking requiring skills orthogonal to those of a typical machine learning researcher.

Offline RL • reinforcement-learning +1

Grounding Aleatoric Uncertainty for Unsupervised Environment Design

1 code implementation • 11 Jul 2022 • Minqi Jiang, Michael Dennis, Jack Parker-Holder, Andrei Lupu, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel, Jakob Foerster

Problematically, in partially-observable or stochastic settings, optimal policies may depend on the ground-truth distribution over aleatoric parameters of the environment in the intended deployment setting, while curriculum learning necessarily shifts the training distribution.

Reinforcement Learning (RL)

Evolving Curricula with Regret-Based Environment Design

3 code implementations • 2 Mar 2022 • Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel

Our approach, which we call Adversarially Compounding Complexity by Editing Levels (ACCEL), seeks to constantly produce levels at the frontier of an agent's capabilities, resulting in curricula that start simple but become increasingly complex.

Reinforcement Learning (RL)

Replay-Guided Adversarial Environment Design

4 code implementations • NeurIPS 2021 • Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel

Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria.

Reinforcement Learning (RL)

MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research

1 code implementation • 27 Sep 2021 • Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel

By leveraging the full set of entities and environment dynamics from NetHack, one of the richest grid-based video games, MiniHack allows designing custom RL testbeds that are fast and convenient to use.

NetHack • reinforcement-learning +2

Return Dispersion as an Estimator of Learning Potential for Prioritized Level Replay

no code implementations • NeurIPS Workshop ICBINB 2021 • Iryna Korshunova, Minqi Jiang, Jack Parker-Holder, Tim Rocktäschel, Edward Grefenstette

Prioritized Level Replay (PLR) has been shown to induce adaptive curricula that improve the sample-efficiency and generalization of reinforcement learning policies in environments featuring multiple tasks or levels.

reinforcement-learning • Reinforcement Learning (RL)

Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning

1 code implementation • 8 Feb 2021 • Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel

In this work, we show that we can incorporate relational inductive biases, encoded in the form of relational graphs, into agents.

reinforcement-learning • Reinforcement Learning (RL)

Prioritized Level Replay

4 code implementations • 8 Oct 2020 • Minqi Jiang, Edward Grefenstette, Tim Rocktäschel

Environments with procedurally generated content serve as important benchmarks for testing systematic generalization in deep reinforcement learning.

Systematic Generalization
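The core mechanism of Prioritized Level Replay can be sketched as a level buffer that mixes sampling unseen levels with replaying seen levels in proportion to an estimate of their learning potential (e.g. a value-loss-based regret estimate). This is a minimal illustrative sketch, not the authors' implementation; the class and parameter names are ours:

```python
import random

class LevelBuffer:
    """Minimal sketch of a prioritized level replay buffer."""

    def __init__(self, replay_prob=0.5):
        self.scores = {}              # level_id -> learning-potential score
        self.replay_prob = replay_prob

    def update(self, level_id, score):
        # Record the latest learning-potential estimate for this level,
        # e.g. an L1 value-loss computed from the most recent rollout.
        self.scores[level_id] = score

    def sample(self, new_level_fn):
        # If the buffer is empty, or with probability (1 - replay_prob),
        # draw an unseen level; otherwise replay a seen level with
        # probability proportional to its score.
        if not self.scores or random.random() > self.replay_prob:
            return new_level_fn()
        levels = list(self.scores)
        weights = [self.scores[lvl] for lvl in levels]
        return random.choices(levels, weights=weights, k=1)[0]
```

Levels where the agent's value estimates are poorest are replayed most often, which concentrates training on levels at the frontier of the agent's current capabilities.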

WordCraft: An Environment for Benchmarking Commonsense Agents

1 code implementation • ICML Workshop LaReL 2020 • Minqi Jiang, Jelena Luketina, Nantas Nardelli, Pasquale Minervini, Philip H. S. Torr, Shimon Whiteson, Tim Rocktäschel

This is partly due to the lack of lightweight simulation environments that sufficiently reflect the semantics of the real world and provide knowledge sources grounded with respect to observations in an RL environment.

Benchmarking • Knowledge Graphs +2
