Search Results for author: Yifang Chen

Found 7 papers, 0 papers with code

Corruption Robust Active Learning

no code implementations21 Jun 2021 Yifang Chen, Simon S. Du, Kevin Jamieson

We conduct theoretical studies on streaming-based active learning for binary classification under unknown adversarial label corruptions.

Active Learning
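The abstract above concerns streaming-based active learning, where an algorithm sees examples one at a time and decides which labels to request. As a point of reference only, here is a minimal uncertainty-sampling sketch for a linear classifier on a stream; it illustrates the query-budget idea but is not the corruption-robust algorithm from the paper, and all names (`stream_active_learning`, the `margin` threshold) are illustrative.

```python
import random

def stream_active_learning(stream, margin=0.2, lr=0.5):
    """Toy streaming active learner: query a label only when the current
    linear model is uncertain about the incoming point (score near zero).
    Illustrative sketch only -- not the paper's corruption-robust method."""
    w = [0.0, 0.0]
    queries = 0
    for x, y in stream:
        score = w[0] * x[0] + w[1] * x[1]
        if abs(score) < margin:          # uncertain region: ask for the label
            queries += 1
            if score * y <= 0:           # perceptron-style update on mistakes
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
    return w, queries

random.seed(0)
# Linearly separable stream: label is the sign of the first coordinate.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
stream = [((x1, x2), 1 if x1 > 0 else -1) for x1, x2 in points]
w, queries = stream_active_learning(stream)
print(queries)
```

On a separable stream the learner typically queries far fewer than all 200 labels, which is the point of the active setting; the paper studies how such guarantees degrade when an adversary corrupts some of the returned labels.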

Improved Corruption Robust Algorithms for Episodic Reinforcement Learning

no code implementations13 Feb 2021 Yifang Chen, Simon S. Du, Kevin Jamieson

We study episodic reinforcement learning under unknown adversarial corruptions in both the rewards and the transition probabilities of the underlying system.

More Practical and Adaptive Algorithms for Online Quantum State Learning

no code implementations1 Jun 2020 Yifang Chen, Xin Wang

The resulting regret bound depends only on the maximum rank $M$ of the measurements, rather than on the number of qubits, thereby exploiting low-rank measurements.


Fair Contextual Multi-Armed Bandits: Theory and Experiments

no code implementations13 Dec 2019 Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis

We introduce a Multi-Armed Bandit algorithm with fairness constraints, where fairness is defined as a minimum rate that a task or a resource is assigned to a user.

Decision Making Fairness
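The abstract above defines fairness as a minimum rate at which each arm (task or resource) must be assigned. A minimal sketch of that constraint, assuming a simple rule layered on UCB: whenever an arm's empirical pull rate falls below its required minimum, pull it; otherwise pull the UCB-optimal arm. This is an illustration of the fairness notion, not the algorithm from the paper; `fair_ucb` and its parameters are made-up names.

```python
import math
import random

def fair_ucb(means, min_rates, horizon=5000, seed=1):
    """UCB with per-arm minimum pull-rate constraints (toy sketch)."""
    rng = random.Random(seed)
    k = len(means)
    pulls = [0] * k
    rewards = [0.0] * k
    for t in range(1, horizon + 1):
        # Fairness check: serve any arm lagging its guaranteed minimum rate.
        lagging = [i for i in range(k) if pulls[i] < min_rates[i] * t]
        if lagging:
            arm = lagging[0]
        else:
            def ucb(i):
                if pulls[i] == 0:
                    return float("inf")   # force one initial pull per arm
                return rewards[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
            arm = max(range(k), key=ucb)
        pulls[arm] += 1
        rewards[arm] += 1.0 if rng.random() < means[arm] else 0.0
    return pulls

pulls = fair_ucb(means=[0.9, 0.2, 0.5], min_rates=[0.0, 0.1, 0.1])
rates = [p / 5000 for p in pulls]
print(rates)
```

Even though arms 1 and 2 have lower mean rewards, the constraint keeps their assignment rates near the 10% minimum, while the remaining pulls concentrate on the best arm.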

Multi-Armed Bandits with Fairness Constraints for Distributing Resources to Human Teammates

no code implementations30 Jun 2019 Houston Claure, Yifang Chen, Jignesh Modi, Malte Jung, Stefanos Nikolaidis

How should a robot that collaborates with multiple people decide upon the distribution of resources (e.g., social attention or parts needed for an assembly)?

Fairness Multi-Armed Bandits

A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free

no code implementations3 Feb 2019 Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei

We propose the first contextual bandit algorithm that is parameter-free, efficient, and optimal in terms of dynamic regret.

Multi-Armed Bandits
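The abstract above targets dynamic regret, where the environment's reward distributions may drift over time. For context, a standard (non-parameter-free) baseline for this setting is sliding-window UCB, sketched below: statistics are computed over only the most recent rounds so the policy can track a drifting best arm. Note that this baseline needs a window-size parameter, which is exactly what the paper's parameter-free algorithm avoids; `sliding_window_ucb` and its arguments are illustrative names.

```python
import math
import random
from collections import deque

def sliding_window_ucb(arm_means_fn, k, horizon, window=200, seed=0):
    """Sliding-window UCB baseline for non-stationary bandits (toy sketch)."""
    rng = random.Random(seed)
    history = deque()  # (arm, reward) pairs inside the current window
    reward_sum = 0.0
    for t in range(horizon):
        # Recompute per-arm statistics over the window (O(window) per round,
        # fine for a sketch).
        pulls = [0] * k
        sums = [0.0] * k
        for arm, r in history:
            pulls[arm] += 1
            sums[arm] += r
        def ucb(i):
            if pulls[i] == 0:
                return float("inf")
            return sums[i] / pulls[i] + math.sqrt(
                2 * math.log(len(history) + 1) / pulls[i])
        arm = max(range(k), key=ucb)
        r = 1.0 if rng.random() < arm_means_fn(t)[arm] else 0.0
        reward_sum += r
        history.append((arm, r))
        if len(history) > window:
            history.popleft()   # forget stale rounds
    return reward_sum

# Abrupt change: arm 0 is best for the first half, arm 1 for the second half.
means = lambda t: [0.8, 0.2] if t < 1000 else [0.2, 0.8]
total = sliding_window_ucb(means, k=2, horizon=2000)
print(total)
```

Because stale rounds are forgotten, the policy recovers after the change point and collects more reward than any single fixed arm (which would earn about 1000 in expectation here).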
