Search Results for author: Evan Zheran Liu

Found 15 papers, 10 papers with code

AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models

1 code implementation • 11 Jul 2024 • Xiang Lisa Li, Evan Zheran Liu, Percy Liang, Tatsunori Hashimoto

In this paper, we present three desiderata for a good benchmark for language models: (i) salience (e.g., knowledge about World War II is more salient than a random day in history), (ii) novelty (i.e., the benchmark reveals new trends in model rankings not shown by previous benchmarks), and (iii) difficulty (i.e., the benchmark should be difficult for existing models, leaving headroom for future improvement).

Language Modelling · Math · +2

Simple Embodied Language Learning as a Byproduct of Meta-Reinforcement Learning

no code implementations • 14 Jun 2023 • Evan Zheran Liu, Sahaana Suri, Tong Mu, Allan Zhou, Chelsea Finn

Specifically, we design an office navigation environment, where the agent's goal is to find a particular office, and office locations differ in different buildings (i.e., tasks).

Meta Reinforcement Learning · Navigate · +2
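
As a rough illustration of the environment family described above, the sketch below invents a toy office-navigation task in which the office-to-location mapping is reshuffled per building; the class name, layout, and API are assumptions made for illustration, not taken from the paper.

```python
# Hypothetical toy version of such an office-navigation task family: each seed
# ("building") reshuffles which office sits at which location, so the agent must
# explore rather than memorize a fixed position.
import random

class OfficeNavTask:
    LOCATIONS = [(0, 0), (0, 4), (4, 0), (4, 4)]
    OFFICES = ["alice", "bob", "carol", "dave"]

    def __init__(self, seed):
        rng = random.Random(seed)
        layout = list(self.LOCATIONS)
        rng.shuffle(layout)
        self.office_at = dict(zip(self.OFFICES, layout))  # task-specific layout
        self.goal = rng.choice(self.OFFICES)              # office the agent must find
        self.pos = (2, 2)                                 # start in the middle of a 5x5 grid

    def step(self, move):
        dx, dy = move
        self.pos = (min(4, max(0, self.pos[0] + dx)),
                    min(4, max(0, self.pos[1] + dy)))
        reward = 1.0 if self.pos == self.office_at[self.goal] else 0.0
        return self.pos, reward

task = OfficeNavTask(seed=7)      # a different seed plays the role of a different building
obs, reward = task.step((1, 0))   # explore: move one cell to the right
```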

A Survey of Meta-Reinforcement Learning

no code implementations • 19 Jan 2023 • Jacob Beck, Risto Vuorio, Evan Zheran Liu, Zheng Xiong, Luisa Zintgraf, Chelsea Finn, Shimon Whiteson

Meta-RL is most commonly studied in a problem setting where, given a distribution of tasks, the goal is to learn a policy that is capable of adapting to any new task from the task distribution with as little data as possible.

Deep Reinforcement Learning · Meta Reinforcement Learning · +3
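
As an illustration of this problem setting, the sketch below runs a Reptile-style meta-learning loop on a toy bandit task distribution; it is one simple instantiation under assumed hyperparameters, not an algorithm taken from the survey.

```python
# Illustrative sketch (not from the survey): a Reptile-style meta-learning loop
# on a toy bandit task distribution, where each task is "which of K arms pays off".
import numpy as np

rng = np.random.default_rng(0)
K = 5                                   # number of arms
meta_params = np.zeros(K)               # meta-learned initialization of arm preferences

def adapt(params, best_arm, episodes=20, lr=0.5):
    """Inner loop: adapt a copy of the parameters to one task with REINFORCE."""
    params = params.copy()
    for _ in range(episodes):
        probs = np.exp(params) / np.exp(params).sum()
        arm = rng.choice(K, p=probs)
        reward = 1.0 if arm == best_arm else 0.0
        grad = -probs
        grad[arm] += 1.0                # gradient of log pi(arm) w.r.t. preferences
        params += lr * reward * grad    # policy-gradient step on this task
    return params

for _ in range(500):                    # outer loop over sampled tasks
    task = rng.integers(K)              # sample a task from the task distribution
    adapted = adapt(meta_params, task)
    meta_params += 0.1 * (adapted - meta_params)   # Reptile-style meta-update
```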

Learning Options via Compression

1 code implementation • 8 Dec 2022 • Yiding Jiang, Evan Zheran Liu, Benjamin Eysenbach, Zico Kolter, Chelsea Finn

Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks.

Giving Feedback on Interactive Student Programs with Meta-Exploration

1 code implementation • 16 Nov 2022 • Evan Zheran Liu, Moritz Stephan, Allen Nie, Chris Piech, Emma Brunskill, Chelsea Finn

However, teaching and giving feedback on such software is time-consuming -- standard approaches require instructors to manually grade student-implemented interactive programs.

Analyzing a Caching Model

no code implementations • 13 Dec 2021 • Leon Sixt, Evan Zheran Liu, Marie Pellat, James Wexler, Milad Hashemi, Been Kim, Martin Maas

Machine Learning has been successfully applied in systems applications such as memory prefetching and caching, where learned models have been shown to outperform heuristics.

Just Train Twice: Improving Group Robustness without Training Group Information

1 code implementation • 19 Jul 2021 • Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn

Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label.

Image Classification · Out-of-Distribution Generalization
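
Consistent with the paper's title, a two-stage, error-upweighting scheme can be sketched as below; the model class and upweighting constant are illustrative assumptions rather than the paper's exact recipe.

```python
# Illustrative two-stage sketch: fit a standard ERM model, then retrain with the
# examples the first model misclassified upweighted. Constants are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def just_train_twice(X, y, upweight=20.0):
    first = LogisticRegression(max_iter=1000).fit(X, y)   # stage 1: plain ERM
    errors = first.predict(X) != y                         # identify misclassified points
    weights = np.where(errors, upweight, 1.0)              # upweight the error set
    second = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
    return second                                          # stage 2: retrained model
```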

Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices

2 code implementations • 6 Aug 2020 • Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn

Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task.

Meta Reinforcement Learning · reinforcement-learning · +3

Learning Abstract Models for Strategic Exploration and Fast Reward Transfer

1 code implementation • 12 Jul 2020 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

Model-based reinforcement learning (RL) is appealing because (i) it enables planning and thus more strategic exploration, and (ii) by decoupling dynamics from rewards, it enables fast transfer to new reward functions.

Model-based Reinforcement Learning · Montezuma's Revenge · +2
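
Point (ii), decoupling dynamics from rewards, can be illustrated with tabular value iteration: once the transition model is fixed, a new reward function only requires re-planning, not relearning the dynamics. The chain dynamics and reward vectors below are invented for this sketch and are not the paper's abstraction.

```python
# Generic illustration of reward transfer with a fixed dynamics model: value
# iteration re-plans for each new reward without relearning the transitions.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))    # P[s, a, s'] = transition probability
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0                 # action 0 moves left along a chain
    P[s, 1, min(s + 1, n_states - 1)] = 1.0      # action 1 moves right

def plan(P, reward, iters=100):
    """Value iteration against a given per-state reward vector."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = np.max(P @ (reward + gamma * V), axis=-1)
    return V

V_right_goal = plan(P, np.eye(n_states)[-1])     # reward only in the rightmost state
V_left_goal = plan(P, np.eye(n_states)[0])       # new reward: re-plan, same dynamics
```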

An Imitation Learning Approach for Cache Replacement

1 code implementation • ICML 2020 • Evan Zheran Liu, Milad Hashemi, Kevin Swersky, Parthasarathy Ranganathan, Junwhan Ahn

While directly applying Belady's is infeasible since the future is unknown, we train a policy conditioned only on past accesses that accurately approximates Belady's even on diverse and complex access patterns, and call this approach Parrot.

Imitation Learning
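
Belady's policy referenced above is well defined whenever the full access trace is visible: evict the cached line whose next use lies farthest in the future. The sketch below implements that oracle for counting misses; it is background illustration, not the paper's learned Parrot policy.

```python
# Belady's oracle for cache replacement: with the whole future trace visible,
# evict the resident line whose next use is farthest away (or never occurs).
def belady_misses(trace, capacity):
    cache, misses = set(), 0
    for i, line in enumerate(trace):
        if line in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(c):
                later = [j for j in range(i + 1, len(trace)) if trace[j] == c]
                return later[0] if later else float("inf")
            cache.remove(max(cache, key=next_use))   # evict the farthest-reused line
        cache.add(line)
    return misses

print(belady_misses(["a", "b", "c", "a", "d", "a", "b"], capacity=2))  # -> 5 misses
```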

Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning

no code implementations • ICML Workshop LifelongML 2020 • Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn

In principle, meta-reinforcement learning approaches can exploit this shared structure, but in practice, they fail to adapt to new environments when adaptation requires targeted exploration (e.g., exploring the cabinets to find ingredients in a new kitchen).

Meta Reinforcement Learning · reinforcement-learning · +2

Learning Abstract Models for Long-Horizon Exploration

no code implementations • ICLR 2019 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP).

Atari Games · Reinforcement Learning

Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration

5 code implementations • ICLR 2018 • Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang

Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.

reinforcement-learning · Reinforcement Learning · +1

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

3 code implementations • ACL 2017 • Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang

Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself.

reinforcement-learning · Reinforcement Learning · +2
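
Under this kind of indirect supervision, maximum marginal likelihood sums probability over every candidate program whose execution matches the labeled result; the toy objective below illustrates that computation with made-up candidate scores.

```python
# Toy maximum marginal likelihood objective: the loss only depends on whether a
# candidate program executes to the labeled result, marginalizing over all that do.
import math

def mml_loss(program_logprobs, executes_to_gold):
    """program_logprobs: log p(z | utterance) for each candidate program z.
    executes_to_gold: whether each program's execution matches the labeled answer."""
    consistent = [lp for lp, ok in zip(program_logprobs, executes_to_gold) if ok]
    if not consistent:
        return float("inf")             # no candidate program reaches the gold answer
    m = max(consistent)                 # stable log-sum-exp over consistent programs
    marginal = m + math.log(sum(math.exp(lp - m) for lp in consistent))
    return -marginal                    # negative log marginal likelihood

loss = mml_loss([-0.5, -1.2, -2.3], [True, False, True])  # hypothetical candidates
```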
