Search Results for author: Sungryull Sohn

Found 18 papers, 8 papers with code

TOD-Flow: Modeling the Structure of Task-Oriented Dialogues

1 code implementation • 7 Dec 2023 • Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee

Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.

Dialog Act Classification • Response Generation
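
The can/should/should-not idea can be illustrated with a toy sketch. This is not the paper's implementation; the graph class, state names, and dialog acts below are all hypothetical, but they show how three relation types can prune a model's prediction space.

```python
# Hypothetical sketch of a TOD-Flow-style graph: per dialogue state,
# "can", "should", and "should_not" relations over dialog acts prune
# the candidate acts a model may predict. All names are illustrative.

class TodFlowGraph:
    def __init__(self):
        self.can = {}         # state -> acts that are permitted
        self.should = {}      # state -> acts that are expected
        self.should_not = {}  # state -> acts to rule out

    def add_rule(self, state, act, relation):
        getattr(self, relation).setdefault(state, set()).add(act)

    def filter_candidates(self, state, candidates):
        """Reduce the search space: drop acts that are not permitted or are
        explicitly ruled out; prefer 'should' acts when any are present."""
        allowed = [a for a in candidates
                   if a in self.can.get(state, set())
                   and a not in self.should_not.get(state, set())]
        preferred = [a for a in allowed if a in self.should.get(state, set())]
        return preferred or allowed

g = TodFlowGraph()
g.add_rule("after_greeting", "request_info", "can")
g.add_rule("after_greeting", "request_info", "should")
g.add_rule("after_greeting", "goodbye", "can")
g.add_rule("after_greeting", "confirm_booking", "should_not")
print(g.filter_candidates("after_greeting",
                          ["request_info", "goodbye", "confirm_booking"]))
# ['request_info']
```

A downstream dialogue model would then score only the filtered acts, which is one way the graph can both shrink the search space and serve as a rationale for a prediction.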

Code Models are Zero-shot Precondition Reasoners

no code implementations • 16 Nov 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee

One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.

Decision Making

From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning

1 code implementation • 24 Oct 2023 • Zheyuan Zhang, Shane Storks, Fengyuan Hu, Sungryull Sohn, Moontae Lee, Honglak Lee, Joyce Chai

We incorporate these interlinked dual processes in fine-tuning and in-context learning with PLMs, applying them to two language understanding tasks that require coherent physical commonsense reasoning.

In-Context Learning • Physical Commonsense Reasoning

A Picture is Worth a Thousand Words: Language Models Plan from Pixels

no code implementations • 16 Mar 2023 • Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee

Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization

no code implementations • 25 May 2022 • Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Lyubing Qiang, Izzeddin Gur, Aleksandra Faust, Honglak Lee

Unlike previous meta-RL methods, which try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks, and uses it as a prior to improve task inference at test time.

Hierarchical Reinforcement Learning • Meta Reinforcement Learning +2

Learning Parameterized Task Structure for Generalization to Unseen Entities

1 code implementation • 28 Mar 2022 • Anthony Z. Liu, Sungryull Sohn, Mahdi Qazwini, Honglak Lee

These subtasks are defined in terms of entities (e.g., "apple", "pear") that can be recombined to form new subtasks (e.g., "pickup apple", and "pickup pear").

Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

1 code implementation • NeurIPS 2021 • Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee

SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state-novelty and to enable high-level planning by abstracting the state-space as a non-parametric landmark-based graph.

Efficient Exploration • reinforcement-learning +1
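
The high-level planning half of the description can be illustrated with a toy sketch. This is not SFL itself: in the paper, landmark selection and connectivity come from successor features, whereas here both are hard-coded, and all state names are hypothetical.

```python
# Illustrative sketch: abstract the state space as a non-parametric
# landmark graph and plan a coarse route between landmarks with BFS.
# In SFL the landmarks and edges would be derived from successor
# features; here they are hand-written for illustration.
from collections import deque

edges = {
    "start": ["door"],
    "door": ["start", "key_room", "goal"],
    "key_room": ["door"],
    "goal": ["door"],
}

def plan(src, dst):
    """BFS over the landmark graph: returns a landmark-level route that a
    low-level goal-conditioned policy would then follow hop by hop."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in edges[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None  # destination landmark unreachable

print(plan("start", "goal"))
# ['start', 'door', 'goal']
```

The point of the abstraction is that long-horizon goals reduce to short hops between landmarks, each of which the low-level policy can reach reliably.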

Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks

1 code implementation • 13 Jul 2021 • Sungryull Sohn, Sungtae Lee, Jongwook Choi, Harm van Seijen, Mehdi Fatemi, Honglak Lee

We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the agent's trajectory that improves the sample efficiency in sparse-reward MDPs.

Continuous Control • reinforcement-learning +1
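
One way to read a shortest-path constraint on a trajectory can be sketched as a simple check: within any k-step window, the agent should not have taken a detour, i.e. no shorter route should exist between the window's endpoints. This is an illustrative reading, not the paper's formulation; the graph and trajectories below are made up.

```python
# Hedged sketch of a k-shortest-path-style trajectory check on an
# unweighted state graph. A window of k steps violates the check if the
# true shortest path between its endpoints is shorter than k, meaning
# the agent wandered instead of making progress.
from collections import deque

def shortest_path_len(adj, s, t):
    """BFS shortest-path length between states s and t."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return float("inf")

def satisfies_k_sp(adj, trajectory, k):
    """Every k-step segment of the trajectory must itself be a shortest
    path between its endpoints (no detours, no revisiting loops)."""
    for i in range(len(trajectory) - k):
        s, t = trajectory[i], trajectory[i + k]
        if shortest_path_len(adj, s, t) < k:
            return False
    return True

line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(satisfies_k_sp(line, [0, 1, 2, 3], 2))  # True: always moving forward
print(satisfies_k_sp(line, [0, 1, 0, 1], 2))  # False: oscillating in place
```

Pruning such wandering trajectories shrinks the set of behaviors the agent must consider, which is the intuition behind improved sample efficiency under sparse rewards.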

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment

no code implementations • 28 Oct 2020 • Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.

Object • Reinforcement Learning (RL) +1

BRPO: Batch Residual Policy Optimization

no code implementations • 8 Feb 2020 • Sungryull Sohn, Yin-Lam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier

In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by some maximum degree that is the same at each state.

reinforcement-learning • Reinforcement Learning (RL)

Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies

1 code implementation • ICLR 2020 • Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent.

Efficient Exploration • Meta Reinforcement Learning +4
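
A subtask graph of the kind described can be sketched as a dependency map with AND-preconditions: a subtask becomes eligible once all of its precondition subtasks are complete. The concrete subtasks below are hypothetical (and in the paper the graph is unknown to the agent and must be inferred), but the eligibility logic shows what such dependencies encode.

```python
# Hedged sketch of a subtask graph with AND-preconditions. A subtask is
# eligible when it is not yet done and all of its preconditions are done.
# The graph here is hand-written purely for illustration.

subtask_preconditions = {
    "get_wood": set(),
    "get_stone": set(),
    "make_axe": {"get_wood", "get_stone"},
    "chop_tree": {"make_axe"},
}

def eligible_subtasks(completed):
    """Return the subtasks the agent could execute next, sorted for
    deterministic output."""
    return sorted(
        s for s, pre in subtask_preconditions.items()
        if s not in completed and pre <= completed
    )

print(eligible_subtasks(set()))
# ['get_stone', 'get_wood']
print(eligible_subtasks({"get_wood", "get_stone"}))
# ['make_axe']
```

In the few-shot setting, the agent would try subtasks, observe which ones succeed, and use those observations to infer a dependency structure like the dictionary above rather than being given it.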

Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies

1 code implementation • NeurIPS 2018 • Sungryull Sohn, Junhyuk Oh, Honglak Lee

We introduce a new RL problem where the agent is required to generalize to a previously-unseen environment characterized by a subtask graph which describes a set of subtasks and their dependencies.

Hierarchical Reinforcement Learning • Network Embedding +3

Neural Task Graph Execution

no code implementations • ICLR 2018 • Sungryull Sohn, Junhyuk Oh, Honglak Lee

Unlike existing approaches, which explicitly describe what the agent should do, our problem describes only the properties of subtasks and the relationships between them, requiring the agent to perform complex reasoning to find the optimal subtask to execute.

Reinforcement Learning (RL)

Learning to Generate Long-term Future via Hierarchical Prediction

2 code implementations • ICML 2017 • Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

To avoid the compounding errors inherent in recursive pixel-level prediction, we propose to first estimate the high-level structure in the input frames, then predict how that structure evolves in the future, and finally construct the future frames from a single past frame and the predicted high-level structure, without conditioning on any pixel-level predictions.

Video Prediction
