Search Results for author: Jongwook Choi

Found 15 papers, 5 papers with code

Exploiting Style Latent Flows for Generalizing Deepfake Video Detection

no code implementations 11 Mar 2024 Jongwook Choi, TaeHoon Kim, Yonghyun Jeong, Seungryul Baek, Jongwon Choi

This paper presents a new approach to detecting fake videos, based on analyzing style latent vectors and their abnormal temporal changes in generated videos.
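
As a rough illustration of the idea described above (not the authors' implementation), the sketch below assumes a pre-trained, frozen style encoder (the `style_encoder` argument is a placeholder) that maps each frame to a style latent vector; a sequence model then classifies a video from the frame-to-frame changes of those latents. All module names, dimensions, and the GRU-based temporal model are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class StyleFlowDetector(nn.Module):
    """Classify real vs. fake videos from temporal changes of per-frame style latents.

    `style_encoder` stands in for any pre-trained encoder returning one style
    latent vector per frame; it is kept frozen here.
    """
    def __init__(self, style_encoder: nn.Module, latent_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.style_encoder = style_encoder.eval()
        for p in self.style_encoder.parameters():
            p.requires_grad_(False)
        self.temporal = nn.GRU(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single logit: fake vs. real

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        with torch.no_grad():
            latents = self.style_encoder(frames.flatten(0, 1)).view(b, t, -1)
        flows = latents[:, 1:] - latents[:, :-1]   # frame-to-frame style changes
        _, h = self.temporal(flows)                # summarize the temporal dynamics
        return self.head(h[-1]).squeeze(-1)        # abnormal dynamics -> "fake" logit
```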

Contrastive Learning DeepFake Detection +1

Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization

no code implementations 25 May 2022 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, lyubing qiang, Izzeddin Gur, Aleksandra Faust, Honglak Lee

Unlike previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks, and uses it as a prior to improve task inference at test time.
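
To make the subtask-graph idea concrete, here is a toy sketch of how such a graph can be represented and estimated from training-task trajectories. The co-occurrence counting heuristic and the example subtask names are mine, not the paper's inference procedure (which works from eligibility and completion data), so treat this only as an illustration of the data structure and of how the result can act as a prior.

```python
from collections import defaultdict
from itertools import combinations

def infer_subtask_graph(trajectories, threshold=0.95):
    """Toy precondition inference: subtask B is treated as a precondition of
    subtask A if, across training trajectories, B is (almost) always completed
    before A. Each trajectory is a list of subtask names in completion order.
    """
    order_counts = defaultdict(int)   # (B, A) -> how often B finished before A
    pair_counts = defaultdict(int)    # (B, A) -> how often both appeared together
    for traj in trajectories:
        position = {task: i for i, task in enumerate(traj)}
        for a, b in combinations(position, 2):
            for pre, post in ((a, b), (b, a)):
                pair_counts[(pre, post)] += 1
                if position[pre] < position[post]:
                    order_counts[(pre, post)] += 1
    graph = defaultdict(set)          # subtask -> set of inferred preconditions
    for (pre, post), n in pair_counts.items():
        if order_counts[(pre, post)] / n >= threshold:
            graph[post].add(pre)
    return dict(graph)

# The inferred graph can then act as a prior: at test time, only subtasks whose
# preconditions are already satisfied are proposed to the low-level policy.
prior = infer_subtask_graph([["get wood", "make plank", "make stick"],
                             ["get wood", "make stick", "make plank"]])
print(prior)  # {'make plank': {'get wood'}, 'make stick': {'get wood'}}
```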

Hierarchical Reinforcement Learning Meta Reinforcement Learning +2

Lipschitz-constrained Unsupervised Skill Discovery

no code implementations ICLR 2022 Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim

To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills.

Environment Generation for Zero-Shot Compositional Reinforcement Learning

1 code implementation NeurIPS 2021 Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust

We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.

Navigate reinforcement-learning +1

Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

1 code implementation NeurIPS 2021 Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee

SFL leverages the ability of successor features (SF) to capture transition dynamics, using them to drive exploration by estimating state novelty and to enable high-level planning by abstracting the state space as a non-parametric landmark-based graph.
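
The following is a minimal sketch of how SF can provide both pieces mentioned above: a novelty signal and a landmark graph. The thresholds, the nearest-neighbour novelty measure, and the Euclidean-distance edge rule are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def sf_novelty(sf_current, landmark_sfs):
    """State novelty = distance from the current successor features (SF) to the
    nearest stored landmark's SF; a large distance suggests unfamiliar dynamics."""
    if not landmark_sfs:
        return np.inf
    dists = np.linalg.norm(np.asarray(landmark_sfs) - sf_current, axis=1)
    return float(dists.min())

def update_landmark_graph(landmarks, edges, sf_current, state,
                          add_threshold=1.0, edge_threshold=2.0):
    """Maintain a non-parametric landmark graph in SF space (illustrative thresholds).

    A state is promoted to a landmark when it is sufficiently novel; edges connect
    landmarks whose SFs are close, standing in for an SF-based distance estimate.
    High-level planning can then run graph search over `edges` to choose the next
    landmark to reach.
    """
    novelty = sf_novelty(sf_current, [sf for sf, _ in landmarks])
    if novelty > add_threshold:
        new_idx = len(landmarks)
        landmarks.append((sf_current, state))
        for i, (sf_i, _) in enumerate(landmarks[:-1]):
            if np.linalg.norm(sf_i - sf_current) < edge_threshold:
                edges.add((i, new_idx))
    return landmarks, edges
```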

Efficient Exploration reinforcement-learning +1

Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks

1 code implementation 13 Jul 2021 Sungryull Sohn, Sungtae Lee, Jongwook Choi, Harm van Seijen, Mehdi Fatemi, Honglak Lee

We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the agent's trajectory that improves the sample efficiency in sparse-reward MDPs.
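
The abstract names the constraint but not its implementation; the sketch below is one soft (penalty-based) reading of it, assuming access to a shortest-path distance estimate `sp_distance` (e.g. a learned goal-conditioned distance), which is my stand-in rather than the paper's exact formulation.

```python
def ksp_violation_cost(trajectory, sp_distance, k, cost=1.0):
    """Illustrative soft version of a k-shortest-path constraint.

    For every pair of states at most k steps apart in the trajectory, charge a
    cost if the agent used more steps than the estimated shortest-path distance,
    i.e. the sub-path of length <= k was not a shortest path.
    """
    total = 0.0
    for i in range(len(trajectory)):
        for j in range(i + 1, min(i + k, len(trajectory) - 1) + 1):
            if (j - i) > sp_distance(trajectory[i], trajectory[j]):
                total += cost  # this sub-path could have been shorter
    return total

# In training, such a cost can be subtracted from the return (a soft constraint),
# discouraging redundant detours that waste samples in sparse-reward MDPs.
```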

Continuous Control reinforcement-learning +1

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

no code implementations 2 Jun 2021 Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.
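
For context on the MI-maximization framing, the snippet below shows the standard variational lower bound that skill-discovery methods optimize and the intrinsic reward it induces. This is a textbook construction, not this paper's specific algorithm; the discriminator network and uniform skill prior are assumptions.

```python
import torch
import torch.nn.functional as F

def mi_intrinsic_reward(discriminator, states, skill_ids, num_skills):
    """Generic variational lower bound on mutual information for skill discovery:
        I(S; Z) >= E[log q(z | s)] + H(Z).
    With a uniform skill prior p(z) = 1/num_skills, the per-step intrinsic reward is
        r = log q(z | s) - log p(z),
    where q is a learned discriminator mapping states to skill logits.
    """
    logits = discriminator(states)                          # (batch, num_skills)
    log_q = F.log_softmax(logits, dim=-1)                   # log q(z | s)
    log_p_z = -torch.log(torch.tensor(float(num_skills)))   # log p(z) for uniform z
    chosen = log_q.gather(-1, skill_ids.unsqueeze(-1)).squeeze(-1)
    return chosen - log_p_z
```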

reinforcement-learning Reinforcement Learning (RL) +1

Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies

1 code implementation ICLR 2020 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent.

Efficient Exploration Meta Reinforcement Learning +4

Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks

no code implementations 25 Sep 2019 Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee

We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.
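
As a minimal sketch of the self-imitation ingredient described above, the buffer below keeps a small, diverse set of the agent's own past trajectories and samples one to condition the policy on. The near-duplicate check, similarity threshold, and uniform sampling are my simplifications; the paper's diversity criterion and policy conditioning are more elaborate.

```python
import random

class TrajectoryBuffer:
    """Keep a diverse set of past trajectories to imitate (illustrative only).

    Each entry is (embedding, trajectory, total_reward); a new trajectory replaces
    a stored near-duplicate only if it achieved a higher return.
    """
    def __init__(self, capacity=100):
        self.entries = []
        self.capacity = capacity

    def add(self, embedding, trajectory, total_reward, similarity):
        for i, (emb, _, ret) in enumerate(self.entries):
            if similarity(emb, embedding) > 0.9:          # near-duplicate trajectory
                if total_reward > ret:                    # keep the better of the two
                    self.entries[i] = (embedding, trajectory, total_reward)
                return
        if len(self.entries) < self.capacity:             # otherwise store it if room
            self.entries.append((embedding, trajectory, total_reward))

    def sample_demonstration(self):
        # The sampled trajectory conditions the policy; matching its states yields
        # an imitation reward, which counteracts myopic exploitation of early rewards.
        return random.choice(self.entries)[1]
```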

Imitation Learning

Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards

no code implementations NeurIPS 2020 Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee

Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow.

Efficient Exploration Imitation Learning +1

Contingency-Aware Exploration in Reinforcement Learning

no code implementations ICLR 2019 Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, Honglak Lee

This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning.

Montezuma's Revenge reinforcement-learning +1

Supervising Neural Attention Models for Video Captioning by Human Gaze Data

no code implementations CVPR 2017 Youngjae Yu, Jongwook Choi, Yeonhwa Kim, Kyung Yoo, Sang-Hun Lee, Gunhee Kim

The attention mechanisms in deep neural networks are inspired by human attention, which sequentially focuses on the most relevant parts of the information over time to generate prediction outputs.

Descriptive Gaze Prediction +2
