Search Results for author: Kimin Lee

Found 53 papers, 31 papers with code

Simplified Stochastic Feedforward Neural Networks

no code implementations 11 Apr 2017 Kimin Lee, Jaehyung Kim, Song Chong, Jinwoo Shin

In this paper, we aim to develop efficient training methods for SFNNs, in particular by using known architectures and pre-trained parameters of DNNs.

Confident Multiple Choice Learning

2 code implementations ICML 2017 Kimin Lee, Changho Hwang, KyoungSoo Park, Jinwoo Shin

Ensemble methods are arguably the most trustworthy techniques for boosting the performance of machine learning models.

General Classification Image Classification +1

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

3 code implementations ICLR 2018 Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.
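
A minimal sketch of a confidence-calibration loss in the spirit of this paper, assuming PyTorch tensors of classifier logits: cross-entropy on in-distribution data plus a KL term pushing predictions on out-of-distribution-like samples (GAN-generated, in the paper) toward the uniform distribution. The beta weight is an illustrative assumption.

    import torch
    import torch.nn.functional as F

    def confidence_loss(logits_in, labels_in, logits_out, beta=1.0):
        # Standard cross-entropy on in-distribution samples.
        ce = F.cross_entropy(logits_in, labels_in)
        # KL(Uniform || p(y|x_out)) = -mean_y log p(y|x_out) - log K.
        num_classes = logits_out.shape[1]
        log_p_out = F.log_softmax(logits_out, dim=1)
        kl_to_uniform = (-log_p_out.mean(dim=1)).mean() - torch.log(
            torch.tensor(float(num_classes)))
        return ce + beta * kl_to_uniform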

Hierarchical Novelty Detection for Visual Object Recognition

no code implementations CVPR 2018 Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.

Generalized Zero-Shot Learning Novelty Detection +2

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

4 code implementations NeurIPS 2018 Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

Class Incremental Learning Incremental Learning +1
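
The confidence score in this paper is based on class-conditional Gaussians over network features with a shared covariance; below is a minimal NumPy sketch of that score, assuming penultimate-layer features have already been extracted (the paper's input pre-processing and layer ensembling are omitted).

    import numpy as np

    def fit_class_gaussians(features, labels, num_classes):
        # Per-class means and one tied covariance over training features.
        means = np.stack([features[labels == c].mean(axis=0)
                          for c in range(num_classes)])
        centered = features - means[labels]
        cov = centered.T @ centered / len(features)
        precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return means, precision

    def mahalanobis_confidence(feat, means, precision):
        # Negative Mahalanobis distance to the closest class Gaussian;
        # higher values indicate in-distribution inputs.
        diffs = means - feat
        dists = np.einsum('cd,de,ce->c', diffs, precision, diffs)
        return -dists.min()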

Learning to Specialize with Knowledge Distillation for Visual Question Answering

no code implementations NeurIPS 2018 Jonghwan Mun, Kimin Lee, Jinwoo Shin, Bohyung Han

The proposed framework is model-agnostic and applicable to any task other than VQA, e.g., image classification with a large number of labels but few per-class examples, which is known to be difficult under existing MCL schemes.

General Classification General Knowledge +5

Using Pre-Training Can Improve Model Robustness and Uncertainty

1 code implementation 28 Jan 2019 Dan Hendrycks, Kimin Lee, Mantas Mazeika

He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance to pre-training.

Adversarial Robustness General Classification +1

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation 31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.

Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild

1 code implementation ICCV 2019 Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee

Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.

Class Incremental Learning Incremental Learning

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

2 code implementations ICLR 2020 Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even ones semantically similar to the environments they were trained in), particularly when they are trained on high-dimensional state spaces, such as images.

Data Augmentation reinforcement-learning +1

Reinforcement Learning with Augmented Data

2 code implementations NeurIPS 2020 Michael Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas

To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.

Data Augmentation OpenAI Gym +2
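
The plug-and-play idea is simply to augment observation batches before any standard RL update; a minimal sketch of one such augmentation (random crop) is below, with shapes and the crop size as illustrative assumptions.

    import numpy as np

    def random_crop(obs, out_size):
        # obs: (batch, channels, height, width) image observations.
        b, c, h, w = obs.shape
        tops = np.random.randint(0, h - out_size + 1, size=b)
        lefts = np.random.randint(0, w - out_size + 1, size=b)
        return np.stack([o[:, t:t + out_size, l:l + out_size]
                         for o, t, l in zip(obs, tops, lefts)])

    # Usage: crop both current and next observations sampled from the
    # replay buffer, then run the unchanged RL update on the cropped batch.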

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

2 code implementations ICML 2020 Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.

Model-based Reinforcement Learning reinforcement-learning +1

Learning to Sample with Local and Global Contexts in Experience Replay Buffer

no code implementations ICLR 2021 Youngmin Oh, Kimin Lee, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL).

Reinforcement Learning (RL)

Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning

no code implementations 3 Aug 2020 Xingyu Lu, Kimin Lee, Pieter Abbeel, Stas Tiomkin

Despite the significant progress of deep reinforcement learning (RL) in solving sequential decision making problems, RL agents often overfit to training environments and struggle to adapt to new, unseen environments.

Decision Making reinforcement-learning +1

Decoupling Representation Learning from Reinforcement Learning

3 code implementations 14 Sep 2020 Adam Stooke, Kimin Lee, Pieter Abbeel, Michael Laskin

In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning.

Data Augmentation reinforcement-learning +2

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment

no code implementations 28 Oct 2020 Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.

Object Reinforcement Learning (RL) +1

MASKER: Masked Keyword Regularization for Reliable Text Classification

1 code implementation 17 Dec 2020 Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin

We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords instead of the whole context.

Domain Generalization General Classification +6

R-LAtte: Attention Module for Visual Control via Reinforcement Learning

no code implementations 1 Jan 2021 Mandi Zhao, Qiyang Li, Aravind Srinivas, Ignasi Clavera, Kimin Lee, Pieter Abbeel

Attention mechanisms are generic inductive biases that have played a critical role in improving the state-of-the-art in supervised learning, unsupervised pre-training and generative modeling for multiple domains including vision, language and speech.

reinforcement-learning Reinforcement Learning (RL) +1

Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates

no code implementations 1 Jan 2021 Kimin Lee, Michael Laskin, Aravind Srinivas, Pieter Abbeel

Furthermore, since our weighted Bellman backups rely on maintaining an ensemble, we investigate how weighted Bellman backups interact with other benefits previously derived from ensembles: (a) Bootstrap; (b) UCB Exploration.

Q-Learning Reinforcement Learning (RL)
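
As a rough illustration of the mechanism named in the title: compute Bellman targets from an ensemble of target Q-networks and down-weight samples on which the ensemble disagrees. The exact weighting function below (a sigmoid of the negative ensemble standard deviation) is an assumption for illustration, not necessarily the paper's exact form.

    import numpy as np

    def weighted_bellman_targets(q_next_ens, rewards, dones,
                                 gamma=0.99, temperature=10.0):
        # q_next_ens: (ensemble_size, batch) target Q-values at next states.
        q_mean = q_next_ens.mean(axis=0)
        q_std = q_next_ens.std(axis=0)
        targets = rewards + gamma * (1.0 - dones) * q_mean
        # sigmoid(-std * T): low weight where the ensemble is uncertain.
        weights = 1.0 / (1.0 + np.exp(q_std * temperature))
        return targets, weights  # weights scale each sample's TD loss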

Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets

no code implementations 1 Jan 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

As it turns out, fine-tuning offline RL agents is a non-trivial challenge, due to distribution shift – the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.

D4RL Offline RL +3

Compute- and Memory-Efficient Reinforcement Learning with Latent Experience Replay

no code implementations 1 Jan 2021 Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel

In this paper, we present Latent Vector Experience Replay (LeVER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements without sacrificing the performance of RL agents.

Atari Games reinforcement-learning +2

PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training

2 code implementations 9 Jun 2021 Kimin Lee, Laura Smith, Pieter Abbeel

We also show that our method is able to utilize real-time human feedback to effectively prevent reward exploitation and learn new behaviors that are difficult to specify with standard reward functions.

reinforcement-learning Reinforcement Learning (RL) +1
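
One of the two ingredients in the title, relabeling experience, admits a very short sketch: whenever the reward model learned from human preferences is updated, rewrite the rewards of transitions already in the replay buffer. The buffer layout below is an illustrative assumption.

    def relabel_buffer(buffer, reward_model):
        # buffer: list of dicts with 'state', 'action', 'reward' keys.
        for t in buffer:
            t['reward'] = reward_model(t['state'], t['action'])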

Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments

no code implementations 18 Jun 2021 Abdus Salam Azad, Edward Kim, Qiancheng Wu, Kimin Lee, Ion Stoica, Pieter Abbeel, Sanjit A. Seshia

To showcase the benefits, we interfaced SCENIC to the Google Research Football (GRF) simulator, an existing RTS environment, and introduced a benchmark of 32 realistic scenarios, encoded in SCENIC, to train RL agents and test their generalization capabilities.

reinforcement-learning Reinforcement Learning (RL)

Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

1 code implementation 1 Jul 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets.

Offline RL reinforcement-learning +1
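
A hedged sketch of the two ingredients named in the title: a pessimistic target taken over a Q-ensemble, and a replay scheme mixing offline and online samples. The ensemble minimum and the fixed mixing ratio are illustrative simplifications (the paper's balanced replay prioritizes near-on-policy samples rather than mixing at a fixed rate).

    import numpy as np
    import random

    def pessimistic_target(q_next_ens, reward, done, gamma=0.99):
        # q_next_ens: (ensemble_size,) target Q-values for one transition.
        return reward + gamma * (1.0 - done) * np.min(q_next_ens)

    def sample_mixed(offline_buffer, online_buffer, batch_size,
                     online_fraction=0.5):
        n_online = min(int(batch_size * online_fraction), len(online_buffer))
        batch = random.sample(online_buffer, n_online)
        batch += random.sample(offline_buffer, batch_size - n_online)
        return batch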

Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback

no code implementations 11 Aug 2021 Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, Michael Laskin

A promising approach to solving challenging long-horizon tasks has been to extract behavior priors (skills) by fitting generative models to large offline datasets of demonstrations.

Autoregressive Latent Video Prediction with High-Fidelity Image Generator

no code implementations 29 Sep 2021 Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel

Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.

Data Augmentation Video Prediction +1

Towards More Generalizable One-shot Visual Imitation Learning

no code implementations 26 Oct 2021 Zhao Mandi, Fangchen Liu, Kimin Lee, Pieter Abbeel

We then study the multi-task setting, where multi-task training is followed by (i) one-shot imitation on variations within the training tasks, (ii) one-shot imitation on new tasks, and (iii) fine-tuning on new tasks.

Contrastive Learning Imitation Learning +2

URLB: Unsupervised Reinforcement Learning Benchmark

1 code implementation 28 Oct 2021 Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel

Deep Reinforcement Learning (RL) has emerged as a powerful paradigm to solve a range of complex yet specific control tasks.

Continuous Control reinforcement-learning +2

B-Pref: Benchmarking Preference-Based Reinforcement Learning

1 code implementation 4 Nov 2021 Kimin Lee, Laura Smith, Anca Dragan, Pieter Abbeel

However, it is difficult to quantify the progress in preference-based RL due to the lack of a commonly adopted benchmark.

Benchmarking reinforcement-learning +1

Improving Transferability of Representations via Augmentation-Aware Self-Supervision

2 code implementations NeurIPS 2021 Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.

Representation Learning Transfer Learning

Reinforcement Learning with Action-Free Pre-Training from Videos

2 code implementations 25 Mar 2022 Younggyo Seo, Kimin Lee, Stephen James, Pieter Abbeel

Our framework consists of two phases: we pre-train an action-free latent video prediction model, and then utilize the pre-trained representations for efficiently learning action-conditional world models on unseen environments.

reinforcement-learning Reinforcement Learning (RL) +2

Reward Uncertainty for Exploration in Preference-based Reinforcement Learning

2 code implementations ICLR 2022 Xinran Liang, Katherine Shu, Kimin Lee, Pieter Abbeel

Our intuition is that disagreement among learned reward models reflects uncertainty in the tailored human feedback and could be useful for exploration.

reinforcement-learning Reinforcement Learning (RL) +1
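
The exploration bonus described in the snippet reduces to a one-liner: reward disagreement across an ensemble of learned reward models. A minimal sketch, with model signatures as assumptions:

    import numpy as np

    def exploration_bonus(reward_models, state, action):
        # reward_models: callables mapping (state, action) -> scalar reward.
        preds = np.array([m(state, action) for m in reward_models])
        return preds.std()  # high disagreement => larger intrinsic reward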

Masked World Models for Visual Control

no code implementations 28 Jun 2022 Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, Pieter Abbeel

Yet the current approaches typically train a single model end-to-end for learning both visual representations and dynamics, making it difficult to accurately model the interaction between robots and small objects.

Model-based Reinforcement Learning Reinforcement Learning (RL) +1

HARP: Autoregressive Latent Video Prediction with High-Fidelity Image Generator

no code implementations 15 Sep 2022 Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel

Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.

Data Augmentation Video Prediction +1

Instruction-Following Agents with Multimodal Transformer

1 code implementation 24 Oct 2022 Hao Liu, Lisa Lee, Kimin Lee, Pieter Abbeel

Our method consists of a multimodal transformer that encodes visual observations and language instructions, and a transformer-based policy that predicts actions based on encoded representations.

Instruction Following Visual Grounding

Multi-View Masked World Models for Visual Robotic Manipulation

1 code implementation 5 Feb 2023 Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel

In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.

Camera Calibration Representation Learning

Controllability-Aware Unsupervised Skill Discovery

3 code implementations 10 Feb 2023 Seohong Park, Kimin Lee, Youngwoon Lee, Pieter Abbeel

One of the key capabilities of intelligent agents is the ability to discover useful skills without external supervision.

Aligning Text-to-Image Models using Human Feedback

no code implementations 23 Feb 2023 Kimin Lee, Hao Liu, MoonKyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Shixiang Shane Gu

Our results demonstrate the potential for learning from human feedback to significantly improve text-to-image models.

Image Generation

Preference Transformer: Modeling Human Preferences using Transformers for RL

1 code implementation 2 Mar 2023 Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.

Decision Making Reinforcement Learning (RL)

DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models

2 code implementations 25 May 2023 Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, MoonKyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, Kimin Lee

We focus on diffusion models, defining the fine-tuning task as an RL problem, and updating the pre-trained text-to-image diffusion models using policy gradient to maximize the feedback-trained reward.

reinforcement-learning Reinforcement Learning (RL)
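
At its core, the RL formulation treats the denoising chain as a policy and applies a policy gradient weighted by the reward of the final image; a minimal REINFORCE-style sketch is below. Function names are assumptions, and the KL regularization toward the pre-trained model that the paper also uses is omitted here.

    import torch

    def policy_gradient_loss(step_log_probs, reward):
        # step_log_probs: list of log p_theta(x_{t-1} | x_t, prompt) terms
        # accumulated while sampling an image from the diffusion model.
        # reward: scalar score of the final image from a reward model.
        return -(reward * torch.stack(step_log_probs).sum())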

InstructBooth: Instruction-following Personalized Text-to-Image Generation

no code implementations 4 Dec 2023 Daewon Chae, Nokyung Park, Jinkyu Kim, Kimin Lee

In this work, we introduce InstructBooth, a novel method designed to enhance image-text alignment in personalized text-to-image models without sacrificing the personalization ability.

Instruction Following Text-to-Image Generation

Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences

no code implementations 14 Dec 2023 Minyoung Hwang, Luca Weihs, Chanwoo Park, Kimin Lee, Aniruddha Kembhavi, Kiana Ehsani

Customizing robotic behaviors to be aligned with diverse human preferences is an underexplored challenge in the field of embodied AI.

Multi-Objective Reinforcement Learning

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

1 code implementation 2 Apr 2024 KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee

To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.

Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models

no code implementations 5 Apr 2024 Sangwon Jang, Jaehyeong Jo, Kimin Lee, Sung Ju Hwang

Our experiments demonstrate that MuDI can produce high-quality personalized images without identity mixing, even for highly similar subjects as shown in Figure 1.

Data Augmentation
