Search Results for author: Honglak Lee

Found 145 papers, 72 papers with code

Fine-grained Text Style Transfer with Diffusion-Based Language Models

1 code implementation 31 May 2023 Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee

Diffusion probabilistic models have shown great success in generating high-quality images controllably, and researchers have tried to bring this controllability to the text generation domain.

Style Transfer Text Style Transfer

Discriminator-Guided Multi-step Reasoning with Language Models

1 code implementation 24 May 2023 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

In the context of multi-step reasoning, the probabilities of language models (LMs) are often miscalibrated -- solutions with high probabilities are not always correct.

A Picture is Worth a Thousand Words: Language Models Plan from Pixels

no code implementations 16 Mar 2023 Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee

Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.

Preference Transformer: Modeling Human Preferences using Transformers for RL

1 code implementation 2 Mar 2023 Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.

Decision Making Reinforcement Learning (RL)
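Methods in this line typically build on the Bradley-Terry preference model over summed per-step rewards (Preference Transformer generalizes this with a transformer-computed weighted sum). A minimal NumPy sketch of the vanilla model; the function name is illustrative, not from the paper:

```python
import numpy as np

def preference_prob(rewards_a, rewards_b):
    """Bradley-Terry probability that trajectory segment A is preferred
    over segment B, given per-step reward estimates for each segment."""
    ra, rb = float(np.sum(rewards_a)), float(np.sum(rewards_b))
    m = max(ra, rb)  # stabilize the two-way softmax
    ea, eb = np.exp(ra - m), np.exp(rb - m)
    return ea / (ea + eb)
```

Training fits the reward estimates so that these probabilities match human preference labels.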

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations 17 Feb 2023 Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

Composing Task Knowledge with Modular Successor Feature Approximators

1 code implementation 28 Jan 2023 Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak Lee, Satinder Singh

Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior.
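In the SF&GPI framework, each task's reward is assumed linear in learned features, so Q-values factor as an inner product of successor features with a task vector, and GPI acts greedily with respect to the best stored policy. A minimal sketch of the GPI step, assuming the successor features are already computed:

```python
import numpy as np

def gpi_action(psis, w):
    """Generalized Policy Improvement over a library of successor features.

    psis: (n_policies, n_actions, d) successor features psi_i(s, a) of each
          stored policy evaluated at the current state.
    w:    (d,) task vector, assuming linear rewards r = phi . w.

    Q_i(s, a) = psi_i(s, a) . w; GPI acts greedily w.r.t. max_i Q_i(s, a).
    """
    q = psis @ w                        # (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))
```

This is the generic SF&GPI mechanism the abstract refers to, not the paper's modular approximator itself.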

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations 27 Jan 2023 Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

no code implementations 7 Jan 2023 Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee

Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.

Language Modelling Self-Supervised Learning

Neural Shape Compiler: A Unified Framework for Transforming between Text, Point Cloud, and Program

no code implementations 25 Dec 2022 Tiange Luo, Honglak Lee, Justin Johnson

On Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks.

Point Cloud Completion

Significantly Improving Zero-Shot X-ray Pathology Classification via Fine-tuning Pre-trained Image-Text Encoders

no code implementations 14 Dec 2022 Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi

However, large-scale and high-quality data to train powerful neural networks are rare in the medical domain as the labeling must be done by qualified experts.

Classification Contrastive Learning +1

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

1 code implementation 27 Oct 2022 Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.

Stochastic Block Model

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

no code implementations 27 Sep 2022 Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

no code implementations 7 Sep 2022 Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee

Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.

Binary Classification Molecular Property Prediction +1

Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks

no code implementations 19 Jul 2022 Yijie Guo, Qiucheng Wu, Honglak Lee

Meta reinforcement learning (meta-RL) aims to learn a policy solving a set of training tasks simultaneously and quickly adapting to new tasks.

Efficient Exploration Meta Reinforcement Learning +2

Pure Transformers are Powerful Graph Learners

1 code implementation 6 Jul 2022 Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.

Graph Learning Graph Regression +1

Towards More Objective Evaluation of Class Incremental Learning: Representation Learning Perspective

no code implementations 16 Jun 2022 Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.

class-incremental learning Class Incremental Learning +3

Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization

no code implementations 25 May 2022 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, lyubing qiang, Izzeddin Gur, Aleksandra Faust, Honglak Lee

Different from previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks, and uses it as a prior to improve task inference at test time.

Hierarchical Reinforcement Learning Meta Reinforcement Learning +2

Few-shot Reranking for Multi-hop QA via Language Model Prompting

2 code implementations 25 May 2022 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.

Open-Domain Question Answering Passage Re-Ranking +2

RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

no code implementations 14 May 2022 Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee

We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.

Image Harmonization

Learning Parameterized Task Structure for Generalization to Unseen Entities

1 code implementation 28 Mar 2022 Anthony Z. Liu, Sungryull Sohn, Mahdi Qazwini, Honglak Lee

These subtasks are defined in terms of entities (e.g., "apple", "pear") that can be recombined to form new subtasks (e.g., "pickup apple" and "pickup pear").

Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution

1 code implementation 15 Mar 2022 Jinsu Yoo, TaeHoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim

Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods.

Image Restoration Super-Resolution

Lipschitz-constrained Unsupervised Skill Discovery

no code implementations ICLR 2022 Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim

To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills.

Environment Generation for Zero-Shot Compositional Reinforcement Learning

1 code implementation NeurIPS 2021 Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust

We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.

Navigate reinforcement-learning +1

Improving Transferability of Representations via Augmentation-Aware Self-Supervision

1 code implementation NeurIPS 2021 Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.

Representation Learning Transfer Learning

Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

1 code implementation NeurIPS 2021 Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee

SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state-novelty and to enable high-level planning by abstracting the state-space as a non-parametric landmark-based graph.

Efficient Exploration reinforcement-learning +1

Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks

1 code implementation 13 Jul 2021 Sungryull Sohn, Sungtae Lee, Jongwook Choi, Harm van Seijen, Mehdi Fatemi, Honglak Lee

We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the agent's trajectory that improves the sample efficiency in sparse-reward MDPs.

Continuous Control reinforcement-learning +1

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

no code implementations 2 Jun 2021 Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.

reinforcement-learning Reinforcement Learning (RL) +1

Pathdreamer: A World Model for Indoor Navigation

1 code implementation ICCV 2021 Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson

People navigating in unfamiliar buildings take advantage of myriad visual, spatial and semantic cues to efficiently achieve their navigation goals.

Semantic Segmentation Vision and Language Navigation

Revisiting Hierarchical Approach for Persistent Long-Term Video Prediction

1 code implementation ICLR 2021 Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong

Despite the recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds), while extrapolation to a longer future quickly destroys structure and content.

Translation Video Prediction

Adversarial Environment Generation for Learning to Navigate the Web

1 code implementation 2 Mar 2021 Izzeddin Gur, Natasha Jaques, Kevin Malta, Manoj Tiwari, Honglak Lee, Aleksandra Faust

The regret objective trains the adversary to design a curriculum of environments that are "just-the-right-challenge" for the navigator agents; our results show that over time, the adversary learns to generate increasingly complex web navigation tasks.

Benchmarking Decision Making +2

Cross-Modal Contrastive Learning for Text-to-Image Generation

1 code implementation CVPR 2021 Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang

The quality of XMC-GAN's output is a major step up from previous models, as we show on three challenging datasets.

Ranked #22 on Text-to-Image Generation on COCO (using extra training data)

Contrastive Learning Text-to-Image Generation

Evolving Reinforcement Learning Algorithms

5 code implementations ICLR 2021 John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust

Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.

Atari Games Meta-Learning +2

Demystifying Loss Functions for Classification

no code implementations1 Jan 2021 Simon Kornblith, Honglak Lee, Ting Chen, Mohammad Norouzi

It is common to use the softmax cross-entropy loss to train neural networks on classification datasets where a single class label is assigned to each example.

Classification General Classification +1
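The loss under study here is the standard softmax cross-entropy, which can be written down in a few lines. A minimal NumPy sketch for a single example (the function name is ours):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Softmax cross-entropy loss for one example with a single class label.

    logits: (n_classes,) raw scores; label: integer class index.
    """
    z = logits - np.max(logits)                # shift for numerical stability
    log_probs = z - np.log(np.sum(np.exp(z)))  # log-softmax
    return -log_probs[label]
```

The paper compares this default against alternative classification objectives.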

Batch Reinforcement Learning Through Continuation Method

no code implementations ICLR 2021 Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, Minmin Chen

Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions.

reinforcement-learning Reinforcement Learning (RL)

Few-shot Sequence Learning with Transformers

no code implementations 17 Dec 2020 Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam

In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.

Few-Shot Learning

Ode to an ODE

no code implementations NeurIPS 2020 Krzysztof M. Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani

We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).

Text-to-Image Generation Grounded by Fine-Grained User Attention

no code implementations 7 Nov 2020 Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang

Localized Narratives is a dataset with detailed natural language descriptions of images paired with mouse traces that provide a sparse, fine-grained visual grounding for phrases.

Retrieval Text-to-Image Generation +1

Why Do Better Loss Functions Lead to Less Transferable Features?

no code implementations NeurIPS 2021 Simon Kornblith, Ting Chen, Honglak Lee, Mohammad Norouzi

We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when networks are fully fine-tuned on the new tasks.

General Classification Image Classification

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment

no code implementations 28 Oct 2020 Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.

Reinforcement Learning (RL) Representation Learning

Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning

1 code implementation NeurIPS 2020 Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang

It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories.

Model-based Reinforcement Learning reinforcement-learning +1

Text as Neural Operator: Image Manipulation by Text Instruction

1 code implementation 11 Aug 2020 Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Weilong Yang, Honglak Lee, Irfan Essa

In recent years, text-guided image manipulation has gained increasing attention in the multimedia and computer vision community.

Conditional Image Generation Image Captioning +2

Understanding and Diagnosing Vulnerability under Adversarial Attacks

no code implementations 17 Jul 2020 Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash

Moreover, we design the first diagnostic method to quantify the vulnerability contributed by each layer, which can be used to identify vulnerable parts of model architectures.

Classification General Classification

An Ode to an ODE

no code implementations NeurIPS 2020 Krzysztof Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani

We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).

CompressNet: Generative Compression at Extremely Low Bitrates

no code implementations 14 Jun 2020 Suraj Kiran Raman, Aditya Ramesh, Vijayakrishna Naganoor, Shubham Dash, Giridharan Kumaravelu, Honglak Lee

Compressing images at extremely low bitrates (< 0.1 bpp) has always been a challenging task, since reconstruction quality degrades significantly due to the strong constraint imposed on the number of bits allocated for the compressed data.

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

2 code implementations ICML 2020 Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.

Model-based Reinforcement Learning reinforcement-learning +1

Time Dependence in Non-Autonomous Neural ODEs

no code implementations ICLR Workshop DeepDiffEq 2019 Jared Quincy Davis, Krzysztof Choromanski, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosterov, Adrian Weller, Ameesh Makadia, Vikas Sindhwani

Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation.

Image Classification Video Prediction

Improved Consistency Regularization for GANs

no code implementations 11 Feb 2020 Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang

Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.

Image Generation
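The consistency cost referred to here penalizes the discriminator for responding differently to an image and to an augmented version of the same image. A toy sketch of that penalty, assuming discriminator outputs are already computed (names are illustrative):

```python
import numpy as np

def consistency_cost(d_real, d_augmented):
    """Consistency regularization for a GAN discriminator: squared
    difference between the discriminator's outputs on a batch of images
    and on augmented versions of the same images."""
    d_real = np.asarray(d_real, dtype=float)
    d_augmented = np.asarray(d_augmented, dtype=float)
    return float(np.mean((d_real - d_augmented) ** 2))
```

In practice this term is added, with a weight, to the usual discriminator loss.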

BRPO: Batch Residual Policy Optimization

no code implementations 8 Feb 2020 Sungryull Sohn, Yin-Lam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier

In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by some maximum degree that is the same at each state.

reinforcement-learning Reinforcement Learning (RL)

High-Fidelity Synthesis with Disentangled Representation

2 code implementations ECCV 2020 Wonkwang Lee, Donggyun Kim, Seunghoon Hong, Honglak Lee

Despite the simplicity, we show that the proposed method is highly effective, achieving comparable image generation quality to the state-of-the-art methods using the disentangled representation.

Disentanglement Image Generation +1

Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies

1 code implementation ICLR 2020 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent.

Efficient Exploration Meta Reinforcement Learning +4

Efficient Adversarial Training with Transferable Adversarial Examples

2 code implementations CVPR 2020 Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash

Adversarial training is an effective defense method to protect classification models against adversarial attacks.

How Should an Agent Practice?

no code implementations 15 Dec 2019 Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, Satinder Singh

We present a method for learning intrinsic reward functions to drive the learning of an agent during periods of practice in which extrinsic task rewards are not available.

High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks

no code implementations NeurIPS 2019 Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee

Predicting future video frames is extremely challenging, as there are many factors of variation that make up the dynamics of how frames change through time.

Inductive Bias Optical Flow Estimation +2

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

2 code implementations ICLR 2020 Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even ones semantically similar to those they were trained on), particularly when they are trained on high-dimensional state spaces, such as images.

Data Augmentation reinforcement-learning +1

Distilling Effective Supervision from Severe Label Noise

2 code implementations CVPR 2020 Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister

For instance, on CIFAR100 with a $40\%$ uniform noise ratio and only 10 trusted labeled data per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, where the error rate is only $1.4\%$ higher than a neural network trained without label noise.

Image Classification

Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks

no code implementations 25 Sep 2019 Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee

We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.

Imitation Learning

Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards

no code implementations NeurIPS 2020 Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee

Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow.

Efficient Exploration Imitation Learning +1

Data-Efficient Learning for Sim-to-Real Robotic Grasping using Deep Point Cloud Prediction Networks

no code implementations 21 Jun 2019 Xinchen Yan, Mohi Khansari, Jasmine Hsu, Yuanzheng Gong, Yunfei Bai, Sören Pirk, Honglak Lee

Training a deep network policy for robot manipulation is notoriously costly and time consuming as it depends on collecting a significant amount of real world data.

3D Shape Representation Robotic Grasping +1

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

1 code implementation 19 Jun 2019 Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li

In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate "unrestricted adversarial examples".

Face Recognition Face Verification

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking Reading Comprehension

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Similarity of Neural Network Representations Revisited

8 code implementations ICML 2019 Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton

We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation.
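The index introduced here is centered kernel alignment (CKA); its linear variant takes only a few lines. A sketch (the function name is ours), operating on activation matrices for the same set of examples:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two sets of activations.

    X: (n, d1) and Y: (n, d2) are representations of the same n examples
    (e.g., from two layers or two networks); returns a value in [0, 1].
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

By construction the index is invariant to orthogonal transformations and isotropic scaling of either representation.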

Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild

1 code implementation ICCV 2019 Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee

Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.

class-incremental learning Class Incremental Learning +1

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation 31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.

Diversity-Sensitive Conditional Generative Adversarial Networks

no code implementations ICLR 2019 Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee

We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).

Image Inpainting Image-to-Image Translation +2

Generative Adversarial Self-Imitation Learning

no code implementations ICLR 2019 Yijie Guo, Junhyuk Oh, Satinder Singh, Honglak Lee

This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate past good trajectories via a generative adversarial imitation learning framework.

Imitation Learning reinforcement-learning +1

Contingency-Aware Exploration in Reinforcement Learning

no code implementations ICLR 2019 Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, Honglak Lee

This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning.

Montezuma's Revenge reinforcement-learning +1

MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics

1 code implementation ECCV 2018 Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, Honglak Lee

Our model jointly learns a feature embedding for motion modes (that the motion sequence can be reconstructed from) and a feature transformation that represents the transition of one motion mode to the next motion mode.

Human Dynamics Human Pose Forecasting +1

Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies

1 code implementation NeurIPS 2018 Sungryull Sohn, Junhyuk Oh, Honglak Lee

We introduce a new RL problem where the agent is required to generalize to a previously-unseen environment characterized by a subtask graph which describes a set of subtasks and their dependencies.

Hierarchical Reinforcement Learning Network Embedding +2

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

5 code implementations NeurIPS 2018 Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

class-incremental learning Class Incremental Learning +2
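The framework's detection score is based on the Mahalanobis distance from a test feature to the closest class-conditional Gaussian, with a tied covariance estimated from training features. A simplified sketch (the full method also uses input pre-processing and feature ensembling across layers):

```python
import numpy as np

def mahalanobis_score(x, class_means, tied_cov):
    """Confidence score: negative Mahalanobis distance from feature x to
    the closest class-conditional Gaussian (higher = more in-distribution).

    x: (d,) feature; class_means: (n_classes, d); tied_cov: (d, d)
    covariance shared across classes, fit on training features.
    """
    precision = np.linalg.inv(tied_cov)
    dists = [(x - mu) @ precision @ (x - mu) for mu in class_means]
    return -min(dists)
```

Thresholding this score separates in-distribution inputs from out-of-distribution or adversarial ones.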

Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion

2 code implementations NeurIPS 2018 Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee

Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity.

Continuous Control reinforcement-learning +1

Self-Imitation Learning

4 code implementations ICML 2018 Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions.

Atari Games Imitation Learning

Hierarchical Long-term Video Prediction without Supervision

no code implementations ICML 2018 Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee

Much recent research has been devoted to video prediction and generation, yet most previous works have demonstrated only limited success in generating videos on short-term horizons.

Video Prediction

Data-Efficient Hierarchical Reinforcement Learning

11 code implementations NeurIPS 2018 Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine

In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.

Hierarchical Reinforcement Learning reinforcement-learning +1

Neural Kinematic Networks for Unsupervised Motion Retargetting

1 code implementation CVPR 2018 Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.

Hierarchical Novelty Detection for Visual Object Recognition

no code implementations CVPR 2018 Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.

Generalized Zero-Shot Learning Object Recognition

An efficient framework for learning sentence representations

6 code implementations ICLR 2018 Lajanugen Logeswaran, Honglak Lee

In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.

General Classification Representation Learning
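The framework here (often called quick-thoughts) replaces sentence generation with a classification objective: identify the true context sentence among candidates by inner product with the input sentence's embedding. A toy sketch of that objective, assuming the embeddings are already computed (names are ours):

```python
import numpy as np

def context_classification_loss(sent_emb, candidate_embs, true_idx):
    """Candidate-classification objective: score each candidate context
    sentence by inner product with the input sentence embedding, then
    apply softmax cross-entropy with the true context as the target."""
    logits = candidate_embs @ sent_emb
    z = logits - logits.max()                      # numerical stability
    return -(z[true_idx] - np.log(np.exp(z).sum()))
```

Minimizing this loss pulls a sentence's embedding toward its true context and away from the distractor candidates.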

Neural Task Graph Execution

no code implementations ICLR 2018 Sungryull Sohn, Junhyuk Oh, Honglak Lee

Unlike existing approaches, which explicitly describe what the agent should do, our problem only describes properties of subtasks and the relationships between them, requiring the agent to perform complex reasoning to find the optimal subtask to execute.

Reinforcement Learning (RL)

Unsupervised Hierarchical Video Prediction

no code implementations ICLR 2018 Nevan Wichers, Dumitru Erhan, Honglak Lee

Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.

Video Prediction

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

3 code implementations ICLR 2018 Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or out-of-distribution (sufficiently different from it) arises in many real-world machine learning applications.
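The confidence-calibration idea here is to penalize confident predictions on samples outside the training distribution, typically by pushing the predictive distribution toward uniform. A rough stdlib-only sketch of such a penalty term (the symbol names are ours; the paper's full objective also includes a GAN-based sample generator):

```python
import math

def kl_to_uniform(probs):
    """KL(p || U) for a predictive distribution p over k classes.
    Driving this term toward zero on out-of-distribution inputs
    pushes the classifier toward maximal uncertainty there."""
    k = len(probs)
    return sum(p * math.log(p * k) for p in probs if p > 0)

# A confident prediction incurs a large penalty...
print(round(kl_to_uniform([0.97, 0.01, 0.01, 0.01]), 3))
# ...while a near-uniform one incurs almost none.
print(round(kl_to_uniform([0.25, 0.25, 0.25, 0.25]), 3))  # → 0.0
```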

Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs

1 code implementation12 Sep 2017 Rui Zhang, Honglak Lee, Lazaros Polymenakos, Dragomir Radev

In this paper, we study the problem of addressee and response selection in multi-party conversations.

Learning 6-DOF Grasping Interaction via Deep Geometry-aware 3D Representations

1 code implementation24 Aug 2017 Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson, Honglak Lee

Our contributions are fourfold: (1) To the best of our knowledge, we present for the first time a method to learn a 6-DOF grasping net from RGB-D input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations.

3D Geometry Prediction 3D Shape Modeling +1

Value Prediction Network

2 code implementations NeurIPS 2017 Junhyuk Oh, Satinder Singh, Honglak Lee

This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network.

Atari Games Reinforcement Learning (RL) +1

Decomposing Motion and Content for Natural Video Sequence Prediction

1 code implementation25 Jun 2017 Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.

 Ranked #1 on Video Prediction on KTH (Cond metric)

Future prediction Video Prediction

Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

1 code implementation ICML 2017 Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli

As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks.

reinforcement-learning Reinforcement Learning (RL)

Towards Understanding the Invertibility of Convolutional Neural Networks

no code implementations24 May 2017 Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee

Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible.

Compressive Sensing General Classification

Exploring the structure of a real-time, arbitrary neural artistic stylization network

18 code implementations18 May 2017 Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens

In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair.

Style Transfer

Learning to Generate Long-term Future via Hierarchical Prediction

2 code implementations ICML 2017 Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally, by observing a single frame from the past and the predicted high-level structure, construct the future frames without having to observe any of the pixel-level predictions.

Video Prediction

Discriminative Bimodal Networks for Visual Localization and Detection with Natural Language Queries

no code implementations CVPR 2017 Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, I-An Huang, Honglak Lee

Our training objective encourages better localization on single images, incorporates text phrases in a broad range, and properly pairs image regions with text phrases into positive and negative examples.

Natural Language Queries Visual Localization

Weakly Supervised Semantic Segmentation using Web-Crawled Videos

no code implementations CVPR 2017 Seunghoon Hong, Donghun Yeo, Suha Kwak, Honglak Lee, Bohyung Han

Our goal is to overcome this limitation with no additional human intervention by retrieving videos relevant to target class labels from web repository, and generating segmentation labels from the retrieved videos to simulate strong supervision for semantic segmentation.

Image Classification Weakly supervised Semantic Segmentation +1

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

2 code implementations NeurIPS 2016 Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee

We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes.

3D Object Reconstruction

Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents

2 code implementations NAACL 2016 Rui Zhang, Honglak Lee, Dragomir Radev

Moreover, unlike other CNN-based models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document.

Classification General Classification +3

Deep Variational Canonical Correlation Analysis

no code implementations11 Oct 2016 Weiran Wang, Xinchen Yan, Honglak Lee, Karen Livescu

We present deep variational canonical correlation analysis (VCCA), a deep multi-view learning model that extends the latent variable model interpretation of linear CCA to nonlinear observation models parameterized by deep neural networks.


Learning What and Where to Draw

no code implementations NeurIPS 2016 Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, Honglak Lee

Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers.

Ranked #11 on Text-to-Image Generation on CUB (using extra training data)

Text-to-Image Generation

Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification

no code implementations21 Jun 2016 Yuting Zhang, Kibok Lee, Honglak Lee

Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction.

General Classification Image Classification

Learning Deep Representations of Fine-grained Visual Descriptions

9 code implementations CVPR 2016 Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee

State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information.

Image Retrieval Retrieval +1

Generative Adversarial Text to Image Synthesis

40 code implementations17 May 2016 Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.

Adversarial Text Text-to-Image Generation

Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games

no code implementations24 Apr 2016 Xiaoxiao Guo, Satinder Singh, Richard Lewis, Honglak Lee

We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (an MCTS algorithm).

Atari Games Decision Making

Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units

1 code implementation16 Mar 2016 Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee

Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision.

Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis

no code implementations NeurIPS 2015 Jimei Yang, Scott Reed, Ming-Hsuan Yang, Honglak Lee

An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image.

Deep Visual Analogy-Making

no code implementations NeurIPS 2015 Scott E. Reed, Yi Zhang, Yuting Zhang, Honglak Lee

In addition to identifying the content within a single image, relating images and generating related images are critical tasks for image understanding.

Language Modelling Visual Analogies

Learning Structured Output Representation using Deep Conditional Generative Models

1 code implementation NeurIPS 2015 Kihyuk Sohn, Honglak Lee, Xinchen Yan

The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows a fast prediction using stochastic feed-forward inference.

Semantic Segmentation Structured Prediction
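The "stochastic gradient variational Bayes" training mentioned here rests on the reparameterization trick plus a Gaussian KL regularizer in the variational objective. A minimal stdlib-only sketch of those two pieces, with the encoder/decoder networks omitted (all names are ours):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    # z = mu + sigma * eps, eps ~ N(0, I): the SGVB trick that
    # keeps sampling of the latent code differentiable in mu, sigma.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_gaussian(mu, log_var):
    # KL(q(z|x, y) || N(0, I)): the regularizer in the CVAE
    # evidence lower bound, in closed form for diagonal Gaussians.
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

print(round(kl_gaussian([0.0, 0.0], [0.0, 0.0]), 6))  # → 0.0 (already standard normal)
```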

Action-Conditional Video Prediction using Deep Networks in Atari Games

1 code implementation NeurIPS 2015 Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh

Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames.

Atari Games Reinforcement Learning (RL) +1

Improving Object Detection with Deep Convolutional Networks via Bayesian Optimization and Structured Prediction

no code implementations CVPR 2015 Yuting Zhang, Kihyuk Sohn, Ruben Villegas, Gang Pan, Honglak Lee

Object detection systems based on the deep convolutional neural network (CNN) have recently made groundbreaking advances on several object detection benchmarks.

Bayesian Optimization object-detection +2

Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning

no code implementations NeurIPS 2014 Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, Xiaoshi Wang

The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection.

Atari Games reinforcement-learning +1

Improved Multimodal Deep Learning with Variation of Information

no code implementations NeurIPS 2014 Kihyuk Sohn, Wenling Shang, Honglak Lee

Deep learning has been successfully applied to multimodal representation learning problems, a common strategy being to learn joint representations that are shared across multiple modalities on top of layers of modality-specific networks.

Multimodal Deep Learning Representation Learning

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

no code implementations NeurIPS 2013 Forest Agostinelli, Michael R. Anderson, Honglak Lee

Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images.

Image Denoising

Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines

no code implementations CVPR 2013 Roni Mittelman, Honglak Lee, Benjamin Kuipers, Silvio Savarese

In order to address this issue, we propose a weakly supervised approach to learn mid-level features, where only class-level supervision is provided during training.

Object Recognition Weakly-supervised Learning

Augmenting CRFs with Boltzmann Machine Shape Priors for Image Labeling

no code implementations CVPR 2013 Andrew Kae, Kihyuk Sohn, Honglak Lee, Erik Learned-Miller

Although the CRF is a good baseline labeler, we show how an RBM can be added to the architecture to provide a global shape bias that complements the local modeling provided by the CRF.


Deep Learning for Detecting Robotic Grasps

no code implementations16 Jan 2013 Ian Lenz, Honglak Lee, Ashutosh Saxena

We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects.

Robotic Grasping

Learning to Align from Scratch

no code implementations NeurIPS 2012 Gary Huang, Marwan Mattar, Honglak Lee, Erik G. Learned-Miller

Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification.

Face Verification

Measuring Invariances in Deep Networks

no code implementations NeurIPS 2009 Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng

Our evaluation metrics can also be used to evaluate future work in unsupervised deep learning, and thus help the development of future algorithms.

Sparse deep belief net model for visual area V2

no code implementations NeurIPS 2007 Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng

This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.
