Search Results for author: Honglak Lee

Found 172 papers, 86 papers with code

MASSW: A New Dataset and Benchmark Tasks for AI-Assisted Scientific Workflows

1 code implementation 10 Jun 2024 Xingjian Zhang, Yutong Xie, Jin Huang, Jinge Ma, Zhaoying Pan, Qijia Liu, Ziyang Xiong, Tolga Ergen, Dongsub Shim, Honglak Lee, Qiaozhu Mei

Scientific innovation relies on detailed workflows, which include critical steps such as analyzing literature, generating ideas, validating these ideas, interpreting results, and inspiring follow-up research.

Navigate

Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense

no code implementations 7 May 2024 Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, Rada Mihalcea

Large language models (LLMs) have demonstrated substantial commonsense understanding through numerous benchmark evaluations.

Small Language Models Need Strong Verifiers to Self-Correct Reasoning

no code implementations 26 Apr 2024 Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang

Self-correction has emerged as a promising solution to boost the reasoning performance of large language models (LLMs), where LLMs refine their solutions using self-generated critiques that pinpoint the errors.

Math

Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks

no code implementations 25 Apr 2024 Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae

Instruction tuning has shown its ability not only to enhance zero-shot generalization across various tasks but also to improve the performance of specific tasks.

Zero-shot Generalization

Super-resolution of biomedical volumes with 2D supervision

no code implementations 15 Apr 2024 Cheng Jiang, Alexander Gedeon, Yiwei Lyu, Eric Landgraf, Yufeng Zhang, Xinhai Hou, Akhil Kondepudi, Asadur Chowdury, Honglak Lee, Todd Hollon

Volumetric biomedical microscopy has the potential to increase the diagnostic information extracted from clinical tissue specimens and improve the diagnostic accuracy of both human pathologists and computational pathology models.

Super-Resolution

View Selection for 3D Captioning via Diffusion Ranking

1 code implementation 11 Apr 2024 Tiange Luo, Justin Johnson, Honglak Lee

Scalable annotation approaches are crucial for constructing extensive 3D-text datasets, facilitating a broader range of applications.

3D Object Captioning Hallucination +4

Source-Aware Training Enables Knowledge Attribution in Language Models

1 code implementation 1 Apr 2024 Muhammad Khalifa, David Wadden, Emma Strubell, Honglak Lee, Lu Wang, Iz Beltagy, Hao Peng

We investigate the problem of intrinsic source citation, where LLMs are required to cite the pretraining source supporting a generated response.

Data Augmentation

Step-Calibrated Diffusion for Biomedical Optical Image Restoration

2 code implementations 20 Mar 2024 Yiwei Lyu, Sung Jik Cha, Cheng Jiang, Asadur Chowdury, Xinhai Hou, Edward Harake, Akhil Kondepudi, Christian Freudiger, Honglak Lee, Todd C. Hollon

Here, we present Restorative Step-Calibrated Diffusion (RSCD), an unpaired image restoration method that views the image restoration problem as completing the finishing steps of a diffusion-based image generation task.

Image Generation Image Restoration

TOD-Flow: Modeling the Structure of Task-Oriented Dialogues

1 code implementation 7 Dec 2023 Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee

Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.

Dialog Act Classification Response Generation

Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models

no code implementations NeurIPS 2023 Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee

Based on our observation that diffusion models can \emph{project} any sample to an in-distribution sample with similar background information, we propose \emph{Projection Regret (PR)}, an efficient novelty detection method that mitigates the bias of non-semantic information.

Novelty Detection Perceptual Distance

Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination

no code implementations 16 Nov 2023 Nakyeong Yang, Taegwan Kang, JungKyu Choi, Honglak Lee, Kyomin Jung

Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.

Instruction Following Language Modelling

Code Models are Zero-shot Precondition Reasoners

no code implementations 16 Nov 2023 Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee

One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.

Decision Making

From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning

1 code implementation 24 Oct 2023 Zheyuan Zhang, Shane Storks, Fengyuan Hu, Sungryull Sohn, Moontae Lee, Honglak Lee, Joyce Chai

We incorporate these interlinked dual processes in fine-tuning and in-context learning with PLMs, applying them to two language understanding tasks that require coherent physical commonsense reasoning.

In-Context Learning Physical Commonsense Reasoning

CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation

1 code implementation NeurIPS 2023 Sihan Xu, Ziqiao Ma, Yidong Huang, Honglak Lee, Joyce Chai

Our empirical studies show that CycleNet is superior in translation consistency and quality, and can generate high-quality images for out-of-domain distributions with a simple change of the textual prompt.

2k Image Generation +2

3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation

no code implementations 8 Sep 2023 Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee

Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.

Denoising Knowledge Distillation +4

Go Beyond Imagination: Maximizing Episodic Reachability with World Models

1 code implementation 25 Aug 2023 Yao Fu, Run Peng, Honglak Lee

Efficient exploration is a challenging topic in reinforcement learning, especially for sparse reward tasks.

Efficient Exploration

Exploring Demonstration Ensembling for In-context Learning

1 code implementation 17 Aug 2023 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

The standard approach for ICL is to prompt the LM with concatenated demonstrations followed by the test input.

In-Context Learning

Story Visualization by Online Text Augmentation with Context Memory

1 code implementation ICCV 2023 Daechul Ahn, Daneul Kim, Gwangmo Song, Seung Hwan Kim, Honglak Lee, Dongyeop Kang, Jonghyun Choi

Story visualization (SV) is a challenging text-to-image generation task due to the difficulty of not only rendering visual details from the text descriptions but also encoding a long-term context across multiple sentences.

Sentence Story Visualization +2

Fine-grained Text Style Transfer with Diffusion-Based Language Models

1 code implementation 31 May 2023 Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee

Diffusion probabilistic models have shown great success in generating high-quality images controllably, and researchers have tried to bring this controllability into the text generation domain.

Style Transfer Text Style Transfer

GRACE: Discriminator-Guided Chain-of-Thought Reasoning

1 code implementation 24 May 2023 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To address this issue, we propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the decoding process towards producing correct reasoning steps.

GSM8K Math

A Picture is Worth a Thousand Words: Language Models Plan from Pixels

no code implementations 16 Mar 2023 Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee

Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.

Preference Transformer: Modeling Human Preferences using Transformers for RL

1 code implementation 2 Mar 2023 Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.

Decision Making Reinforcement Learning (RL)

Multimodal Subtask Graph Generation from Instructional Videos

no code implementations 17 Feb 2023 Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee

Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).

Graph Generation

Composing Task Knowledge with Modular Successor Feature Approximators

1 code implementation 28 Jan 2023 Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak Lee, Satinder Singh

Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior.

Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers

no code implementations 27 Jan 2023 Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee

Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.

Image Classification

Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching

no code implementations 7 Jan 2023 Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee

Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.

Language Modelling Self-Supervised Learning

Neural Shape Compiler: A Unified Framework for Transforming between Text, Point Cloud, and Program

no code implementations 25 Dec 2022 Tiange Luo, Honglak Lee, Justin Johnson

On Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks.

Point Cloud Completion

Significantly Improving Zero-Shot X-ray Pathology Classification via Fine-tuning Pre-trained Image-Text Encoders

no code implementations 14 Dec 2022 Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi

However, large-scale and high-quality data to train powerful neural networks are rare in the medical domain as the labeling must be done by qualified experts.

Classification Contrastive Learning +2

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

1 code implementation 27 Oct 2022 Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong

The forward and backward costs are thus linear in the number of edges, which each attention head can choose flexibly based on the input.

Stochastic Block Model

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

no code implementations 27 Sep 2022 Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.

Grouping-matrix based Graph Pooling with Adaptive Number of Clusters

no code implementations 7 Sep 2022 Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee

Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.

Binary Classification Molecular Property Prediction +2

Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks

no code implementations 19 Jul 2022 Yijie Guo, Qiucheng Wu, Honglak Lee

Meta reinforcement learning (meta-RL) aims to learn a policy that solves a set of training tasks simultaneously and quickly adapts to new tasks.

Efficient Exploration Meta Reinforcement Learning +2

Pure Transformers are Powerful Graph Learners

2 code implementations 6 Jul 2022 Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.

Graph Learning Graph Regression +1

Towards More Objective Evaluation of Class Incremental Learning: Representation Learning Perspective

no code implementations 16 Jun 2022 Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.

Class Incremental Learning Incremental Learning +2

Few-shot Reranking for Multi-hop QA via Language Model Prompting

2 code implementations 25 May 2022 Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang

To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.

Open-Domain Question Answering Passage Re-Ranking +2

Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization

no code implementations 25 May 2022 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Lyubing Qiang, Izzeddin Gur, Aleksandra Faust, Honglak Lee

Unlike previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks, and uses it as a prior to improve task inference at test time.

Hierarchical Reinforcement Learning Meta Reinforcement Learning +2

RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

no code implementations 14 May 2022 Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee

We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.

Image Harmonization

Learning Parameterized Task Structure for Generalization to Unseen Entities

1 code implementation 28 Mar 2022 Anthony Z. Liu, Sungryull Sohn, Mahdi Qazwini, Honglak Lee

These subtasks are defined in terms of entities (e.g., "apple", "pear") that can be recombined to form new subtasks (e.g., "pickup apple" and "pickup pear").

Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution

1 code implementation 15 Mar 2022 Jinsu Yoo, TaeHoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim

Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods.

Image Restoration Super-Resolution

Lipschitz-constrained Unsupervised Skill Discovery

1 code implementation ICLR 2022 Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim

To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills.

Environment Generation for Zero-Shot Compositional Reinforcement Learning

1 code implementation NeurIPS 2021 Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust

We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.

Navigate reinforcement-learning +1

Improving Transferability of Representations via Augmentation-Aware Self-Supervision

2 code implementations NeurIPS 2021 Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.

Representation Learning Transfer Learning

Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning

1 code implementation NeurIPS 2021 Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee

SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state-novelty and to enable high-level planning by abstracting the state-space as a non-parametric landmark-based graph.

Efficient Exploration reinforcement-learning +1

Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks

1 code implementation 13 Jul 2021 Sungryull Sohn, Sungtae Lee, Jongwook Choi, Harm van Seijen, Mehdi Fatemi, Honglak Lee

We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the agent's trajectory that improves the sample efficiency in sparse-reward MDPs.

Continuous Control reinforcement-learning +1

Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning

no code implementations 2 Jun 2021 Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu

Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.

reinforcement-learning Reinforcement Learning (RL) +1

Pathdreamer: A World Model for Indoor Navigation

1 code implementation ICCV 2021 Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson

People navigating in unfamiliar buildings take advantage of myriad visual, spatial and semantic cues to efficiently achieve their navigation goals.

Semantic Segmentation Vision and Language Navigation

Revisiting Hierarchical Approach for Persistent Long-Term Video Prediction

1 code implementation ICLR 2021 Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong

Despite the recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds), while extrapolating to a longer future quickly leads to a breakdown in structure and content.

Translation Video Prediction

Adversarial Environment Generation for Learning to Navigate the Web

1 code implementation 2 Mar 2021 Izzeddin Gur, Natasha Jaques, Kevin Malta, Manoj Tiwari, Honglak Lee, Aleksandra Faust

The regret objective trains the adversary to design a curriculum of environments that are "just-the-right-challenge" for the navigator agents; our results show that over time, the adversary learns to generate increasingly complex web navigation tasks.

Benchmarking Decision Making +2

Evolving Reinforcement Learning Algorithms

5 code implementations ICLR 2021 John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust

Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.

Atari Games Meta-Learning +2

Batch Reinforcement Learning Through Continuation Method

no code implementations ICLR 2021 Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, Minmin Chen

Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions.

reinforcement-learning Reinforcement Learning (RL)

Demystifying Loss Functions for Classification

no code implementations 1 Jan 2021 Simon Kornblith, Honglak Lee, Ting Chen, Mohammad Norouzi

It is common to use the softmax cross-entropy loss to train neural networks on classification datasets where a single class label is assigned to each example.

Classification General Classification +1

Few-shot Sequence Learning with Transformers

no code implementations 17 Dec 2020 Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam

In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.

Few-Shot Learning

Ode to an ODE

no code implementations NeurIPS 2020 Krzysztof M. Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani

We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).

Text-to-Image Generation Grounded by Fine-Grained User Attention

no code implementations 7 Nov 2020 Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang

Localized Narratives is a dataset with detailed natural language descriptions of images paired with mouse traces that provide a sparse, fine-grained visual grounding for phrases.

Position Retrieval +3

Why Do Better Loss Functions Lead to Less Transferable Features?

no code implementations NeurIPS 2021 Simon Kornblith, Ting Chen, Honglak Lee, Mohammad Norouzi

We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when networks are fully fine-tuned on the new tasks.

General Classification Image Classification

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment

no code implementations 28 Oct 2020 Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.

Object Reinforcement Learning (RL) +1

Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning

1 code implementation NeurIPS 2020 Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang

It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories.

Model-based Reinforcement Learning reinforcement-learning +1

Text as Neural Operator: Image Manipulation by Text Instruction

1 code implementation 11 Aug 2020 Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Weilong Yang, Honglak Lee, Irfan Essa

In recent years, text-guided image manipulation has gained increasing attention in the multimedia and computer vision community.

Conditional Image Generation Image Captioning +2

Understanding and Diagnosing Vulnerability under Adversarial Attacks

no code implementations 17 Jul 2020 Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash

Moreover, we design the first diagnostic method to quantify the vulnerability contributed by each layer, which can be used to identify vulnerable parts of model architectures.

Classification General Classification

CompressNet: Generative Compression at Extremely Low Bitrates

no code implementations 14 Jun 2020 Suraj Kiran Raman, Aditya Ramesh, Vijayakrishna Naganoor, Shubham Dash, Giridharan Kumaravelu, Honglak Lee

Compressing images at extremely low bitrates (< 0.1 bpp) has always been a challenging task, since the quality of reconstruction degrades significantly due to the strong constraint imposed on the number of bits allocated for the compressed data.

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

2 code implementations ICML 2020 Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.

Model-based Reinforcement Learning reinforcement-learning +1

Time Dependence in Non-Autonomous Neural ODEs

no code implementations ICLR Workshop DeepDiffEq 2019 Jared Quincy Davis, Krzysztof Choromanski, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosterov, Adrian Weller, Ameesh Makadia, Vikas Sindhwani

Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation.

Image Classification Video Prediction

Improved Consistency Regularization for GANs

no code implementations 11 Feb 2020 Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang

Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.

Image Generation

BRPO: Batch Residual Policy Optimization

no code implementations 8 Feb 2020 Sungryull Sohn, Yin-Lam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier

In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by some maximum degree that is the same at each state.

reinforcement-learning Reinforcement Learning (RL)

High-Fidelity Synthesis with Disentangled Representation

2 code implementations ECCV 2020 Wonkwang Lee, Donggyun Kim, Seunghoon Hong, Honglak Lee

Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to the state-of-the-art methods using the disentangled representation.

Disentanglement Generative Adversarial Network +2

Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies

1 code implementation ICLR 2020 Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent.

Efficient Exploration Meta Reinforcement Learning +4

Efficient Adversarial Training with Transferable Adversarial Examples

2 code implementations CVPR 2020 Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash

Adversarial training is an effective defense method to protect classification models against adversarial attacks.

How Should an Agent Practice?

no code implementations 15 Dec 2019 Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, Satinder Singh

We present a method for learning intrinsic reward functions to drive the learning of an agent during periods of practice in which extrinsic task rewards are not available.

High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks

no code implementations NeurIPS 2019 Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee

Predicting future video frames is extremely challenging, as there are many factors of variation that make up the dynamics of how frames change through time.

Inductive Bias Optical Flow Estimation +2

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

2 code implementations ICLR 2020 Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to trained environments), particularly when they are trained on high-dimensional state spaces, such as images.

Data Augmentation reinforcement-learning +1

Distilling Effective Supervision from Severe Label Noise

2 code implementations CVPR 2020 Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister

For instance, on CIFAR100 with a $40\%$ uniform noise ratio and only 10 trusted labeled data per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, where the error rate is only $1.4\%$ higher than a neural network trained without label noise.

Image Classification

Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks

no code implementations 25 Sep 2019 Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee

We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.

Imitation Learning

Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards

no code implementations NeurIPS 2020 Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee

Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow.

Efficient Exploration Imitation Learning +1

Data-Efficient Learning for Sim-to-Real Robotic Grasping using Deep Point Cloud Prediction Networks

no code implementations 21 Jun 2019 Xinchen Yan, Mohi Khansari, Jasmine Hsu, Yuanzheng Gong, Yunfei Bai, Sören Pirk, Honglak Lee

Training a deep network policy for robot manipulation is notoriously costly and time consuming as it depends on collecting a significant amount of real world data.

3D Shape Representation Object +2

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

1 code implementation 19 Jun 2019 Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li

In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate "unrestricted adversarial examples".

Attribute Face Recognition +1

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking Reading Comprehension

Similarity of Neural Network Representations Revisited

9 code implementations ICML 2019 Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton

We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation.
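The similarity index introduced in this paper is centered kernel alignment (CKA). As a rough illustration of the linear variant, here is a minimal NumPy sketch (the function name is illustrative and this is not the authors' released code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x d1) and
    Y (n x d2) computed over the same n examples."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

A useful property of this index is invariance to orthogonal transformations and isotropic scaling of either representation, so `linear_cka(X, X @ Q)` equals 1 for any orthogonal `Q`.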

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild

1 code implementation ICCV 2019 Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee

Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.

Class Incremental Learning Incremental Learning

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.

Diversity-Sensitive Conditional Generative Adversarial Networks

no code implementations ICLR 2019 Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee

We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).

Generative Adversarial Network Image Inpainting +3

Generative Adversarial Self-Imitation Learning

no code implementations ICLR 2019 Yijie Guo, Junhyuk Oh, Satinder Singh, Honglak Lee

This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate past good trajectories via a generative adversarial imitation learning framework.

Imitation Learning reinforcement-learning +1

Contingency-Aware Exploration in Reinforcement Learning

no code implementations ICLR 2019 Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, Honglak Lee

This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning.

Montezuma's Revenge reinforcement-learning +1

MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics

1 code implementation ECCV 2018 Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, Honglak Lee

Our model jointly learns a feature embedding for motion modes (that the motion sequence can be reconstructed from) and a feature transformation that represents the transition of one motion mode to the next motion mode.

Human Dynamics Human Pose Forecasting +1

Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies

1 code implementation NeurIPS 2018 Sungryull Sohn, Junhyuk Oh, Honglak Lee

We introduce a new RL problem where the agent is required to generalize to a previously-unseen environment characterized by a subtask graph which describes a set of subtasks and their dependencies.

Hierarchical Reinforcement Learning Network Embedding +3

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

4 code implementations NeurIPS 2018 Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

Class Incremental Learning Incremental Learning +1
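The unified detector in this paper scores inputs by their Mahalanobis distance to class-conditional Gaussians fitted on features with a shared covariance. A minimal NumPy sketch of that scoring idea follows (function names are illustrative; the paper additionally uses input preprocessing and layer ensembling, omitted here):

```python
import numpy as np

def fit_class_gaussians(feats, labels, n_classes):
    # Class-conditional Gaussians with a single shared (tied) covariance,
    # inducing a generative classifier over the feature space.
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    centered = feats - means[labels]
    cov = centered.T @ centered / len(feats)
    return means, np.linalg.pinv(cov)

def mahalanobis_confidence(x, means, inv_cov):
    # Confidence = negative Mahalanobis distance to the closest class mean;
    # low confidence flags out-of-distribution or adversarial inputs.
    diffs = means - x                                  # (C, d)
    d2 = np.einsum("cd,de,ce->c", diffs, inv_cov, diffs)
    return -d2.min()
```

In practice a threshold on this confidence separates in-distribution test samples from outliers.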

Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion

2 code implementations NeurIPS 2018 Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee

Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity.

Continuous Control reinforcement-learning +1

Self-Imitation Learning

4 code implementations ICML 2018 Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions.

Atari Games Imitation Learning
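The core of SIL is an off-policy update that only learns from transitions whose realized return exceeded the current value estimate, via a clipped advantage. A minimal NumPy sketch of those per-transition losses (names illustrative, not the authors' code):

```python
import numpy as np

def sil_losses(log_probs, returns, values):
    # Clipped advantage (R - V)+ : only transitions where the realized
    # return beat the value estimate ("past good decisions") contribute.
    adv = np.maximum(returns - values, 0.0)
    policy_loss = -(log_probs * adv)   # push the policy toward the good action
    value_loss = 0.5 * adv ** 2        # pull V up toward the good return
    return policy_loss, value_loss
```

Transitions with `R <= V` contribute zero to both losses, which is what keeps the imitation restricted to the agent's better-than-expected experiences.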

Hierarchical Long-term Video Prediction without Supervision

no code implementations ICML 2018 Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee

Much recent research has been devoted to video prediction and generation, yet most previous works have demonstrated only limited success in generating videos on short-term horizons.

Decoder Video Prediction

Data-Efficient Hierarchical Reinforcement Learning

13 code implementations NeurIPS 2018 Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine

In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.

Hierarchical Reinforcement Learning reinforcement-learning +1

Neural Kinematic Networks for Unsupervised Motion Retargetting

1 code implementation CVPR 2018 Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.

Unsupervised Discovery of Object Landmarks as Structural Representations

1 code implementation CVPR 2018 Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee

Deep neural networks can model images with rich latent representations, but they cannot naturally conceptualize structures of object categories in a human-perceptible way.

Object Unsupervised Facial Landmark Detection +2

Hierarchical Novelty Detection for Visual Object Recognition

no code implementations CVPR 2018 Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.

Generalized Zero-Shot Learning Novelty Detection +2

An efficient framework for learning sentence representations

6 code implementations ICLR 2018 Lajanugen Logeswaran, Honglak Lee

In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.

General Classification Representation Learning +1

Unsupervised Hierarchical Video Prediction

no code implementations ICLR 2018 Nevan Wichers, Dumitru Erhan, Honglak Lee

Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.

Video Prediction

Neural Task Graph Execution

no code implementations ICLR 2018 Sungryull Sohn, Junhyuk Oh, Honglak Lee

Unlike existing approaches which explicitly describe what the agent should do, our problem only describes properties of subtasks and relationships between them, which requires the agent to perform a complex reasoning to find the optimal subtask to execute.

Reinforcement Learning (RL)

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

3 code implementations ICLR 2018 Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

The problem of detecting whether a test sample is from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.

Addressee and Response Selection in Multi-Party Conversations with Speaker Interaction RNNs

1 code implementation12 Sep 2017 Rui Zhang, Honglak Lee, Lazaros Polymenakos, Dragomir Radev

In this paper, we study the problem of addressee and response selection in multi-party conversations.

Learning 6-DOF Grasping Interaction via Deep Geometry-aware 3D Representations

1 code implementation24 Aug 2017 Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson, Honglak Lee

Our contributions are fourfold: (1) To the best of our knowledge, we present for the first time a method to learn a 6-DOF grasping net from RGBD input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations.

3D Geometry Prediction 3D Shape Modeling +1

Value Prediction Network

2 code implementations NeurIPS 2017 Junhyuk Oh, Satinder Singh, Honglak Lee

This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network.

Atari Games Reinforcement Learning (RL) +1

Decomposing Motion and Content for Natural Video Sequence Prediction

1 code implementation25 Jun 2017 Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.

 Ranked #1 on Video Prediction on KTH (Cond metric)

Decoder Future prediction +1

Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

1 code implementation ICML 2017 Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli

As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks.

reinforcement-learning Reinforcement Learning (RL)

Towards Understanding the Invertibility of Convolutional Neural Networks

no code implementations24 May 2017 Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee

Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible.

Compressive Sensing General Classification

Exploring the structure of a real-time, arbitrary neural artistic stylization network

20 code implementations18 May 2017 Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens

In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair.

Style Transfer

Learning to Generate Long-term Future via Hierarchical Prediction

2 code implementations ICML 2017 Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

To avoid the inherent compounding errors of recursive pixel-level prediction, we propose to first estimate the high-level structure in the input frames, then predict how that structure evolves in the future, and finally, observing a single frame from the past together with the predicted high-level structure, construct the future frames without having to observe any of the pixel-level predictions.

Decoder Video Prediction

Discriminative Bimodal Networks for Visual Localization and Detection with Natural Language Queries

no code implementations CVPR 2017 Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, I-An Huang, Honglak Lee

Our training objective encourages better localization on single images, incorporates text phrases in a broad range, and properly pairs image regions with text phrases into positive and negative examples.

Natural Language Queries Visual Localization

Weakly Supervised Semantic Segmentation using Web-Crawled Videos

no code implementations CVPR 2017 Seunghoon Hong, Donghun Yeo, Suha Kwak, Honglak Lee, Bohyung Han

Our goal is to overcome this limitation with no additional human intervention by retrieving videos relevant to target class labels from a web repository, and generating segmentation labels from the retrieved videos to simulate strong supervision for semantic segmentation.

Image Classification Segmentation +2

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

2 code implementations NeurIPS 2016 Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee

We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes.

3D Object Reconstruction Decoder +1

Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents

3 code implementations NAACL 2016 Rui Zhang, Honglak Lee, Dragomir Radev

Moreover, unlike other CNN-based models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document.

Classification General Classification +4

Deep Variational Canonical Correlation Analysis

no code implementations11 Oct 2016 Weiran Wang, Xinchen Yan, Honglak Lee, Karen Livescu

We present deep variational canonical correlation analysis (VCCA), a deep multi-view learning model that extends the latent variable model interpretation of linear CCA to nonlinear observation models parameterized by deep neural networks.

Multi-View Learning

Learning What and Where to Draw

no code implementations NeurIPS 2016 Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, Honglak Lee

Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers.

Ranked #13 on Text-to-Image Generation on CUB (using extra training data)

Text-to-Image Generation

Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification

no code implementations21 Jun 2016 Yuting Zhang, Kibok Lee, Honglak Lee

Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction.

General Classification Image Classification

Generative Adversarial Text to Image Synthesis

39 code implementations17 May 2016 Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee

Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.

Adversarial Text Text-to-Image Generation

Learning Deep Representations of Fine-grained Visual Descriptions

9 code implementations CVPR 2016 Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee

State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information.

Attribute Image Retrieval +2

Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games

no code implementations24 Apr 2016 Xiaoxiao Guo, Satinder Singh, Richard Lewis, Honglak Lee

We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (a MCTS algorithm).

Atari Games Decision Making

Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units

2 code implementations16 Mar 2016 Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee

Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision.

Deep Visual Analogy-Making

no code implementations NeurIPS 2015 Scott E. Reed, Yi Zhang, Yuting Zhang, Honglak Lee

In addition to identifying the content within a single image, relating images and generating related images are critical tasks for image understanding.

Language Modelling Visual Analogies

Learning Structured Output Representation using Deep Conditional Generative Models

1 code implementation NeurIPS 2015 Kihyuk Sohn, Honglak Lee, Xinchen Yan

The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows a fast prediction using stochastic feed-forward inference.

Semantic Segmentation Structured Prediction
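The "stochastic gradient variational Bayes" training mentioned above rests on the reparameterization trick and a closed-form KL term for diagonal Gaussian posteriors. A minimal NumPy sketch of those two pieces (illustrative names, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): the SGVB trick that lets
    # gradients flow through the sampling step.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    # Closed-form KL( q(z|x, y) || N(0, I) ) for a diagonal Gaussian
    # recognition model, summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
```

In a CVAE the condition (e.g., the input image) is fed to both the recognition and generation networks, and the training objective is the reconstruction term minus this KL.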