1 code implementation • ECCV 2020 • Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li
Deep neural networks (DNNs) have achieved great successes in various vision applications due to their strong expressive power.
1 code implementation • 31 May 2023 • Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee
Diffusion probabilistic models have shown great success in controllably generating high-quality images, and researchers have sought to bring this controllability to the text generation domain.
1 code implementation • 24 May 2023 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
In the context of multi-step reasoning, the probabilities that language models (LMs) assign to solutions are often miscalibrated -- solutions with high probabilities are not always correct.
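A concrete reading of this miscalibration: the probability an LM assigns to a complete solution is just the product of its per-token probabilities, and ranking candidates by this quantity can place wrong solutions above correct ones. For a candidate solution $s = (t_1, \dots, t_n)$ to a question $q$:

```latex
p_\theta(s \mid q) \;=\; \prod_{i=1}^{n} p_\theta\!\left(t_i \mid q,\, t_{<i}\right)
```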
1 code implementation • 23 Mar 2023 • Todd C. Hollon, Cheng Jiang, Asadur Chowdury, Mustafa Nasir-Moin, Akhil Kondepudi, Alexander Aabedi, Arjun Adapa, Wajd Al-Holou, Jason Heth, Oren Sagher, Pedro Lowenstein, Maria Castro, Lisa Irina Wadiura, Georg Widhalm, Volker Neuschmelting, David Reinecke, Niklas von Spreckelsen, Mitchel S. Berger, Shawn L. Hervey-Jumper, John G. Golfinos, Matija Snuderl, Sandra Camelo-Piragua, Christian Freudiger, Honglak Lee, Daniel A. Orringer
Molecular classification has transformed the management of brain tumors by enabling more accurate prognostication and personalized treatment.
no code implementations • 16 Mar 2023 • Anthony Z. Liu, Lajanugen Logeswaran, Sungryull Sohn, Honglak Lee
Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.
no code implementations • 14 Mar 2023 • Hyungjun Lim, Younggwan Kim, Kiho Yeom, Eunjoo Seo, Hoodong Lee, Stanley Jungkyu Choi, Honglak Lee
Self-supervised learning methods that provide generalized speech representations have recently received increasing attention.
no code implementations • CVPR 2023 • Cheng Jiang, Xinhai Hou, Akhil Kondepudi, Asadur Chowdury, Christian W. Freudiger, Daniel A. Orringer, Honglak Lee, Todd C. Hollon
Importantly, sampled patches from WSIs of a patient's tumor are a diverse set of image examples that capture the same underlying cancer diagnosis.
1 code implementation • 2 Mar 2023 • Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.
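A minimal sketch of how a transformer can score trajectory segments for preference learning, assuming the standard Bradley-Terry likelihood over segment pairs (the architecture and sizes below are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

class SegmentScorer(nn.Module):
    """Toy transformer that maps a trajectory segment to a scalar return."""
    def __init__(self, obs_dim, d_model=64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.reward_head = nn.Linear(d_model, 1)

    def forward(self, segment):  # segment: (B, T, obs_dim)
        h = self.encoder(self.embed(segment))               # (B, T, d_model)
        return self.reward_head(h).sum(dim=1).squeeze(-1)   # summed reward, (B,)

def bradley_terry_loss(scorer, seg_a, seg_b, prefers_a):
    """P(a > b) = exp(R_a) / (exp(R_a) + exp(R_b)); cross-entropy on labels.
    prefers_a: (B,) bool tensor, True where segment a was preferred."""
    logits = torch.stack([scorer(seg_a), scorer(seg_b)], dim=1)  # (B, 2)
    target = (~prefers_a).long()  # class 0 when segment a is preferred
    return nn.functional.cross_entropy(logits, target)
```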
no code implementations • 17 Feb 2023 • Lajanugen Logeswaran, Sungryull Sohn, Yunseok Jang, Moontae Lee, Honglak Lee
This work explores the problem of generating task graphs of real-world activities.
no code implementations • 17 Feb 2023 • Yunseok Jang, Sungryull Sohn, Lajanugen Logeswaran, Tiange Luo, Moontae Lee, Honglak Lee
Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).
1 code implementation • 28 Jan 2023 • Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak Lee, Satinder Singh
Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior.
no code implementations • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee
Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.
no code implementations • 7 Jan 2023 • Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee
Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources.
no code implementations • 25 Dec 2022 • Tiange Luo, Honglak Lee, Justin Johnson
On Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks.
no code implementations • 14 Dec 2022 • Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi
However, large-scale and high-quality data to train powerful neural networks are rare in the medical domain as the labeling must be done by qualified experts.
1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.
no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
no code implementations • 7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee
Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.
no code implementations • 19 Jul 2022 • Yijie Guo, Qiucheng Wu, Honglak Lee
Meta reinforcement learning (meta-RL) aims to learn a policy solving a set of training tasks simultaneously and quickly adapting to new tasks.
1 code implementation • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
Ranked #7 on Graph Regression on PCQM4Mv2-LSC
no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon
While the common method for evaluating CIL algorithms is based on average test accuracy for all learned classes, we argue that maximizing accuracy alone does not necessarily lead to effective CIL algorithms.
no code implementations • 16 Jun 2022 • Cheng Jiang, Asadur Chowdury, Xinhai Hou, Akhil Kondepudi, Christian W. Freudiger, Kyle Conway, Sandra Camelo-Piragua, Daniel A. Orringer, Honglak Lee, Todd C. Hollon
Here, we present OpenSRH, the first public dataset of clinical SRH images from 300+ brain tumor patients and 1300+ unique whole slide optical images.
no code implementations • NAACL 2022 • Lajanugen Logeswaran, Yao Fu, Moontae Lee, Honglak Lee
Pre-trained large language models have shown successful progress in many language understanding benchmarks.
no code implementations • 25 May 2022 • Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Lyubing Qiang, Izzeddin Gur, Aleksandra Faust, Honglak Lee
Unlike previous meta-RL methods that try to directly infer an unstructured task embedding, our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure, in the form of a subtask graph, from the training tasks, and uses it as a prior to improve task inference at test time.
Hierarchical Reinforcement Learning • Meta Reinforcement Learning • +2
2 code implementations • 25 May 2022 • Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models for multi-hop path reranking.
no code implementations • 14 May 2022 • Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee
We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.
1 code implementation • 6 Apr 2022 • Jing Yu Koh, Harsh Agrawal, Dhruv Batra, Richard Tucker, Austin Waters, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
1 code implementation • 28 Mar 2022 • Anthony Z. Liu, Sungryull Sohn, Mahdi Qazwini, Honglak Lee
These subtasks are defined in terms of entities (e.g., "apple", "pear") that can be recombined to form new subtasks (e.g., "pickup apple" and "pickup pear").
no code implementations • ICLR 2022 • Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
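A sketch of that confidence-based pseudo-labeling step, assuming the preference predictor outputs a two-way probability per unlabeled segment pair (the threshold below is an illustrative choice):

```python
import torch

def pseudo_label_pairs(pref_probs: torch.Tensor, threshold: float = 0.9):
    """pref_probs: (N, 2) predicted preference probabilities for N unlabeled
    segment pairs. Keep only confident predictions as pseudo-labels."""
    conf, label = pref_probs.max(dim=-1)
    keep = conf >= threshold  # mask of pairs confident enough to use
    return label[keep], keep
```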
1 code implementation • 15 Mar 2022 • Jinsu Yoo, TaeHoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim
Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods.
no code implementations • ICLR 2022 • Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim
To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills.
1 code implementation • NeurIPS 2021 • Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust
We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.
1 code implementation • CVPR 2022 • TaeHoon Kim, Gwangmo Song, Sihaeng Lee, Sangyun Kim, Yewon Seo, Soonyoung Lee, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae
Unlike other models, BiART can distinguish between image (or text) as a conditional reference and a generation target.
Ranked #1 on Image Reconstruction on ImageNet 256x256
1 code implementation • NeurIPS 2021 • Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.
1 code implementation • NeurIPS 2021 • Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho, Honglak Lee
SFL leverages the ability of successor features (SF) to capture transition dynamics, using it to drive exploration by estimating state-novelty and to enable high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
no code implementations • 8 Aug 2021 • Cheng Jiang, Abhishek Bhattacharya, Joseph Linzey, Rushikesh S. Joshi, Sung Jik Cha, Sudharsan Srinivasan, Daniel Alber, Akhil Kondepudi, Esteban Urias, Balaji Pandian, Wajd Al-Holou, Steve Sullivan, B. Gregory Thompson, Jason Heth, Chris Freudiger, Siri Khalsa, Donato Pacione, John G. Golfinos, Sandra Camelo-Piragua, Daniel A. Orringer, Honglak Lee, Todd Hollon
Conclusion: SRH with trained artificial intelligence models can provide rapid and accurate intraoperative analysis of skull base tumor specimens to inform surgical decision-making.
1 code implementation • 13 Jul 2021 • Sungryull Sohn, Sungtae Lee, Jongwook Choi, Harm van Seijen, Mehdi Fatemi, Honglak Lee
We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the agent's trajectory that improves the sample efficiency in sparse-reward MDPs.
no code implementations • 2 Jun 2021 • Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu
Learning to reach goal states and learning diverse skills through mutual information (MI) maximization have been proposed as principled frameworks for self-supervised reinforcement learning, allowing agents to acquire broadly applicable multitask policies with minimal reward engineering.
1 code implementation • ICCV 2021 • Jing Yu Koh, Honglak Lee, Yinfei Yang, Jason Baldridge, Peter Anderson
People navigating in unfamiliar buildings take advantage of myriad visual, spatial and semantic cues to efficiently achieve their navigation goals.
1 code implementation • ICLR 2021 • Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong
Despite the recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds), while extrapolating to a longer future quickly degrades structure and content.
1 code implementation • 2 Mar 2021 • Izzeddin Gur, Natasha Jaques, Kevin Malta, Manoj Tiwari, Honglak Lee, Aleksandra Faust
The regret objective trains the adversary to design a curriculum of environments that are "just-the-right-challenge" for the navigator agents; our results show that over time, the adversary learns to generate increasingly complex web navigation tasks.
2 code implementations • ICLR Workshop SSL-RL 2021 • Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
Recent exploration methods have proven to be a recipe for improving sample-efficiency in deep reinforcement learning (RL).
1 code implementation • CVPR 2021 • Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang
The quality of XMC-GAN's output is a major step up from previous models, as we show on three challenging datasets.
Ranked #22 on Text-to-Image Generation on COCO (using extra training data)
5 code implementations • ICLR 2021 • John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust
Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm.
no code implementations • 1 Jan 2021 • Simon Kornblith, Honglak Lee, Ting Chen, Mohammad Norouzi
It is common to use the softmax cross-entropy loss to train neural networks on classification datasets where a single class label is assigned to each example.
no code implementations • ICLR 2021 • Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, Minmin Chen
Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions.
no code implementations • 17 Dec 2020 • Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam
In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.
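A self-contained sketch of that setup, with a small frozen encoder standing in for the pre-trained model (all names, sizes, and the readout position here are illustrative assumptions):

```python
import torch
import torch.nn as nn

d_model, n_classes, batch = 32, 4, 8
# Frozen stand-in for a pre-trained model.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(d_model, n_classes)
for p in list(backbone.parameters()) + list(head.parameters()):
    p.requires_grad = False

# The only trainable parameter: the embedding of the appended task token.
task_token = nn.Parameter(0.02 * torch.randn(1, 1, d_model))
opt = torch.optim.Adam([task_token], lr=1e-2)

x = torch.randn(batch, 10, d_model)        # toy pre-embedded inputs
y = torch.randint(0, n_classes, (batch,))  # toy labels (the "few examples")
for _ in range(20):
    inp = torch.cat([x, task_token.expand(batch, -1, -1)], dim=1)
    logits = head(backbone(inp)[:, -1])    # read out at the task token
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```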
no code implementations • NeurIPS 2020 • Krzysztof M. Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
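A hedged rendering of that coupling, with notation assumed rather than copied from the paper: the weights $W(t)$ of the main flow are themselves evolved by a second ODE whose right-hand side lies in the tangent space of the orthogonal group,

```latex
\dot{x}(t) = f\big(W(t)\,x(t)\big), \qquad
\dot{W}(t) = W(t)\, b_\psi\big(W(t)\big), \qquad b_\psi(W) \in \mathfrak{so}(d).
```

Because $b_\psi$ is skew-symmetric, $\tfrac{d}{dt}\big(W^\top W\big) = b_\psi^\top W^\top W + W^\top W\, b_\psi = 0$ whenever $W^\top W = I$, so a flow started at $W(0) \in O(d)$ stays on $O(d)$.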
no code implementations • 7 Nov 2020 • Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang
Localized Narratives is a dataset with detailed natural language descriptions of images paired with mouse traces that provide a sparse, fine-grained visual grounding for phrases.
no code implementations • NeurIPS 2021 • Simon Kornblith, Ting Chen, Honglak Lee, Mohammad Norouzi
We show that many objectives lead to statistically significant improvements in ImageNet accuracy over vanilla softmax cross-entropy, but the resulting fixed feature extractors transfer substantially worse to downstream tasks, and the choice of loss has little effect when networks are fully fine-tuned on the new tasks.
no code implementations • 28 Oct 2020 • Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh
In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.
1 code implementation • NeurIPS 2020 • Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang
It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories.
Model-based Reinforcement Learning • reinforcement-learning • +1
3 code implementations • ICLR 2021 • Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data.
1 code implementation • 11 Aug 2020 • Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Weilong Yang, Honglak Lee, Irfan Essa
In recent years, text-guided image manipulation has gained increasing attention in the multimedia and computer vision community.
1 code implementation • NeurIPS 2020 • Kuang-Huei Lee, Ian Fischer, Anthony Liu, Yijie Guo, Honglak Lee, John Canny, Sergio Guadarrama
The Predictive Information is the mutual information between the past and the future, I(X_past; X_future).
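For reference, the same quantity written in terms of entropies:

```latex
I(X_{\text{past}};\, X_{\text{future}}) \;=\; H(X_{\text{future}}) \;-\; H(X_{\text{future}} \mid X_{\text{past}}),
```

i.e., how much knowing the past reduces uncertainty about the future.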
no code implementations • 17 Jul 2020 • Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash
Moreover, we design the first diagnostic method to quantify the vulnerability contributed by each layer, which can be used to identify vulnerable parts of model architectures.
no code implementations • 14 Jun 2020 • Suraj Kiran Raman, Aditya Ramesh, Vijayakrishna Naganoor, Shubham Dash, Giridharan Kumaravelu, Honglak Lee
Compressing images at extremely low bitrates (< 0.1 bpp) has always been a challenging task, since the quality of reconstruction degrades significantly due to the strong constraint imposed on the number of bits allocated for the compressed data.
2 code implementations • ICML 2020 • Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin
Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.
Model-based Reinforcement Learning • reinforcement-learning • +1
no code implementations • ICLR Workshop DeepDiffEq 2019 • Jared Quincy Davis, Krzysztof Choromanski, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosterov, Adrian Weller, Ameesh Makadia, Vikas Sindhwani
Neural Ordinary Differential Equations (ODEs) are elegant reinterpretations of deep networks where continuous time can replace the discrete notion of depth, ODE solvers perform forward propagation, and the adjoint method enables efficient, constant memory backpropagation.
no code implementations • 11 Feb 2020 • Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang
Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.
no code implementations • 8 Feb 2020 • Sungryull Sohn, Yin-Lam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier
In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by some maximum degree that is the same at each state.
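The state-uniform constraint described above is commonly written as a divergence budget shared across all states, e.g. a KL trust region around the behavior policy $\beta$:

```latex
D_{\mathrm{KL}}\big(\pi(\cdot \mid s)\,\|\,\beta(\cdot \mid s)\big) \;\le\; \epsilon \quad \text{for all } s,
```

with a single $\epsilon$ for every state; the entry's point is that such a global budget can be too tight in some states and too loose in others.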
2 code implementations • ECCV 2020 • Wonkwang Lee, Donggyun Kim, Seunghoon Hong, Honglak Lee
Despite the simplicity, we show that the proposed method is highly effective, achieving comparable image generation quality to the state-of-the-art methods using the disentangled representation.
1 code implementation • ICLR 2020 • Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent.
2 code implementations • CVPR 2020 • Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash
Adversarial training is an effective defense method to protect classification models against adversarial attacks.
no code implementations • 15 Dec 2019 • Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, Satinder Singh
We present a method for learning intrinsic reward functions to drive the learning of an agent during periods of practice in which extrinsic task rewards are not available.
no code implementations • NeurIPS 2019 • Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee
Predicting future video frames is extremely challenging, as there are many factors of variation that make up the dynamics of how frames change through time.
no code implementations • ICLR 2020 • Han Zhang, Zizhao Zhang, Augustus Odena, Honglak Lee
Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort.
Ranked #5 on Conditional Image Generation on ArtBench-10 (32x32)
2 code implementations • ICLR 2020 • Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to the environments they were trained on), particularly when they are trained on high-dimensional state spaces, such as images.
2 code implementations • CVPR 2020 • Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister
For instance, on CIFAR100 with a $40\%$ uniform noise ratio and only 10 trusted labeled examples per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, where the error rate is only $1.4\%$ higher than a neural network trained without label noise.
no code implementations • 25 Sep 2019 • Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee
We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards.
no code implementations • 23 Sep 2019 • Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine
Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks.
Hierarchical Reinforcement Learning • reinforcement-learning • +1
no code implementations • NeurIPS 2020 • Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee
Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow.
no code implementations • 21 Jun 2019 • Xinchen Yan, Mohi Khansari, Jasmine Hsu, Yuanzheng Gong, Yunfei Bai, Sören Pirk, Honglak Lee
Training a deep network policy for robot manipulation is notoriously costly and time consuming as it depends on collecting a significant amount of real world data.
1 code implementation • NeurIPS 2019 • Matthias Minderer, Chen Sun, Ruben Villegas, Forrester Cole, Kevin Murphy, Honglak Lee
Extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning.
Ranked #11 on Video Prediction on KTH
1 code implementation • 19 Jun 2019 • Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li
In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate "unrestricted adversarial examples".
3 code implementations • ACL 2019 • Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee
First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.
no code implementations • ICLR 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.
no code implementations • ICLR 2019 • Tianchen Zhao, Dejiao Zhang, Zeyu Sun, Honglak Lee
We formulate an information-based optimization problem for supervised classification.
8 code implementations • ICML 2019 • Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton
We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation.
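The index being described is centered kernel alignment (CKA), whose linear form reduces to a few matrix norms; a minimal NumPy sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representations of the same n examples:
    X is (n, p1), Y is (n, p2). Invariant to orthogonal transforms and
    isotropic scaling, but not to arbitrary invertible linear maps."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```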
1 code implementation • ICCV 2019 • Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee
Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.
1 code implementation • 31 Jan 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.
no code implementations • ICLR 2019 • Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee
We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).
no code implementations • ICLR 2019 • Yijie Guo, Junhyuk Oh, Satinder Singh, Honglak Lee
This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate past good trajectories via generative adversarial imitation learning framework.
8 code implementations • 12 Nov 2018 • Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson
Planning has been very successful for control tasks with known environment dynamics.
Ranked #2 on Continuous Control on DeepMind Cheetah Run (Images)
no code implementations • ICLR 2019 • Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, Honglak Lee
This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning.
Ranked #5 on Atari Games on Atari 2600 Montezuma's Revenge
1 code implementation • NeurIPS 2018 • Lajanugen Logeswaran, Honglak Lee, Samy Bengio
We propose an adversarial loss to enforce generated samples to be attribute compatible and realistic.
7 code implementations • ICLR 2019 • Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine
We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning.
1 code implementation • NeurIPS 2018 • Seunghoon Hong, Xinchen Yan, Thomas Huang, Honglak Lee
In this work, we present a novel hierarchical framework for semantic image manipulation.
1 code implementation • ECCV 2018 • Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, Honglak Lee
Our model jointly learns a feature embedding for motion modes (that the motion sequence can be reconstructed from) and a feature transformation that represents the transition of one motion mode to the next motion mode.
Ranked #7 on Human Pose Forecasting on Human3.6M (ADE metric)
1 code implementation • NeurIPS 2018 • Sungryull Sohn, Junhyuk Oh, Honglak Lee
We introduce a new RL problem where the agent is required to generalize to a previously-unseen environment characterized by a subtask graph which describes a set of subtasks and their dependencies.
5 code implementations • NeurIPS 2018 • Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
Ranked #2 on Out-of-Distribution Detection on MS-1M vs. IJB-C
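A sketch of the detector's scoring rule, assuming per-class feature means and a shared (tied) covariance have been estimated from training features:

```python
import numpy as np

def mahalanobis_confidence(feat, class_means, cov_inv):
    """Score = negative squared Mahalanobis distance to the closest
    class-conditional Gaussian; low scores flag OOD or adversarial inputs."""
    dists = [float((feat - mu) @ cov_inv @ (feat - mu)) for mu in class_means]
    return -min(dists)
```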
2 code implementations • NeurIPS 2018 • Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, Honglak Lee
Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity.
4 code implementations • ICML 2018 • Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee
This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions.
Ranked #3 on Atari Games on Atari 2600 Atlantis
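A sketch of the self-imitation objective, under the common formulation in which a stored transition is imitated only when its return exceeds the current value estimate (the value-loss weight below is illustrative):

```python
import torch

def sil_losses(log_prob, value, returns, value_coef=0.01):
    """(R - V)+ gates the imitation signal, so only past decisions that
    outperformed the critic's estimate are reproduced."""
    gap = (returns - value).clamp(min=0)
    policy_loss = -(log_prob * gap.detach()).mean()
    value_loss = 0.5 * (gap ** 2).mean()
    return policy_loss + value_coef * value_loss
```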
no code implementations • ICML 2018 • Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee
Much of recent research has been devoted to video prediction and generation, yet most of the previous works have demonstrated only limited success in generating videos on short-term horizons.
11 code implementations • NeurIPS 2018 • Ofir Nachum, Shixiang Gu, Honglak Lee, Sergey Levine
In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.
Hierarchical Reinforcement Learning • reinforcement-learning • +1
1 code implementation • CVPR 2018 • Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee
We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.
1 code implementation • CVPR 2018 • Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee
Deep neural networks can model images with rich latent representations, but they cannot naturally conceptualize structures of object categories in a human-perceptible way.
Unsupervised Facial Landmark Detection • Unsupervised Human Pose Estimation • +1
no code implementations • CVPR 2018 • Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee
The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.
6 code implementations • ICLR 2018 • Lajanugen Logeswaran, Honglak Lee
In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.
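A sketch of the kind of objective such a framework uses: encode sentences, then classify the true context sentence among candidates by inner-product score (details assumed, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def context_classification_loss(sent_emb, cand_embs, target_idx):
    """sent_emb: (B, D) encoded input sentences; cand_embs: (K, D) encoded
    candidate context sentences; target_idx: (B,) index of the true context."""
    scores = sent_emb @ cand_embs.T  # (B, K) inner-product scores
    return F.cross_entropy(scores, target_idx)
```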
no code implementations • CVPR 2018 • Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee
We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout.
no code implementations • ICLR 2018 • Sungryull Sohn, Junhyuk Oh, Honglak Lee
Unlike existing approaches which explicitly describe what the agent should do, our problem only describes properties of subtasks and relationships between them, which requires the agent to perform a complex reasoning to find the optimal subtask to execute.
no code implementations • ICLR 2018 • Nevan Wichers, Dumitru Erhan, Honglak Lee
Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.
3 code implementations • ICLR 2018 • Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
The problem of detecting whether a test sample is from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.
1 code implementation • 12 Sep 2017 • Rui Zhang, Honglak Lee, Lazaros Polymenakos, Dragomir Radev
In this paper, we study the problem of addressee and response selection in multi-party conversations.
1 code implementation • 24 Aug 2017 • Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson, Honglak Lee
Our contributions are fourfold: (1) To the best of our knowledge, we are the first to present a method for learning a 6-DOF grasping net from RGBD input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations.
2 code implementations • NeurIPS 2017 • Junhyuk Oh, Satinder Singh, Honglak Lee
This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network.
Ranked #9 on Atari Games on Atari 2600 Krull
1 code implementation • 25 Jun 2017 • Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee
To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.
Ranked #1 on Video Prediction on KTH (Cond metric)
1 code implementation • ICML 2017 • Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli
As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks.
no code implementations • 24 May 2017 • Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee
Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible.
18 code implementations • 18 May 2017 • Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens
In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair.
2 code implementations • ICML 2017 • Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee
To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions.
no code implementations • CVPR 2017 • Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, I-An Huang, Honglak Lee
Our training objective encourages better localization on single images, incorporates text phrases in a broad range, and properly pairs image regions with text phrases into positive and negative examples.
no code implementations • CVPR 2017 • Seunghoon Hong, Donghun Yeo, Suha Kwak, Honglak Lee, Bohyung Han
Our goal is to overcome this limitation with no additional human intervention by retrieving videos relevant to target class labels from web repository, and generating segmentation labels from the retrieved videos to simulate strong supervision for semantic segmentation.
Image Classification • Weakly supervised Semantic Segmentation • +1
2 code implementations • NeurIPS 2016 • Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee
We demonstrate the ability of the model in generating a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.
2 code implementations • 8 Nov 2016 • Lajanugen Logeswaran, Honglak Lee, Dragomir Radev
Modeling the structure of coherent texts is a key NLP problem.
2 code implementations • NAACL 2016 • Rui Zhang, Honglak Lee, Dragomir Radev
Moreover, unlike other CNN-based models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document.
no code implementations • 11 Oct 2016 • Weiran Wang, Xinchen Yan, Honglak Lee, Karen Livescu
We present deep variational canonical correlation analysis (VCCA), a deep multi-view learning model that extends the latent variable model interpretation of linear CCA to nonlinear observation models parameterized by deep neural networks.
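A hedged sketch of the variational bound such a model optimizes, assuming a view-one encoder $q_\phi(z \mid x)$ and decoders for both views $x$ and $y$:

```latex
\mathcal{L}(x, y) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z) + \log p_\theta(y \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big).
```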
no code implementations • NeurIPS 2016 • Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, Honglak Lee
Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers.
Ranked #11 on Text-to-Image Generation on CUB (using extra training data)
no code implementations • 21 Jun 2016 • Yuting Zhang, Kibok Lee, Honglak Lee
Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction.
no code implementations • 30 May 2016 • Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, Honglak Lee
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world).
9 code implementations • CVPR 2016 • Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee
State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information.
40 code implementations • 17 May 2016 • Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.
no code implementations • 24 Apr 2016 • Xiaoxiao Guo, Satinder Singh, Richard Lewis, Honglak Lee
We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (an MCTS algorithm).
1 code implementation • 16 Mar 2016 • Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee
Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision.
3 code implementations • CVPR 2016 • Jimei Yang, Brian Price, Scott Cohen, Honglak Lee, Ming-Hsuan Yang
We develop a deep learning algorithm for contour detection with a fully convolutional encoder-decoder network.
no code implementations • NeurIPS 2015 • Jimei Yang, Scott Reed, Ming-Hsuan Yang, Honglak Lee
An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image.
no code implementations • CVPR 2016 • Seunghoon Hong, Junhyuk Oh, Bohyung Han, Honglak Lee
We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN).
Foreground Segmentation • Weakly supervised Semantic Segmentation • +1
1 code implementation • 2 Dec 2015 • Xinchen Yan, Jimei Yang, Kihyuk Sohn, Honglak Lee
This paper investigates a novel problem of generating images from visual attributes.
no code implementations • NeurIPS 2015 • Scott E. Reed, Yi Zhang, Yuting Zhang, Honglak Lee
In addition to identifying the content within a single image, relating images and generating related images are critical tasks for image understanding.
1 code implementation • NeurIPS 2015 • Kihyuk Sohn, Honglak Lee, Xinchen Yan
The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows a fast prediction using stochastic feed-forward inference.
Ranked #1 on Structured Prediction on MNIST
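For context, the stochastic gradient variational Bayes objective for a conditional model of output $y$ given input $x$ with latent $z$ takes the standard conditional-ELBO form:

```latex
\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(y \mid x, z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\big).
```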
1 code implementation • NeurIPS 2015 • Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh
Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames.
no code implementations • CVPR 2015 • Yuting Zhang, Kihyuk Sohn, Ruben Villegas, Gang Pan, Honglak Lee
Object detection systems based on the deep convolutional neural network (CNN) have recently made groundbreaking advances on several object detection benchmarks.
2 code implementations • 20 Dec 2014 • Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, Andrew Rabinovich
On MNIST handwritten digits, we show that our model is robust to label corruption.
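That robustness comes from blending the given (possibly corrupted) labels with the model's own predictions; a sketch of the "soft" variant of this bootstrapping loss (the blend weight below is an illustrative choice):

```python
import torch
import torch.nn.functional as F

def bootstrap_soft_loss(logits, targets_onehot, beta=0.95):
    """Cross-entropy against a convex blend of the given one-hot labels and
    the model's current predictions, down-weighting noisy labels."""
    probs = F.softmax(logits, dim=-1).detach()
    blended = beta * targets_onehot + (1 - beta) * probs
    return -(blended * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```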
no code implementations • NeurIPS 2014 • Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, Xiaoshi Wang
The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection.
no code implementations • NeurIPS 2014 • Kihyuk Sohn, Wenling Shang, Honglak Lee
Deep learning has been successfully applied to multimodal representation learning problems, with a common strategy being to learn joint representations that are shared across multiple modalities on top of layers of modality-specific networks.
2 code implementations • CVPR 2015 • Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, Bernt Schiele
Image classification has advanced significantly in recent years with the availability of large-scale image sets.
no code implementations • NeurIPS 2013 • Forest Agostinelli, Michael R. Anderson, Honglak Lee
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images.
no code implementations • CVPR 2013 • Roni Mittelman, Honglak Lee, Benjamin Kuipers, Silvio Savarese
In order to address this issue, we propose a weakly supervised approach to learn mid-level features, where only class-level supervision is provided during training.
no code implementations • CVPR 2013 • Andrew Kae, Kihyuk Sohn, Honglak Lee, Erik Learned-Miller
Although the CRF is a good baseline labeler, we show how an RBM can be added to the architecture to provide a global shape bias that complements the local modeling provided by the CRF.
no code implementations • 16 Jan 2013 • Ian Lenz, Honglak Lee, Ashutosh Saxena
We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects.
Ranked #7 on Robotic Grasping on Cornell Grasp Dataset
no code implementations • NeurIPS 2012 • Gary Huang, Marwan Mattar, Honglak Lee, Erik G. Learned-Miller
Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification.
no code implementations • NeurIPS 2009 • Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng
Our evaluation metrics can also be used to evaluate future work in unsupervised deep learning, and thus help the development of future algorithms.
no code implementations • NeurIPS 2009 • Honglak Lee, Peter Pham, Yan Largman, Andrew Y. Ng
In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks.
no code implementations • NeurIPS 2007 • Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng
This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.