Search Results for author: Nojun Kwak

Found 101 papers, 32 papers with code

SeqHAND: RGB-Sequence-Based 3D Hand Pose and Shape Estimation

no code implementations ECCV 2020 John Yang, Hyung Jin Chang, Seungeui Lee, Nojun Kwak

In this paper, we attempt to not only consider the appearance of a hand but incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation performance, which leads to the necessity of a large scale dataset with sequential RGB hand images.

3D Hand Pose Estimation

Korean Language Modeling via Syntactic Guide

no code implementations LREC 2022 Hyeondey Kim, Seonhoon Kim, Inho Kang, Nojun Kwak, Pascale Fung

Our experiment results prove that the proposed methods improve the model performance of the investigated Korean language understanding tasks.

Language Modelling POS

FEED: Feature-level Ensemble Effect for knowledge Distillation

no code implementations ICLR 2019 SeongUk Park, Nojun Kwak

This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work of factor transfer.

Knowledge Distillation Test +1

Fast Sun-aligned Outdoor Scene Relighting based on TensoRF

no code implementations 7 Nov 2023 Yeonjin Chang, Yearim Kim, Seunghyeon Seo, Jung Yi, Nojun Kwak

In this work, we introduce our method of outdoor scene relighting for Neural Radiance Fields (NeRF) named Sun-aligned Relighting TensoRF (SR-TensoRF).

ConcatPlexer: Additional Dim1 Batching for Faster ViTs

no code implementations 22 Aug 2023 Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak

Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications.

Advancing Beyond Identification: Multi-bit Watermark for Large Language Models

no code implementations 1 Aug 2023 KiYoon Yoo, Wonhyuk Ahn, Nojun Kwak

We propose a method to tackle misuses of large language models beyond the identification of machine-generated text.

Language Modelling

FlipNeRF: Flipped Reflection Rays for Few-shot Novel View Synthesis

1 code implementation ICCV 2023 Seunghyeon Seo, Yeonjin Chang, Nojun Kwak

Neural Radiance Field (NeRF) has been a mainstream in novel view synthesis with its remarkable quality of rendered images and simple architecture.

Depth Estimation Novel View Synthesis

AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion

no code implementations 6 May 2023 Seungwoo Lee, Chaerin Kong, Donghyeon Jeon, Nojun Kwak

Recent advances in diffusion models have showcased promising results in the text-to-video (T2V) synthesis task.

Robust Multi-bit Natural Language Watermarking through Invariant Features

1 code implementation 3 May 2023 KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak

Recent years have witnessed a proliferation of valuable original natural language contents found in subscription-based media outlets, web novel platforms, and outputs of large language models.

Active Semi-Supervised Learning by Exploring Per-Sample Uncertainty and Consistency

no code implementations 15 Mar 2023 Jaeseung Lim, Jongkeun Na, Nojun Kwak

Active Learning (AL) and Semi-supervised Learning are two techniques that have been studied to reduce the high cost of deep learning by using a small amount of labeled data and a large amount of unlabeled data.

Active Learning

MDPose: Real-Time Multi-Person Pose Estimation via Mixture Density Model

no code implementations 17 Feb 2023 Seunghyeon Seo, Jaeyoung Yoo, Jihye Hwang, Nojun Kwak

In this work, we propose a novel framework of single-stage instance-aware pose estimation by modeling the joint distribution of human keypoints with a mixture density model, termed as MDPose.

Keypoint Estimation Multi-Person Pose Estimation

MixNeRF: Modeling a Ray with Mixture Density for Novel View Synthesis from Sparse Inputs

1 code implementation CVPR 2023 Seunghyeon Seo, Donghoon Han, Yeonjin Chang, Nojun Kwak

In this work, we propose MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model.

Depth Estimation Novel View Synthesis +1
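As a rough sketch of the mixture-density objective this abstract refers to: a ray sample is scored by the negative log-likelihood of a mixture of Gaussians. The 1-D setting and function names below are my own simplification for illustration, not MixNeRF's actual implementation (which models RGB values of samples along a ray).

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a 1-D Gaussian N(mean, std^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def mixture_nll(x, weights, means, stds):
    """Negative log-likelihood of x under a 1-D Gaussian mixture.

    `weights` must sum to 1; each component i is N(means[i], stds[i]^2).
    Minimizing this over samples along a ray is the kind of objective
    a mixture-density training strategy optimizes.
    """
    likelihood = sum(w * gaussian_pdf(x, m, s)
                     for w, m, s in zip(weights, means, stds))
    return -math.log(likelihood)
```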

Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance

no code implementations 10 Feb 2023 Chaerin Kong, Nojun Kwak

Recent years have witnessed astonishing advances in the field of multimodal representation learning, with contrastive learning being the cornerstone for major breakthroughs.

Contrastive Learning Representation Learning

Unifying Vision-Language Representation Space with Single-tower Transformer

no code implementations 21 Nov 2022 Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak

Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.

Contrastive Learning Object Localization +3
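The contrastive setup described above (learning invariant features from two related representations) is commonly trained with an InfoNCE-style loss: pull an anchor toward its related "positive" view and push it away from unrelated "negatives". A minimal sketch under that assumption follows; the function names and toy vectors are illustrative, not the paper's code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax probability
    assigned to the positive view among all candidates."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

The loss is near zero when the positive aligns with the anchor and large when a negative does instead.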

Leveraging Off-the-shelf Diffusion Model for Multi-attribute Fashion Image Manipulation

no code implementations 12 Oct 2022 Chaerin Kong, Donghyeon Jeon, Ohjoon Kwon, Nojun Kwak

Fashion attribute editing is a task that aims to convert the semantic attributes of a given fashion image while preserving the irrelevant regions.

Image Manipulation

Towards Efficient Neural Scene Graphs by Learning Consistency Fields

no code implementations 9 Oct 2022 Yeji Song, Chaerin Kong, Seoyoung Lee, Nojun Kwak, Joonseok Lee

Neural Radiance Fields (NeRF) achieves photo-realistic image rendering from novel views, and the Neural Scene Graphs (NSG) (Ost et al., 2021) extends it to dynamic scenes (video) with multiple objects.

Semantics-Guided Object Removal for Facial Images: with Broad Applicability and Robust Style Preservation

no code implementations 29 Sep 2022 Jookyung Song, Yeonjin Chang, SeongUk Park, Nojun Kwak

U-Net, a conventional approach for conditional GANs, retains fine details of unmasked regions, but the style of the reconstructed image is inconsistent with the rest of the original image, and it works robustly only when the occluding object is small enough.

Image Inpainting

Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Few-shot Incremental Image Synthesis

no code implementations 29 Jul 2022 Chaerin Kong, Nojun Kwak

The capacity to learn incrementally from an online stream of data is an envied trait of human learners, as deep neural networks typically suffer from catastrophic forgetting and stability-plasticity dilemma.

Few-Shot Learning Image Generation +1

Unsupervised Domain Adaptation for One-stage Object Detector using Offsets to Bounding Box

no code implementations 20 Jul 2022 Jayeon Yoo, Inseop Chung, Nojun Kwak

Most existing domain adaptive object detection methods exploit adversarial feature alignment to adapt the model to a new domain.

Object Detection +1

End-to-End Multi-Object Detection with a Regularized Mixture Model

no code implementations 18 May 2022 Jaeyoung Yoo, Hojun Lee, Seunghyeon Seo, Inseop Chung, Nojun Kwak

Recent end-to-end multi-object detectors simplify the inference pipeline by removing hand-crafted processes such as non-maximum suppression (NMS).

Density Estimation Object Detection +1

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

no code implementations 29 Apr 2022 KiYoon Yoo, Nojun Kwak

For a less complex dataset, a mere 0.1% of adversary clients is enough to poison the global model effectively.

Federated Learning Model Poisoning +3

Imposing Consistency for Optical Flow Estimation

no code implementations CVPR 2022 Jisoo Jeong, Jamie Menjay Lin, Fatih Porikli, Nojun Kwak

Imposing consistency through proxy tasks has been shown to enhance data-driven learning and enable self-supervision in various tasks.

Optical Flow Estimation Self-Supervised Learning

MatteFormer: Transformer-Based Image Matting via Prior-Tokens

1 code implementation CVPR 2022 Gyutae Park, Sungjoon Son, Jaeyoung Yoo, SeHo Kim, Nojun Kwak

In this paper, we propose a transformer-based image matting model called MatteFormer, which takes full advantage of trimap information in the transformer block.

Image Matting

Pose-MUM : Reinforcing Key Points Relationship for Semi-Supervised Human Pose Estimation

no code implementations 15 Mar 2022 Jongmok Kim, Hwijun Lee, Jaeseung Lim, Jongkeun Na, Nojun Kwak, Jin Young Choi

A well-designed strong-weak augmentation strategy and the stable teacher to generate reliable pseudo labels are essential in the teacher-student framework of semi-supervised learning (SSL).

Pose Estimation Semi-Supervised Human Pose Estimation

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation

no code implementations 3 Mar 2022 KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak

Word-level adversarial attacks have shown success in NLP models, drastically decreasing the performance of transformer-based models in recent years.

Adversarial Defense Density Estimation +3

MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection

1 code implementation CVPR 2022 Jongmok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak

Data augmentation strategy plays a significant role in the SSL framework since it is hard to create a weak-strong augmented input pair without losing label information.

Data Augmentation Object Detection +2

Self-Distilled Self-Supervised Representation Learning

1 code implementation 25 Nov 2021 Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak

Through self-distillation, the intermediate layers are better suited for instance discrimination, making the performance of an early-exited sub-network not much degraded from that of the full network.

Representation Learning Self-Supervised Learning

Few-shot Image Generation with Mixup-based Distance Learning

1 code implementation 23 Nov 2021 Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak

Producing diverse and realistic images with generative models such as GANs typically requires large scale training with vast amount of images.

Image Generation

MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection

1 code implementation 22 Nov 2021 Jongmok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak

Data augmentation strategy plays a significant role in the SSL framework since it is hard to create a weak-strong augmented input pair without losing label information.

Data Augmentation Object Detection +2

Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation

no code implementations 11 Nov 2021 John Yang, Yash Bhalgat, Simyung Chang, Fatih Porikli, Nojun Kwak

While hand pose estimation is a critical component of most interactive extended reality and gesture recognition systems, contemporary approaches are not optimized for computational and memory efficiency.

3D Hand Pose Estimation Gesture Recognition

Self-Evolutionary Optimization for Pareto Front Learning

no code implementations 7 Oct 2021 Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak

Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.

Multi-Task Learning

Few-Shot Object Detection by Attending to Per-Sample-Prototype

no code implementations 16 Sep 2021 Hojun Lee, Myunggi Lee, Nojun Kwak

Second, each support sample is used as a class code to leverage the information by comparing similarities between each support feature and query features.

Few-Shot Object Detection Meta-Learning +1

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights

1 code implementation 10 Sep 2021 Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak

To alleviate this problem, dynamic pruning methods have emerged, which try to find diverse sparsity patterns during training by utilizing Straight-Through-Estimator (STE) to approximate gradients of pruned weights.
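The Straight-Through Estimator mentioned above can be sketched in a few lines: the forward pass uses the pruned (masked) weights, while the gradient is passed "straight through" to the dense weights, so pruned entries keep receiving updates and the sparsity pattern can change during training. This illustrates vanilla STE only, not the refined-gradient scheme the paper proposes; the function name and toy values are my own.

```python
def ste_update(weights, mask, grad_wrt_pruned, lr=0.1):
    """One SGD step with the straight-through estimator.

    Forward: use pruned weights w * mask.
    Backward: apply the gradient w.r.t. the pruned weights directly
    to the dense weights, including positions where mask == 0.
    """
    pruned_forward = [w * m for w, m in zip(weights, mask)]
    dense_updated = [w - lr * g for w, g in zip(weights, grad_wrt_pruned)]
    return pruned_forward, dense_updated
```

Note that the second returned value updates even the masked-out weight, which is exactly what lets dynamic pruning revisit its sparsity pattern.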

Normalization Matters in Weakly Supervised Object Localization

1 code implementation ICCV 2021 Jeesoo Kim, Junsuk Choe, Sangdoo Yun, Nojun Kwak

Weakly-supervised object localization (WSOL) enables finding an object using a dataset without any localization information.

Weakly-Supervised Object Localization

PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation

no code implementations 25 Jun 2021 Jangho Kim, Simyung Chang, Nojun Kwak

Unlike traditional pruning and KD, PQK makes use of unimportant weights pruned in the pruning process to make a teacher network for training a better student network without pre-training the teacher model.

Keyword Spotting Knowledge Distillation +2

LFI-CAM: Learning Feature Importance for Better Visual Explanation

1 code implementation ICCV 2021 Kwang Hee Lee, Chaewon Park, Junghyun Oh, Nojun Kwak

LFI-CAM generates an attention map for visual explanation during forward propagation, at the same time, leverages the attention map to improve the classification performance through the attention mechanism.

Classification Decision Making +3

Prototype-based Personalized Pruning

no code implementations 25 Mar 2021 Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak

We verify the usefulness of PPP on a couple of tasks in computer vision and Keyword spotting.

Keyword Spotting Model Compression

The U-Net based GLOW for Optical-Flow-free Video Interframe Generation

no code implementations 17 Mar 2021 Saem Park, Donghoon Han, Nojun Kwak

Through experiments, we confirmed the feasibility of the proposed algorithm and suggest the U-Net based Generative Flow as a new baseline for video frame interpolation.

Occlusion Handling Optical Flow Estimation +1

Maximizing Cosine Similarity Between Spatial Features for Unsupervised Domain Adaptation in Semantic Segmentation

no code implementations 25 Feb 2021 Inseop Chung, Daesik Kim, Nojun Kwak

We propose a novel method that tackles the problem of unsupervised domain adaptation for semantic segmentation by maximizing the cosine similarity between the source and the target domain at the feature level.

Segmentation Semantic Segmentation +1
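Maximizing cosine similarity at the feature level, as the abstract describes, is typically implemented by minimizing `1 - cos(source, target)`. A minimal sketch of that alignment loss follows; the function name is my own, and real feature maps would be high-dimensional tensors rather than small lists.

```python
import math

def cosine_alignment_loss(source_feat, target_feat):
    """1 - cosine similarity between two feature vectors.

    Minimizing this pulls the target-domain feature toward the
    direction of the source-domain feature (magnitude is ignored).
    """
    dot = sum(s * t for s, t in zip(source_feat, target_feat))
    ns = math.sqrt(sum(s * s for s in source_feat))
    nt = math.sqrt(sum(t * t for t in target_feat))
    return 1.0 - dot / (ns * nt)
```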

Learning Dynamic BERT via Trainable Gate Variables and a Bi-modal Regularizer

no code implementations 19 Feb 2021 Seohyeong Jeong, Nojun Kwak

The BERT model has shown significant success on various natural language processing tasks.

Learning Dynamic Network Using a Reuse Gate Function in Semi-supervised Video Object Segmentation

1 code implementation CVPR 2021 Hyojin Park, Jayeon Yoo, Seohyeong Jeong, Ganesh Venkatesh, Nojun Kwak

Current state-of-the-art approaches for Semi-supervised Video Object Segmentation (Semi-VOS) propagate information from previous frames to generate a segmentation mask for the current frame.

One-shot visual object segmentation Segmentation +2

StyleUV: Diverse and High-fidelity UV Map Generative Model

no code implementations 25 Nov 2020 Myunggi Lee, Wonwoong Cho, Moonheum Kim, David Inouye, Nojun Kwak

Meanwhile, with the advent of Generative Adversarial Networks (GANs), there has been great progress in reconstructing realistic 2D images.

Vocal Bursts Intensity Prediction

Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation

no code implementations 20 Oct 2020 Sangho Lee, KiYoon Yoo, Nojun Kwak

Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research.

Federated Learning Knowledge Distillation

Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

no code implementations 17 Sep 2020 Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak

In this paper, we propose novel training schemes for multiple-choice video question answering with a self-supervised pre-training stage and a supervised contrastive learning in the main stage as an auxiliary learning.

Auxiliary Learning Contrastive Learning +4

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective

no code implementations 9 Sep 2020 SeongUk Park, KiYoon Yoo, Nojun Kwak

In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.

Data Augmentation Efficient Neural Network +2

Part-Aware Data Augmentation for 3D Object Detection in Point Cloud

1 code implementation 27 Jul 2020 Jaeseok Choi, Yeji Song, Nojun Kwak

In this paper, we propose part-aware data augmentation (PA-AUG) that can better utilize rich information of 3D label to enhance the performance of 3D object detectors.

3D Object Detection Data Augmentation +1

Procrustean Regression Networks: Learning 3D Structure of Non-Rigid Objects from 2D Annotations

no code implementations ECCV 2020 Sungheon Park, Minsik Lee, Nojun Kwak

We propose a novel framework for training neural networks which is capable of learning 3D information of non-rigid objects when only 2D annotations are available as ground truths.


SeqHAND: RGB-Sequence-Based 3D Hand Pose and Shape Estimation

no code implementations 10 Jul 2020 John Yang, Hyung Jin Chang, Seungeui Lee, Nojun Kwak

In this paper, we attempt to not only consider the appearance of a hand but incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation performance, which leads to the necessity of a large scale dataset with sequential RGB hand images.

3D Hand Pose Estimation

Interpolation-based semi-supervised learning for object detection

1 code implementation CVPR 2021 Jisoo Jeong, Vikas Verma, Minsung Hyun, Juho Kannala, Nojun Kwak

Despite the data labeling cost for the object detection tasks being substantially more than that of the classification tasks, semi-supervised learning methods for object detection have not been studied much.

Object Detection

KL-Divergence-Based Region Proposal Network for Object Detection

no code implementations 22 May 2020 Geonseok Seo, Jaeyoung Yoo, Jae-Seok Choi, Nojun Kwak

The learning of the region proposal in object detection using the deep neural networks (DNN) is divided into two tasks: binary classification and bounding box regression task.

Binary Classification Object Detection +2

Position-based Scaled Gradient for Model Quantization and Pruning

1 code implementation NeurIPS 2020 Jangho Kim, KiYoon Yoo, Nojun Kwak

Second, we empirically show that PSG acting as a regularizer to a weight vector is favorable for model compression domains such as quantization and pruning.

Model Compression Quantization

LSQ+: Improving low-bit quantization through learnable offsets and better initialization

4 code implementations 20 Apr 2020 Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, Nojun Kwak

To solve this problem, we propose LSQ+, a natural extension of LSQ, wherein we introduce a general asymmetric quantization scheme with trainable scale and offset parameters that can learn to accommodate the negative activations.

Image Classification Quantization
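The asymmetric scheme described above maps a real value through a learnable scale and offset before rounding and clipping to the integer grid, which is what lets it represent negative activations. The sketch below shows only the quantize/dequantize arithmetic under that assumption; LSQ+ additionally learns `scale` and `offset` by gradient descent, which is omitted here, and the function name is my own.

```python
def quantize(x, scale, offset, qmin=-128, qmax=127):
    """Asymmetric fake-quantization of a single value.

    q = clip(round((x - offset) / scale), qmin, qmax)
    returns the dequantized value q * scale + offset.
    """
    q = round((x - offset) / scale)
    q = max(qmin, min(qmax, q))
    return q * scale + offset
```

With `offset = 0` and a signed range, values like `-0.5` survive quantization, whereas an unsigned scheme would clamp them to zero.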

Class-Imbalanced Semi-Supervised Learning

1 code implementation 17 Feb 2020 Minsung Hyun, Jisoo Jeong, Nojun Kwak

First, we analyze existing SSL methods in imbalanced environments and examine how the class imbalance affects SSL methods.

Feature-map-level Online Adversarial Knowledge Distillation

no code implementations ICML 2020 Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak

By training a network to fool the corresponding discriminator, it can learn the other network's feature map distribution.

Knowledge Distillation

QKD: Quantization-aware Knowledge Distillation

no code implementations 28 Nov 2019 Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, Nojun Kwak

First, Self-studying (SS) phase fine-tunes a quantized low-precision student network without KD to obtain a good initialization.

Knowledge Distillation Quantization

Training Multi-Object Detector by Estimating Bounding Box Distribution for Input Image

3 code implementations ICCV 2021 Jaeyoung Yoo, Hojun Lee, Inseop Chung, Geonseok Seo, Nojun Kwak

Instead of assigning each ground truth to specific locations of the network's output, we train a network by estimating the probability density of bounding boxes in an input image using a mixture model.

Density Estimation Object Detection +1

SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder

8 code implementations 20 Nov 2019 Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Nicolas Monet, Jihwan Bang, Nojun Kwak

To solve the first problem, we introduce the new extremely lightweight portrait segmentation model SINet, containing an information blocking decoder and spatial squeeze modules.

Blocking Portrait Segmentation +2

FEED: Feature-level Ensemble for Knowledge Distillation

no code implementations 24 Sep 2019 SeongUk Park, Nojun Kwak

We name this method parallel FEED, and experimental results on CIFAR-100 and ImageNet show that our method yields clear performance enhancements without introducing any additional parameters or computations at test time.

Knowledge Distillation Test

ExtremeC3Net: Extreme Lightweight Portrait Segmentation Networks using Advanced C3-modules

3 code implementations 8 Aug 2019 Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Jihwan Bang, Nojun Kwak

In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight segmentation models.

Portrait Segmentation Segmentation +1

Image Enhancement by Recurrently-trained Super-resolution Network

no code implementations 26 Jul 2019 Saem Park, Nojun Kwak

The newly generated HR images by the repeatedly trained SR network show better image quality and this strategy of training LR to mimic new HR can lead to a more efficient SR network.

Image Enhancement Super-Resolution

Pose estimator and tracker using temporal flow maps for limbs

no code implementations 23 May 2019 Jihye Hwang, Jieun Lee, Sungheon Park, Nojun Kwak

In this paper, we propose temporal flow maps for limbs (TML) and a multi-stride method to estimate and track human poses.

Data Augmentation Pose Estimation +1

Selective Self-Training for semi-supervised Learning

no code implementations ICLR 2019 Jisoo Jeong, Seungeui Lee, Nojun Kwak

While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data.

Feature Fusion for Online Mutual Knowledge Distillation

1 code implementation 19 Apr 2019 Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak

We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module which combines the feature maps generated from parallel neural networks.

Knowledge Distillation

Disentangling Options with Hellinger Distance Regularizer

no code implementations 15 Apr 2019 Minsung Hyun, Junyoung Choi, Nojun Kwak

In reinforcement learning (RL), temporal abstraction still remains as an important and unsolved problem.

Reinforcement Learning (RL)

A Comprehensive Overhaul of Feature Distillation

2 code implementations ICCV 2019 Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi

We investigate the design aspects of feature distillation methods achieving network compression and propose a novel feature distillation method in which the distillation loss is designed to make a synergy among various aspects: teacher transform, student transform, distillation feature position and distance function.

General Classification Image Classification +4

Task-oriented Design through Deep Reinforcement Learning

no code implementations 13 Mar 2019 Junyoung Choi, Minsung Hyun, Nojun Kwak

We propose a new low-cost machine-learning-based methodology which assists designers in reducing the gap between the problem and the solution in the design process.

BIG-bench Machine Learning Reinforcement Learning +1

Unpriortized Autoencoder For Image Generation

no code implementations 12 Feb 2019 Jaeyoung Yoo, Hojun Lee, Nojun Kwak

In this paper, we treat the image generation task using an autoencoder, a representative latent model.

Density Estimation Image Generation +1

URNet : User-Resizable Residual Networks with Conditional Gating Module

no code implementations 15 Jan 2019 Sang-ho Lee, Simyung Chang, Nojun Kwak

There are methods to reduce the cost by compressing networks or varying its computational path dynamically according to the input image.

C3: Concentrated-Comprehensive Convolution and its application to semantic segmentation

2 code implementations 12 Dec 2018 Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak

To resolve this problem, we propose a new block called Concentrated-Comprehensive Convolution (C3) which applies the asymmetric convolutions before the depth-wise separable dilated convolution to compensate for the information loss due to dilated convolution.

Semantic Segmentation

Genetic-Gated Networks for Deep Reinforcement Learning

no code implementations NeurIPS 2018 Simyung Chang, John Yang, Jaeseok Choi, Nojun Kwak

We introduce the Genetic-Gated Networks (G2Ns), simple neural networks that combine a gate vector composed of binary genetic genes in the hidden layer(s) of networks.

Reinforcement Learning (RL)

Sym-parameterized Dynamic Inference for Mixed-Domain Image Translation

1 code implementation ICCV 2019 Simyung Chang, SeongUk Park, John Yang, Nojun Kwak

Recent advances in image-to-image translation have led to some ways to generate multiple domain images through a single network.

Image-to-Image Translation Translation

Genetic-Gated Networks for Deep Reinforcement Learning

no code implementations 26 Nov 2018 Simyung Chang, John Yang, Jae-Seok Choi, Nojun Kwak

We introduce the Genetic-Gated Networks (G2Ns), simple neural networks that combine a gate vector composed of binary genetic genes in the hidden layer(s) of networks.

Reinforcement Learning (RL)

Towards Governing Agent's Efficacy: Action-Conditional $\beta$-VAE for Deep Transparent Reinforcement Learning

no code implementations 11 Nov 2018 John Yang, Gyujeong Lee, Minsung Hyun, Simyung Chang, Nojun Kwak

We tackle the blackbox issue of deep neural networks in the settings of reinforcement learning (RL) where neural agents learn towards maximizing reward gains in an uncontrollable way.

Reinforcement Learning (RL) +1

Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension

no code implementations ACL 2019 Daesik Kim, Seonhoon Kim, Nojun Kwak

Moreover, ablation studies validate that both methods of incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.

Open Set Learning Question Answering +2

HC-Net: Memory-based Incremental Dual-Network System for Continual learning

no code implementations 27 Sep 2018 Jangho Kim, Jeesoo Kim, Nojun Kwak

The C-Net guarantees no degradation in the performance of the previously learned tasks and the H-Net shows high confidence in finding the origin of an input sample.

Continual Learning Hippocampus

StackNet: Stacking Parameters for Continual learning

no code implementations 7 Sep 2018 Jangho Kim, Jeesoo Kim, Nojun Kwak

The StackNet guarantees no degradation in the performance of the previously learned tasks and the index module shows high confidence in finding the origin of an input sample.

Continual Learning

Vehicle Image Generation Going Well with The Surroundings

no code implementations 9 Jul 2018 Jeesoo Kim, Jangho Kim, Jaeyoung Yoo, Daesik Kim, Nojun Kwak

Using a subnetwork based on a precedent work of image completion, our model makes the shape of an object.

Colorization Image Generation +6

Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information

no code implementations 29 May 2018 Seonhoon Kim, Inho Kang, Nojun Kwak

Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers.

Natural Language Inference Paraphrase Identification +1

Image Restoration by Estimating Frequency Distribution of Local Patches

no code implementations CVPR 2018 Jaeyoung Yoo, Sang-ho Lee, Nojun Kwak

In this paper, we propose a method to solve the image restoration problem, which tries to restore the details of a corrupted image, especially due to the loss caused by JPEG compression.

General Classification Image Compression +1

3D Human Pose Estimation with Relational Networks

no code implementations 23 May 2018 Sungheon Park, Nojun Kwak

In this paper, we propose a novel 3D human pose estimation algorithm from a single image based on neural networks.

3D Human Pose Estimation 3D Pose Estimation

Music Source Separation Using Stacked Hourglass Networks

4 code implementations 22 May 2018 Sungheon Park, Tae-hoon Kim, Kyogu Lee, Nojun Kwak

In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks.

Sound Audio and Speech Processing

MC-GAN: Multi-conditional Generative Adversarial Network for Image Synthesis

1 code implementation 3 May 2018 Hyojin Park, YoungJoon Yoo, Nojun Kwak

This block enables MC-GAN to generate a realistic object image with the desired background by controlling the amount of the background information from the given base image using the foreground information from the text attributes.

Paraphrasing Complex Network: Network Compression via Factor Transfer

2 code implementations NeurIPS 2018 Jangho Kim, SeongUk Park, Nojun Kwak

Among the model compression methods, a method called knowledge transfer is to train a student network with a stronger teacher network.

Model Compression Transfer Learning
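Knowledge transfer of the kind this abstract builds on is classically implemented as a cross-entropy between temperature-softened teacher and student output distributions. The sketch below shows only that generic soft-target baseline, not the paper's factor-transfer mechanism (which operates on feature maps via a paraphraser and translator); the function names are my own.

```python
import math

def softmax(logits, temperature):
    """Temperature-softened softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((l - m) / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the softened teacher distribution (target)
    and the softened student distribution -- minimized when the student
    matches the teacher."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```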

Broadcasting Convolutional Network for Visual Relational Reasoning

no code implementations ECCV 2018 Simyung Chang, John Yang, SeongUk Park, Nojun Kwak

In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features.

Relational Reasoning Relation Network

Dynamic Graph Generation Network: Generating Relational Knowledge from Diagrams

no code implementations CVPR 2018 Daesik Kim, Youngjoon Yoo, Jeesoo Kim, Sangkuk Lee, Nojun Kwak

In this work, we introduce a new algorithm for analyzing a diagram, which contains visual and textual information in an abstract and integrated way.

Graph Generation Question Answering

Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation

no code implementations 27 Nov 2017 YoungJoon Yoo, SeongUk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak

In addition to this performance enhancement problem, we show that the proposed PGN can be adopted to solve the classical adversarial problem without utilizing the information on the target classifier.

Classification General Classification

BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning

no code implementations 5 Sep 2017 Simyung Chang, Youngjoon Yoo, Jae-Seok Choi, Nojun Kwak

Our method learns hundreds to thousand times faster than the conventional methods by learning only a handful of core cluster information, which shows that deep RL agents can effectively learn through the shared knowledge from other agents.

Imitation Learning Reinforcement Learning +1

Residual Features and Unified Prediction Network for Single Stage Detection

1 code implementation 17 Jul 2017 Kyoungmin Lee, Jae-Seok Choi, Jisoo Jeong, Nojun Kwak

They are much faster than two stage detectors that use region proposal networks (RPN) without much degradation in the detection performances.

Region Proposal

Where to Play: Retrieval of Video Segments using Natural-Language Queries

no code implementations 2 Jul 2017 Sangkuk Lee, Daesik Kim, Myunggi Lee, Jihye Hwang, Nojun Kwak

Through quantitative and qualitative evaluation, we show that our method is effective for retrieval of video segments using natural language queries.

Image Captioning Natural Language Queries +2

Superpixel-based Semantic Segmentation Trained by Statistical Process Control

1 code implementation 30 Jun 2017 Hyojin Park, Jisoo Jeong, Youngjoon Yoo, Nojun Kwak

Semantic segmentation, like other fields of computer vision, has seen a remarkable performance advance by the use of deep convolution neural networks.

Semantic Segmentation

Enhancement of SSD by concatenating feature maps for object detection

no code implementations 26 May 2017 Jisoo Jeong, Hyojin Park, Nojun Kwak

In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD.

Object Detection +1

3D Human Pose Estimation Using Convolutional Neural Networks with 2D Pose Information

no code implementations 10 Aug 2016 Sungheon Park, Jihye Hwang, Nojun Kwak

While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied.

2D Human Pose Estimation 2D Pose Estimation +1

Membership Representation for Detecting Block-Diagonal Structure in Low-Rank or Sparse Subspace Clustering

no code implementations CVPR 2015 Minsik Lee, Jieun Lee, Hyeogjin Lee, Nojun Kwak

The proposed method shares the philosophy of the above subspace clustering methods, in that it is a self-expressive system based on a Hadamard product of a membership matrix.

Clustering Philosophy
