1 code implementation • Findings (ACL) 2022 • KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak
Word-level adversarial attacks have shown success in NLP models, drastically decreasing the performance of transformer-based models in recent years.
no code implementations • ICLR 2019 • SeongUk Park, Nojun Kwak
This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work of factor transfer.
no code implementations • LREC 2022 • Hyeondey Kim, Seonhoon Kim, Inho Kang, Nojun Kwak, Pascale Fung
Our experimental results show that the proposed methods improve the model performance on the investigated Korean language understanding tasks.
no code implementations • ECCV 2020 • John Yang, Hyung Jin Chang, Seungeui Lee, Nojun Kwak
In this paper, we attempt not only to consider the appearance of a hand but also to incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation, which leads to the necessity of a large-scale dataset with sequential RGB hand images.
no code implementations • 18 Mar 2025 • Seung Woo Ko, Joopyo Hong, Suyoung Kim, Seungjai Bang, Sungzoon Cho, Nojun Kwak, Hyung-Sin Kim, Joonseok Lee
Camouflaged object detection (COD) aims to generate a fine-grained segmentation map of camouflaged objects hidden in their background.
no code implementations • 18 Mar 2025 • Dongkwan Lee, Kyomin Hwang, Nojun Kwak
To the best of our knowledge, we are the first to explore a method for incorporating the unconfident unlabeled samples that were previously disregarded in the SSDG setting.
no code implementations • 17 Mar 2025 • Ingyun Lee, Jae Won Jang, Seunghyeon Seo, Nojun Kwak
This allows the model to compare the order of high-probability surface points and filter out inconsistent rays easily without requiring the exact depth.
no code implementations • 13 Mar 2025 • Yeonjin Chang, Erqun Dong, Seunghyeon Seo, Nojun Kwak, Kwang Moo Yi
Isolating individual 3D Gaussian primitives for each object and handling occlusions in scenes remain far from being solved.
no code implementations • 3 Sep 2024 • Yearim Kim, Sangyu Han, Sangbum Han, Nojun Kwak
We utilize Pointwise Feature Vectors (PFVs) and Effective Receptive Fields (ERFs) to decompose model embeddings into interpretable Concept Vectors.
no code implementations • 17 Jul 2024 • Donghoon Han, Eunhwan Park, Gisang Lee, Adam Lee, Nojun Kwak
The rapid expansion of multimedia content has made accurately retrieving relevant videos from large collections increasingly challenging.
no code implementations • 1 May 2024 • Hyunho Lee, JunHoo Lee, Nojun Kwak
Conventional dataset distillation requires significant computational resources and assumes access to the entire dataset, an assumption impractical as it presumes all data resides on a central server.
no code implementations • 22 Apr 2024 • Kyomin Hwang, Suyoung Kim, JunHoo Lee, Nojun Kwak
Large Models (LMs) have heightened expectations for the potential of general AI as they are akin to human intelligence.
no code implementations • 14 Apr 2024 • Hojun Lee, Suyoung Kim, JunHoo Lee, Jaeyoung Yoo, Nojun Kwak
Coreset selection is a method for selecting a small, representative subset of an entire dataset.
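One common instantiation of coreset selection is farthest-first (k-center greedy) sampling; the plain-Python sketch below illustrates that idea. The criterion and starting point are assumptions for illustration, not necessarily this paper's method.

```python
import math

def k_center_greedy(points, k):
    """Greedily pick k points that cover the dataset: each new pick is the
    point farthest from the current selection (farthest-first traversal)."""
    assert 1 <= k <= len(points)
    selected = [0]  # start from the first point for determinism
    dist = [math.dist(p, points[0]) for p in points]
    while len(selected) < k:
        far = max(range(len(points)), key=lambda i: dist[i])
        selected.append(far)
        # each point's distance to the selection shrinks as picks are added
        for i, p in enumerate(points):
            dist[i] = min(dist[i], math.dist(p, points[far]))
    return selected

data = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (10, 0)]
print(k_center_greedy(data, 3))  # one representative per cluster
```

Each of the three tight groups in `data` contributes one representative, which is the coverage behavior a coreset aims for.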
1 code implementation • 2 Apr 2024 • Jooyoung Jang, Youngseo Cha, Jisu Kim, SooHyung Lee, Geonu Lee, Minkook Cho, Young Hwang, Nojun Kwak
In this paper, we propose a novel protocol for wildfire detection, leveraging semi-supervised Domain Adaptation for object detection, accompanied by a corresponding dataset designed for use by both academics and industries.
no code implementations • 2 Apr 2024 • Donghoon Han, Seunghyeon Seo, Eunhwan Park, Seong-Uk Nam, Nojun Kwak
Multimodal and large language models (LLMs) have revolutionized the utilization of open-world knowledge, unlocking novel potentials across various tasks and applications.
Ranked #5 on Highlight Detection on QVHighlights
no code implementations • 26 Mar 2024 • JunHoo Lee, Hyunho Lee, Kyomin Hwang, Nojun Kwak
To this end, we propose the DeepKKT condition, an adaptation of the traditional Karush-Kuhn-Tucker (KKT) condition for deep learning models, and confirm that generated Deep Support Vectors (DSVs) using this condition exhibit properties similar to traditional support vectors.
no code implementations • 21 Mar 2024 • Yeji Song, Jimyeong Kim, Wonhark Park, Wonsik Shin, Wonjong Rhee, Nojun Kwak
Amid a surge of text-to-image (T2I) models and customization methods that generate new images of a user-provided subject, current works focus on alleviating the costs incurred by a lengthy per-subject optimization.
no code implementations • 16 Mar 2024 • Seunghyeon Seo, Yeonjin Chang, Jayeon Yoo, Seungwoo Lee, Hojun Lee, Nojun Kwak
Addressing this, we propose HourglassNeRF, an effective regularization-based approach with a novel hourglass casting strategy.
no code implementations • 2 Mar 2024 • Inseop Chung, Kyomin Hwang, Jayeon Yoo, Nojun Kwak
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
1 code implementation • 25 Jan 2024 • Sangyu Han, Yearim Kim, Nojun Kwak
The truthfulness of existing explanation methods in authentically elucidating the underlying model's decision-making process has been questioned.
no code implementations • 10 Jan 2024 • JunHoo Lee, Yearim Kim, Hyunho Lee, Nojun Kwak
Furthermore, we argue that the inherent label equivalence naturally lacks semantic information.
1 code implementation • CVPR 2024 • Jayeon Yoo, Dongkwan Lee, Inseop Chung, Donghyun Kim, Nojun Kwak
It is a well-known fact that the performance of deep learning models deteriorates when they encounter a distribution shift at test time.
no code implementations • 8 Dec 2023 • Inseop Chung, KiYoon Yoo, Nojun Kwak
To handle this task, the model has to learn a generalizable representation that can be applied to unseen domains while also identifying unknown classes that were not present during training.
no code implementations • 5 Dec 2023 • Yeji Song, Wonsik Shin, Junsoo Lee, Jeesoo Kim, Nojun Kwak
Finally, we decouple the motion from the appearance of the source video with an additional pseudo word.
no code implementations • 7 Nov 2023 • Yeonjin Chang, Yearim Kim, Seunghyeon Seo, Jung Yi, Nojun Kwak
In this work, we introduce our method of outdoor scene relighting for Neural Radiance Fields (NeRF) named Sun-aligned Relighting TensoRF (SR-TensoRF).
no code implementations • 5 Sep 2023 • TaeHoon Kim, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Mark Marsden, Alessandra Sala, Seung Hwan Kim, Bohyung Han, Kyoung Mu Lee, Honglak Lee, Kyounghoon Bae, Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo, Jianfeng Lu, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim, Wooyoung Kang, Won Young Jhoo, Byungseok Roh, Jonghwan Mun, Solgil Oh, Kenan Emir Ak, Gwang-Gook Lee, Yan Xu, Mingwei Shen, Kyomin Hwang, Wonsik Shin, Kamin Lee, Wonhark Park, Dongkwan Lee, Nojun Kwak, Yujin Wang, Yimu Wang, Tiancheng Gu, Xingchang Lv, Mingmao Sun
In this report, we introduce NICE (New frontiers for zero-shot Image Captioning Evaluation) project and share the results and outcomes of 2023 challenge.
no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications.
1 code implementation • 1 Aug 2023 • KiYoon Yoo, Wonhyuk Ahn, Nojun Kwak
By independently embedding sub-units of messages, the proposed method outperforms the existing works in terms of robustness and latency.
1 code implementation • ICCV 2023 • Seunghyeon Seo, Yeonjin Chang, Nojun Kwak
Neural Radiance Field (NeRF) has been a mainstream in novel view synthesis with its remarkable quality of rendered images and simple architecture.
no code implementations • 6 May 2023 • Seungwoo Lee, Chaerin Kong, Donghyeon Jeon, Nojun Kwak
Recent advances in diffusion models have showcased promising results in the text-to-video (T2V) synthesis task.
1 code implementation • 3 May 2023 • KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak
Recent years have witnessed a proliferation of valuable original natural language contents found in subscription-based media outlets, web novel platforms, and outputs of large language models.
no code implementations • 15 Mar 2023 • Jaeseung Lim, Jongkeun Na, Nojun Kwak
Active Learning (AL) and Semi-supervised Learning are two techniques that have been studied to reduce the high cost of deep learning by using a small amount of labeled data and a large amount of unlabeled data.
no code implementations • 17 Feb 2023 • Seunghyeon Seo, Jaeyoung Yoo, Jihye Hwang, Nojun Kwak
In this work, we propose a novel framework of single-stage instance-aware pose estimation by modeling the joint distribution of human keypoints with a mixture density model, termed as MDPose.
1 code implementation • CVPR 2023 • Seunghyeon Seo, Donghoon Han, Yeonjin Chang, Nojun Kwak
In this work, we propose MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model.
no code implementations • 10 Feb 2023 • Chaerin Kong, Nojun Kwak
Recent years have witnessed astonishing advances in the field of multimodal representation learning, with contrastive learning being the cornerstone for major breakthroughs.
no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak
Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.
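A standard objective for learning such invariant features is InfoNCE, which pulls the two related representations together and pushes unrelated ones apart. A minimal plain-Python sketch with cosine-similarity scores follows; the temperature and toy vectors are illustrative assumptions, not this paper's exact loss.

```python
import math

def info_nce(anchor, candidates, temperature=0.1):
    """InfoNCE loss for one anchor: the first candidate is the positive,
    the rest are negatives. Scores are cosine similarities."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    logits = [cos(anchor, c) / temperature for c in candidates]
    # numerically stable softmax cross-entropy with the positive at index 0
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

anchor = [1.0, 0.0]
cands = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]  # positive first, then negatives
loss = info_nce(anchor, cands)
```

When the positive is far more similar to the anchor than the negatives, the loss approaches zero; when positive and negatives are indistinguishable, it approaches log of the candidate count.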
no code implementations • 12 Oct 2022 • Chaerin Kong, Donghyeon Jeon, Ohjoon Kwon, Nojun Kwak
Fashion attribute editing is a task that aims to convert the semantic attributes of a given fashion image while preserving the irrelevant regions.
no code implementations • 9 Oct 2022 • Yeji Song, Chaerin Kong, Seoyoung Lee, Nojun Kwak, Joonseok Lee
Neural Radiance Fields (NeRF) achieves photo-realistic image rendering from novel views, and Neural Scene Graphs (NSG) extends it to dynamic scenes (video) with multiple objects.
no code implementations • 29 Sep 2022 • Jookyung Song, Yeonjin Chang, SeongUk Park, Nojun Kwak
U-Net, a conventional approach for conditional GANs, retains fine details of unmasked regions, but the style of the reconstructed image is inconsistent with the rest of the original image, and it works robustly only when the occluding object is small enough.
no code implementations • 29 Jul 2022 • Chaerin Kong, Nojun Kwak
The capacity to learn incrementally from an online stream of data is an envied trait of human learners, as deep neural networks typically suffer from catastrophic forgetting and stability-plasticity dilemma.
no code implementations • 20 Jul 2022 • Jayeon Yoo, Inseop Chung, Nojun Kwak
Most existing domain adaptive object detection methods exploit adversarial feature alignment to adapt the model to a new domain.
no code implementations • 18 May 2022 • Jaeyoung Yoo, Hojun Lee, Seunghyeon Seo, Inseop Chung, Nojun Kwak
Recent end-to-end multi-object detectors simplify the inference pipeline by removing hand-crafted processes such as non-maximum suppression (NMS).
no code implementations • 29 Apr 2022 • KiYoon Yoo, Nojun Kwak
For a less complex dataset, a mere 0.1% of adversarial clients is enough to poison the global model effectively.
no code implementations • CVPR 2022 • Jisoo Jeong, Jamie Menjay Lin, Fatih Porikli, Nojun Kwak
Imposing consistency through proxy tasks has been shown to enhance data-driven learning and enable self-supervision in various tasks.
1 code implementation • CVPR 2022 • Gyutae Park, Sungjoon Son, Jaeyoung Yoo, SeHo Kim, Nojun Kwak
In this paper, we propose a transformer-based image matting model called MatteFormer, which takes full advantage of trimap information in the transformer block.
Ranked #5 on Image Matting on Composition-1K
no code implementations • 15 Mar 2022 • Jongmok Kim, Hwijun Lee, Jaeseung Lim, Jongkeun Na, Nojun Kwak, Jin Young Choi
A well-designed strong-weak augmentation strategy and the stable teacher to generate reliable pseudo labels are essential in the teacher-student framework of semi-supervised learning (SSL).
1 code implementation • CVPR 2022 • Jongmok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak
Data augmentation strategy plays a significant role in the SSL framework since it is hard to create a weak-strong augmented input pair without losing label information.
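In such teacher-student SSL pipelines, the teacher typically predicts on the weakly augmented view and the student trains on the strongly augmented one, keeping only confident predictions as pseudo labels. A hedged sketch of that confidence-filtering step (the threshold value is illustrative, not this paper's setting):

```python
def make_pseudo_labels(weak_probs, threshold=0.95):
    """Keep a pseudo label only when the teacher's max class probability
    on the weakly augmented view exceeds the confidence threshold."""
    labels = []
    for probs in weak_probs:
        conf = max(probs)
        cls = probs.index(conf)
        labels.append(cls if conf >= threshold else None)  # None = discarded
    return labels

batch = [[0.97, 0.02, 0.01], [0.50, 0.30, 0.20], [0.01, 0.01, 0.98]]
print(make_pseudo_labels(batch))  # low-confidence samples are dropped
```

The student's loss is then computed only on the kept samples, pairing each retained label with the strongly augmented view of the same image.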
1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak
Through self-distillation, the intermediate layers are better suited for instance discrimination, making the performance of an early-exited sub-network not much degraded from that of the full network.
3 code implementations • 23 Nov 2021 • Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak
Producing diverse and realistic images with generative models such as GANs typically requires large scale training with vast amount of images.
no code implementations • 22 Nov 2021 • SeongUk Park, Nojun Kwak
In this paper, we propose a novel feature distillation (FD) method which is suitable for SISR.
no code implementations • 11 Nov 2021 • John Yang, Yash Bhalgat, Simyung Chang, Fatih Porikli, Nojun Kwak
While hand pose estimation is a critical component of most interactive extended reality and gesture recognition systems, contemporary approaches are not optimized for computational and memory efficiency.
no code implementations • 21 Oct 2021 • Inseop Chung, Jayeon Yoo, Nojun Kwak
It creates a set of pseudo labels for the target domain to give explicit supervision.
no code implementations • 7 Oct 2021 • Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak
Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.
no code implementations • 16 Sep 2021 • Hojun Lee, Myunggi Lee, Nojun Kwak
Second, each support sample is used as a class code to leverage the information by comparing similarities between each support feature and query features.
Ranked #14 on Few-Shot Object Detection on MS-COCO (30-shot)
1 code implementation • 10 Sep 2021 • Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak
To alleviate this problem, dynamic pruning methods have emerged, which try to find diverse sparsity patterns during training by utilizing Straight-Through-Estimator (STE) to approximate gradients of pruned weights.
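The STE trick can be illustrated with a manual forward/backward pass: the forward applies the hard pruning mask, while the backward pretends the mask is the identity, so gradients also reach pruned weights and the sparsity pattern can change during training. A minimal plain-Python sketch, not the paper's actual training code:

```python
def prune_forward(weights, threshold):
    """Forward pass: zero out weights below the magnitude threshold."""
    mask = [1.0 if abs(w) >= threshold else 0.0 for w in weights]
    return [w * m for w, m in zip(weights, mask)], mask

def prune_backward_ste(upstream_grad, mask):
    """Straight-Through Estimator: approximate d(w * mask)/dw as 1, so the
    gradient flows to *all* weights, pruned ones included. This is what
    lets a pruned weight grow back later in training."""
    return list(upstream_grad)  # mask is ignored on the backward pass

w = [0.8, -0.05, 0.3, 0.01]
out, mask = prune_forward(w, threshold=0.1)
grad_w = prune_backward_ste([1.0, 1.0, 1.0, 1.0], mask)
```

The exact backward of the masked product would zero the gradient at pruned positions; STE deliberately discards that zeroing.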
1 code implementation • ICCV 2021 • Jeesoo Kim, Junsuk Choe, Sangdoo Yun, Nojun Kwak
Weakly-supervised object localization (WSOL) enables finding an object using a dataset without any localization information.
no code implementations • 25 Jun 2021 • Jangho Kim, Simyung Chang, Nojun Kwak
Unlike traditional pruning and KD, PQK makes use of unimportant weights pruned in the pruning process to make a teacher network for training a better student network without pre-training the teacher model.
1 code implementation • ICCV 2021 • Kwang Hee Lee, Chaewon Park, Junghyun Oh, Nojun Kwak
LFI-CAM generates an attention map for visual explanation during forward propagation and, at the same time, leverages the attention map to improve the classification performance through the attention mechanism.
no code implementations • 25 Mar 2021 • Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak
We verify the usefulness of PPP on a couple of tasks in computer vision and Keyword spotting.
no code implementations • 17 Mar 2021 • Saem Park, Donghoon Han, Nojun Kwak
Through experiments, we confirmed the feasibility of the proposed algorithm and suggest U-Net-based Generative Flow as a new baseline for video frame interpolation.
no code implementations • 25 Feb 2021 • Inseop Chung, Daesik Kim, Nojun Kwak
We propose a novel method that tackles the problem of unsupervised domain adaptation for semantic segmentation by maximizing the cosine similarity between the source and the target domain at the feature level.
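Maximizing cosine similarity between paired features is equivalent to minimizing a (1 - cosine) loss over source/target feature pairs. A plain-Python sketch of that loss (the pairing of features is an assumption for illustration; the paper's full method involves more than this term):

```python
import math

def cosine_alignment_loss(source_feats, target_feats):
    """Mean (1 - cosine similarity) over paired source/target feature
    vectors; minimizing it pulls the two domains together feature-wise."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    sims = [cos(s, t) for s, t in zip(source_feats, target_feats)]
    return 1.0 - sum(sims) / len(sims)

src = [[1.0, 0.0], [0.0, 2.0]]
tgt = [[1.0, 0.0], [0.0, 5.0]]
print(cosine_alignment_loss(src, tgt))  # same directions -> loss 0.0
```

Because cosine similarity ignores magnitude, the loss is zero here even though the second pair differs in norm; only directional agreement is rewarded.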
no code implementations • 19 Feb 2021 • Seohyeong Jeong, Nojun Kwak
The BERT model has shown significant success on various natural language processing tasks.
1 code implementation • CVPR 2021 • Hyojin Park, Jayeon Yoo, Seohyeong Jeong, Ganesh Venkatesh, Nojun Kwak
Current state-of-the-art approaches for Semi-supervised Video Object Segmentation (Semi-VOS) propagate information from previous frames to generate a segmentation mask for the current frame.
no code implementations • 25 Nov 2020 • Myunggi Lee, Wonwoong Cho, Moonheum Kim, David Inouye, Nojun Kwak
Meanwhile, with the advent of Generative Adversarial Networks (GANs), there has been great progress in reconstructing realistic 2D images.
1 code implementation • 9 Nov 2020 • Hyojin Park, Ganesh Venkatesh, Nojun Kwak
Our template matching method consists of short-term and long-term matching.
no code implementations • 20 Oct 2020 • Sangho Lee, KiYoon Yoo, Nojun Kwak
Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research.
no code implementations • 17 Sep 2020 • Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, Nojun Kwak
In this paper, we propose novel training schemes for multiple-choice video question answering, with a self-supervised pre-training stage and supervised contrastive learning in the main stage as an auxiliary task.
no code implementations • 9 Sep 2020 • SeongUk Park, KiYoon Yoo, Nojun Kwak
In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.
1 code implementation • 27 Jul 2020 • Jaeseok Choi, Yeji Song, Nojun Kwak
In this paper, we propose part-aware data augmentation (PA-AUG) that can better utilize rich information of 3D label to enhance the performance of 3D object detectors.
no code implementations • ECCV 2020 • Sungheon Park, Minsik Lee, Nojun Kwak
We propose a novel framework for training neural networks which is capable of learning 3D information of non-rigid objects when only 2D annotations are available as ground truths.
1 code implementation • CVPR 2021 • Jisoo Jeong, Vikas Verma, Minsung Hyun, Juho Kannala, Nojun Kwak
Despite the data labeling cost for the object detection tasks being substantially more than that of the classification tasks, semi-supervised learning methods for object detection have not been studied much.
1 code implementation • NeurIPS 2020 • Jangho Kim, KiYoon Yoo, Nojun Kwak
Second, we empirically show that PSG acting as a regularizer to a weight vector is favorable for model compression domains such as quantization and pruning.
no code implementations • 22 May 2020 • Geonseok Seo, Jaeyoung Yoo, Jae-Seok Choi, Nojun Kwak
The learning of region proposals in object detection using deep neural networks (DNNs) is divided into two tasks: binary classification and bounding-box regression.
4 code implementations • 20 Apr 2020 • Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, Nojun Kwak
To solve this problem, we propose LSQ+, a natural extension of LSQ, wherein we introduce a general asymmetric quantization scheme with trainable scale and offset parameters that can learn to accommodate the negative activations.
Ranked #18 on Quantization on ImageNet
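The asymmetric scheme can be sketched as a fake-quantization function with a scale and an offset; in LSQ+ both are learned by gradient descent via a straight-through estimator, which this toy plain-Python version omits. The concrete values below are illustrative assumptions.

```python
def fake_quantize(x, scale, offset, num_bits=8):
    """Asymmetric fake quantization in the LSQ+ parameterization:
    q = clamp(round(x/scale + offset), qmin, qmax), x_hat = (q - offset)*scale.
    The offset shifts the integer grid so negative activations
    (e.g. from Swish/h-swish) become representable."""
    qmin, qmax = 0, 2 ** num_bits - 1
    q = round(x / scale + offset)
    q = max(qmin, min(qmax, q))
    return (q - offset) * scale

# with offset=8 the 8-bit grid covers [-8*s, 247*s] instead of [0, 255*s]
s = 0.1
print(fake_quantize(-0.53, s, offset=8))
```

With a symmetric unsigned grid (offset 0), the same input would clamp to zero; the trainable offset is what recovers the negative range.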
1 code implementation • 17 Feb 2020 • Minsung Hyun, Jisoo Jeong, Nojun Kwak
First, we analyze existing SSL methods in imbalanced environments and examine how the class imbalance affects SSL methods.
no code implementations • ICML 2020 • Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak
By training a network to fool the corresponding discriminator, it can learn the other network's feature map distribution.
2 code implementations • NeurIPS 2019 • Jisoo Jeong, Seungeui Lee, Jeesoo Kim, Nojun Kwak
Making a precise annotation in a large dataset is crucial to the performance of object detection.
Ranked #18 on Semi-Supervised Object Detection on COCO 2% labeled data
no code implementations • 28 Nov 2019 • Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, Nojun Kwak
First, the Self-studying (SS) phase fine-tunes a quantized low-precision student network without KD to obtain a good initialization.
3 code implementations • ICCV 2021 • Jaeyoung Yoo, Hojun Lee, Inseop Chung, Geonseok Seo, Nojun Kwak
Instead of assigning each ground truth to specific locations of network's output, we train a network by estimating the probability density of bounding boxes in an input image using a mixture model.
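Training by estimating a probability density means minimizing the negative log-likelihood of ground-truth boxes under a mixture model. A simplified one-dimensional plain-Python sketch of that objective (a Gaussian mixture over a single box coordinate; the paper's full model covers all box parameters):

```python
import math

def mixture_nll(x, components):
    """Negative log-likelihood of scalar x under a Gaussian mixture;
    components is a list of (weight, mean, std) with weights summing to 1.
    A detector trained this way fits the density of ground-truth box
    coordinates instead of matching boxes to fixed output locations."""
    density = sum(
        w * math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
        for w, mu, sd in components
    )
    return -math.log(density)

mix = [(0.5, 0.2, 0.05), (0.5, 0.8, 0.05)]  # two box-center modes
print(mixture_nll(0.2, mix) < mixture_nll(0.5, mix))  # lower NLL on a mode
```

A ground truth near a mixture mode incurs low loss, so the network learns to place mixture components where boxes actually occur.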
8 code implementations • 20 Nov 2019 • Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Nicolas Monet, Jihwan Bang, Nojun Kwak
To solve the first problem, we introduce the new extremely lightweight portrait segmentation model SINet, containing an information blocking decoder and spatial squeeze modules.
Ranked #1 on Portrait Segmentation on EG1800
no code implementations • 19 Nov 2019 • Daesik Kim, Gyujeong Lee, Jisoo Jeong, Nojun Kwak
In the source domain, we fully train an object detector and the RRPN with full supervision of HOI.
no code implementations • 24 Sep 2019 • SeongUk Park, Nojun Kwak
We name this method parallel FEED, and experimental results on CIFAR-100 and ImageNet show that our method yields clear performance enhancements without introducing any additional parameters or computations at test time.
3 code implementations • 8 Aug 2019 • Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Jihwan Bang, Nojun Kwak
In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight segmentation models.
no code implementations • 26 Jul 2019 • Saem Park, Nojun Kwak
The HR images newly generated by the repeatedly trained SR network show better image quality, and this strategy of training LR to mimic the new HR can lead to a more efficient SR network.
no code implementations • 23 May 2019 • Jihye Hwang, Jieun Lee, Sungheon Park, Nojun Kwak
In this paper, we propose temporal flow maps for limbs (TML) and a multi-stride method to estimate and track human poses.
Ranked #5 on Pose Tracking on PoseTrack2018
no code implementations • ICLR 2019 • Jisoo Jeong, Seungeui Lee, Nojun Kwak
While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data.
1 code implementation • 19 Apr 2019 • Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak
We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module which combines the feature maps generated from parallel neural networks.
no code implementations • 15 Apr 2019 • Minsung Hyun, Junyoung Choi, Nojun Kwak
In reinforcement learning (RL), temporal abstraction still remains as an important and unsolved problem.
2 code implementations • ICCV 2019 • Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi
We investigate the design aspects of feature distillation methods achieving network compression and propose a novel feature distillation method in which the distillation loss is designed to make a synergy among various aspects: teacher transform, student transform, distillation feature position and distance function.
Ranked #49 on Knowledge Distillation on ImageNet
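The design space named above (teacher transform, student transform, feature position, distance function) can be sketched as one composable loss. Below is a hedged plain-Python illustration using a margin-ReLU teacher transform, an identity student transform, and an L2 distance; the margin value and inputs are illustrative assumptions, not the paper's tuned configuration.

```python
def feature_distillation_loss(student_feat, teacher_feat, margin=-0.1):
    """One point in the feature-distillation design space:
    - teacher transform: margin ReLU (keep positive responses,
      clip negatives to a fixed margin),
    - student transform: identity,
    - distance: mean squared error over feature positions."""
    t = [f if f > 0 else margin for f in teacher_feat]  # teacher transform
    s = student_feat                                    # student transform
    return sum((a - b) ** 2 for a, b in zip(s, t)) / len(s)

student = [0.5, -0.2, 1.0]
teacher = [0.6, -0.8, 1.0]
print(feature_distillation_loss(student, teacher))
```

Clipping strongly negative teacher responses keeps the student from wasting capacity imitating values the teacher's own ReLU would discard, which is the intuition behind transforming the teacher feature before measuring distance.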
no code implementations • 13 Mar 2019 • Junyoung Choi, Minsung Hyun, Nojun Kwak
We propose a new low-cost machine-learning-based methodology which assists designers in reducing the gap between the problem and the solution in the design process.
no code implementations • 12 Feb 2019 • Jaeyoung Yoo, Hojun Lee, Nojun Kwak
In this paper, we treat the image generation task using an autoencoder, a representative latent model.
no code implementations • 15 Jan 2019 • Sang-ho Lee, Simyung Chang, Nojun Kwak
There are methods to reduce the cost by compressing networks or varying its computational path dynamically according to the input image.
2 code implementations • 12 Dec 2018 • Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak
To resolve this problem, we propose a new block called Concentrated-Comprehensive Convolution (C3) which applies the asymmetric convolutions before the depth-wise separable dilated convolution to compensate for the information loss due to dilated convolution.
no code implementations • NeurIPS 2018 • Simyung Chang, John Yang, Jaeseok Choi, Nojun Kwak
We introduce the Genetic-Gated Networks (G2Ns), simple neural networks that combine a gate vector composed of binary genetic genes in the hidden layer(s) of networks.
1 code implementation • ICCV 2019 • Simyung Chang, SeongUk Park, John Yang, Nojun Kwak
Recent advances in image-to-image translation have led to some ways to generate multiple domain images through a single network.
no code implementations • 11 Nov 2018 • John Yang, Gyujeong Lee, Minsung Hyun, Simyung Chang, Nojun Kwak
We tackle the blackbox issue of deep neural networks in the settings of reinforcement learning (RL) where neural agents learn towards maximizing reward gains in an uncontrollable way.
no code implementations • ACL 2019 • Daesik Kim, Seonhoon Kim, Nojun Kwak
Moreover, ablation studies validate that both methods of incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.
no code implementations • 27 Sep 2018 • Jangho Kim, Jeesoo Kim, Nojun Kwak
The C-Net guarantees no degradation in the performance of the previously learned tasks and the H-Net shows high confidence in finding the origin of an input sample.
no code implementations • 7 Sep 2018 • Jangho Kim, Jeesoo Kim, Nojun Kwak
The StackNet guarantees no degradation in the performance of the previously learned tasks and the index module shows high confidence in finding the origin of an input sample.
no code implementations • ECCV 2018 • Myunggi Lee, Seungeui Lee, Sungjoon Son, Gyu-tae Park, Nojun Kwak
However, it has an expensive computational cost and requires a two-stream (RGB and optical flow) framework.
no code implementations • 9 Jul 2018 • Jeesoo Kim, Jangho Kim, Jaeyoung Yoo, Daesik Kim, Nojun Kwak
Using a subnetwork based on a preceding work on image completion, our model generates the shape of an object.
no code implementations • 29 May 2018 • Seonhoon Kim, Inho Kang, Nojun Kwak
Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers.
Ranked #13 on Natural Language Inference on SNLI
1 code implementation • 23 May 2018 • Sungheon Park, Nojun Kwak
In this paper, we propose a novel 3D human pose estimation algorithm from a single image based on neural networks.
no code implementations • CVPR 2018 • Jaeyoung Yoo, Sang-ho Lee, Nojun Kwak
In this paper, we propose a method to solve the image restoration problem, which tries to restore the details of a corrupted image, especially due to the loss caused by JPEG compression.
4 code implementations • 22 May 2018 • Sungheon Park, Tae-hoon Kim, Kyogu Lee, Nojun Kwak
In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks.
Sound • Audio and Speech Processing
2 code implementations • 3 May 2018 • Hyojin Park, YoungJoon Yoo, Nojun Kwak
This block enables MC-GAN to generate a realistic object image with the desired background by controlling the amount of the background information from the given base image using the foreground information from the text attributes.
2 code implementations • NeurIPS 2018 • Jangho Kim, SeongUk Park, Nojun Kwak
Among the model compression methods, a method called knowledge transfer is to train a student network with a stronger teacher network.
no code implementations • ECCV 2018 • Simyung Chang, John Yang, SeongUk Park, Nojun Kwak
In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features.
no code implementations • CVPR 2018 • Daesik Kim, Youngjoon Yoo, Jeesoo Kim, Sangkuk Lee, Nojun Kwak
In this work, we introduce a new algorithm for analyzing a diagram, which contains visual and textual information in an abstract and integrated way.
no code implementations • 27 Nov 2017 • YoungJoon Yoo, SeongUk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak
In addition to this performance enhancement problem, we show that the proposed PGN can be adopted to solve the classical adversarial problem without utilizing the information on the target classifier.
no code implementations • 5 Sep 2017 • Simyung Chang, Youngjoon Yoo, Jae-Seok Choi, Nojun Kwak
Our method learns hundreds to thousands of times faster than the conventional methods by learning only a handful of core cluster information, which shows that deep RL agents can effectively learn through shared knowledge from other agents.
1 code implementation • 17 Jul 2017 • Kyoungmin Lee, Jae-Seok Choi, Jisoo Jeong, Nojun Kwak
They are much faster than two-stage detectors that use region proposal networks (RPN), without much degradation in detection performance.
no code implementations • 2 Jul 2017 • Sangkuk Lee, Daesik Kim, Myunggi Lee, Jihye Hwang, Nojun Kwak
Through quantitative and qualitative evaluation, we show that our method is effective for retrieval of video segments using natural language queries.
1 code implementation • 30 Jun 2017 • Hyojin Park, Jisoo Jeong, Youngjoon Yoo, Nojun Kwak
Semantic segmentation, like other fields of computer vision, has seen a remarkable performance advance through the use of deep convolutional neural networks.
no code implementations • 26 May 2017 • Jisoo Jeong, Hyojin Park, Nojun Kwak
In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD.
no code implementations • 10 Aug 2016 • Sungheon Park, Jihye Hwang, Nojun Kwak
While there has been success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied.
Ranked #334 on 3D Human Pose Estimation on Human3.6M
no code implementations • CVPR 2015 • Minsik Lee, Jieun Lee, Hyeogjin Lee, Nojun Kwak
The proposed method shares the philosophy of the above subspace clustering methods, in that it is a self-expressive system based on a Hadamard product of a membership matrix.