no code implementations • ECCV 2020 • Juseung Yun, Byungjoo Kim, Junmo Kim
Active learning is a technique that can reduce the amount of labeling required.
no code implementations • 4 Feb 2025 • JooHyun Kwon, Hanbyel Cho, Junmo Kim
In this work, we propose an efficient dynamic scene editing method that is more scalable in terms of temporal dimension.
1 code implementation • 15 Jan 2025 • Jaemyung Yu, Jaehyun Choi, Dong-Jae Lee, HyeongGwon Hong, Junmo Kim
However, current methods depend on transformation labels and thus struggle with interdependency and complex transformations.
no code implementations • 3 Dec 2024 • Jaehyun Choi, Junwon Ko, Dong-Jae Lee, Junmo Kim
Open compound domain adaptation (OCDA) is a practical domain adaptation problem that consists of a source domain, target compound domain, and unseen open domain.
1 code implementation • 17 Oct 2024 • Jiwan Hur, Dong-Jae Lee, Gyojin Han, Jaehyun Choi, Yunho Jeon, Junmo Kim
A key factor in the performance of continuous diffusion models stems from the guidance methods, which enhance the sample quality at the expense of diversity.
1 code implementation • 10 Oct 2024 • Minchan Kwon, Gaeun Kim, Jongsuk Kim, Haeil Lee, Junmo Kim
In this paper, we propose StablePrompt, which strikes a balance between training stability and search space, mitigating the instability of RL and producing high-performance prompts.
1 code implementation • 7 Oct 2024 • Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim
In this paper, we propose a new method to enhance compositional understanding in pre-trained vision and language models (VLMs) without sacrificing performance in zero-shot multi-modal tasks.
no code implementations • 16 Jul 2024 • Haeil Lee, Hansang Lee, Seoyeon Gye, Junmo Kim
Instead of the traditional uniform distribution-based time step sampling, we introduce a Beta distribution-like sampling technique that prioritizes critical steps in the early and late stages of the process.
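The sampling idea in this snippet can be sketched numerically; the specific Beta parameters (alpha = beta = 0.5) and the step count are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000  # total diffusion time steps (assumed)

# Uniform baseline: every time step is equally likely.
uniform_steps = rng.integers(0, T, size=10_000)

# Beta-like sampling: a U-shaped Beta(0.5, 0.5) density concentrates
# probability mass on the early and late stages of the process.
beta_steps = (rng.beta(0.5, 0.5, size=10_000) * T).astype(int)

def tail_fraction(steps):
    """Fraction of sampled steps falling in the first or last 10%."""
    return np.mean((steps < T // 10) | (steps >= 9 * T // 10))

print(tail_fraction(uniform_steps), tail_fraction(beta_steps))
```

Under these assumptions, the Beta-like sampler places roughly twice as much probability mass on the critical early and late steps as the uniform baseline does.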
1 code implementation • 10 Jul 2024 • Jongsuk Kim, Jiwon Shin, Junmo Kim
In recent years, advancements in representation learning and language models have propelled Automated Captioning (AC) to new heights, enabling the generation of human-level descriptions.
1 code implementation • 13 Jun 2024 • Youngtaek Oh, Pyunghwan Ahn, Jinhyung Kim, Gwangmo Song, Soonyoung Lee, In So Kweon, Junmo Kim
Vision and language models (VLMs) such as CLIP have showcased remarkable zero-shot recognition abilities yet face challenges in visio-linguistic compositionality, particularly in linguistic comprehension and fine-grained image-text alignment.
no code implementations • 28 Mar 2024 • Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon
There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT).
1 code implementation • CVPR 2024 • Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao
In this work, we introduce a generative model as a data source for synthesizing hard images that benchmark deep models' robustness.
1 code implementation • 14 Mar 2024 • Jongsuk Kim, Hyeongkeun Lee, Kyeongha Rho, Junmo Kim, Joon Son Chung
Recent advancements in self-supervised audio-visual representation learning have demonstrated its potential to capture rich and comprehensive representations.
Ranked #3 on Audio Classification on VGGSound (using extra training data)
no code implementations • 22 Jan 2024 • Woonghyun Ka, Jae Young Lee, Jaehyun Choi, Junmo Kim
In stereo-matching knowledge distillation methods for self-supervised monocular depth estimation, the stereo-matching network's knowledge is distilled into a monocular depth network through pseudo-depth maps.
no code implementations • 22 Jan 2024 • Jae Young Lee, Woonghyun Ka, Jaehyun Choi, Junmo Kim
We propose a novel stereo-confidence that can be measured externally to various stereo-matching networks, offering an alternative input modality choice of the cost volume for learning-based approaches, especially in safety-critical systems.
1 code implementation • 18 Jan 2024 • Hyungmin Kim, Donghun Kim, Pyunghwan Ahn, Sungho Suh, Hansang Cho, Junmo Kim
With the minimal additional computation cost of image resizing, ContextMix enhances performance compared to existing augmentation techniques.
no code implementations • 22 Dec 2023 • Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim
Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms by 1.5 mAP on DOTA-v1.5, all while significantly reducing the model parameters to 16%.
no code implementations • 19 Dec 2023 • HyeongGwon Hong, Yooshin Cho, Hanbyel Cho, Jaesung Ahn, Junmo Kim
Gradient norm, which is commonly used as a vulnerability proxy for gradient inversion attack, cannot explain this as it remains constant regardless of the loss function for gradient matching.
no code implementations • 9 Dec 2023 • Sojeong Song, Seoyun Yang, Chang D. Yoo, Junmo Kim
To the best of our knowledge, in the field of steganography, this is the first work to introduce diverse modalities to both the secret and cover data.
no code implementations • 19 Nov 2023 • Hoang C. Nguyen, Haeil Lee, Junmo Kim
Transformers have become more popular in the vision domain in recent years, so there is a need for an effective way to interpret the Transformer model by visualizing it.
no code implementations • 2 Nov 2023 • Jiwan Hur, Jaehyun Choi, Gyojin Han, Dong-Jae Lee, Junmo Kim
Training diffusion models on limited datasets poses challenges in terms of limited generation capacity and expressiveness, leading to unsatisfactory results in various downstream tasks utilizing pretrained diffusion models, such as domain translation and text-guided image manipulation.
no code implementations • 5 Sep 2023 • TaeHoon Kim, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Mark Marsden, Alessandra Sala, Seung Hwan Kim, Bohyung Han, Kyoung Mu Lee, Honglak Lee, Kyounghoon Bae, Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo, Jianfeng Lu, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim, Wooyoung Kang, Won Young Jhoo, Byungseok Roh, Jonghwan Mun, Solgil Oh, Kenan Emir Ak, Gwang-Gook Lee, Yan Xu, Mingwei Shen, Kyomin Hwang, Wonsik Shin, Kamin Lee, Wonhark Park, Dongkwan Lee, Nojun Kwak, Yujin Wang, Yimu Wang, Tiancheng Gu, Xingchang Lv, Mingmao Sun
In this report, we introduce the NICE (New frontiers for zero-shot Image Captioning Evaluation) project and share the results and outcomes of the 2023 challenge.
no code implementations • ICCV 2023 • Seunghee Koh, Hyounguk Shon, Janghyeon Lee, Hyeong Gwon Hong, Junmo Kim
Whether the model successfully unlearns the source task is measured by piggyback learning accuracy (PL accuracy).
1 code implementation • 5 Aug 2023 • Hanbyel Cho, Junmo Kim
In contrast, we propose a generative approach framework, called "Diffusion-based Human Mesh Recovery (Diff-HMR)" that takes advantage of the denoising diffusion process to account for multiple plausible outcomes.
1 code implementation • ICCV 2023 • Hyungmin Kim, Sungho Suh, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
Existing methods for novel category discovery are limited by their reliance on labeled datasets and prior knowledge about the number of novel categories and the proportion of novel samples in the batch.
no code implementations • 18 Jul 2023 • Haeil Lee, Hansang Lee, Junmo Kim
Mixed Sample Data Augmentation (MSDA) techniques, such as Mixup, CutMix, and PuzzleMix, have been widely acknowledged for enhancing performance in a variety of tasks.
no code implementations • 2 Jul 2023 • Gyojin Han, Dong-Jae Lee, Jiwan Hur, Jaehyun Choi, Junmo Kim
The proposed framework employs INRs to represent the secret data, which can handle data of various modalities and resolutions.
no code implementations • CVPR 2023 • Hanbyel Cho, Yooshin Cho, Jaesung Ahn, Junmo Kim
This is because we have a mental model that allows us to imagine a person's appearance at different viewing directions from a given image and utilize the consistency between them for inference.
Ranked #41 on 3D Human Pose Estimation on 3DPW
no code implementations • 9 Jun 2023 • Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, Junmo Kim
While most lightweight monocular depth estimation methods have been developed using convolutional neural networks, the Transformer has gradually been utilized in monocular depth estimation recently.
no code implementations • 30 May 2023 • Doyeon Kim, Eunji Ko, Hyunsu Kim, Yunji Kim, Junho Kim, Dongchan Min, Junmo Kim, Sung Ju Hwang
Portrait stylization, which translates a real human face image into an artistically stylized image, has attracted considerable interest and many prior works have shown impressive quality in recent years.
no code implementations • 3 May 2023 • Yooshin Cho, Hanbyel Cho, Hyeong Gwon Hong, Jaesung Ahn, Dongmin Cho, JungWoo Chang, Junmo Kim
In our method, standard spatial attention and networks focus on unmasked regions, and extract mask-invariant features while minimizing the loss of the conventional Face Recognition (FR) performance.
1 code implementation • CVPR 2023 • Gyojin Han, Jaehyun Choi, Haeil Lee, Junmo Kim
Model inversion attacks are a type of privacy attack that reconstructs private data used to train a machine learning model, solely by accessing the model.
1 code implementation • CVPR 2023 • Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim
This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model.
no code implementations • 14 Mar 2023 • Chenshuang Zhang, Chaoning Zhang, Mengchun Zhang, In So Kweon, Junmo Kim
As a self-contained work, this survey starts with a brief introduction of how diffusion models work for image synthesis, followed by the background for text-conditioned image synthesis.
no code implementations • 16 Jan 2023 • Jiwan Hur, Jae Young Lee, Jaehyun Choi, Junmo Kim
To apply LF-DeOcc in both LF datasets, we propose a framework, ISTY, which is defined and divided into three roles: (1) extract LF features, (2) define the occlusion, and (3) inpaint occluded regions.
no code implementations • 1 Dec 2022 • Hansang Lee, Haeil Lee, Helen Hong, Junmo Kim
Our experiments show that (1) TTMA-DU more effectively differentiates correct and incorrect predictions compared to existing uncertainty measures due to mixup perturbation, and (2) TTMA-CSU provides information on class confusion and class similarity for both datasets.
no code implementations • 1 Dec 2022 • Hansang Lee, Haeil Lee, Helen Hong, Junmo Kim
In the classifier learning, we propose the NoiseMix method based on MixUp and BalancedMix methods by mixing the samples from the noisy and the clean label data.
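The MixUp-style interpolation underlying this idea can be sketched as follows; the Beta parameter and the clean/noisy pairing are illustrative assumptions, and the balanced sampling between the two pools is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x_clean, y_clean, x_noisy, y_noisy, alpha=0.2):
    # Draw a mixing coefficient and linearly interpolate both the
    # inputs and the one-hot labels, as in standard MixUp.
    lam = rng.beta(alpha, alpha)
    x = lam * x_clean + (1 - lam) * x_noisy
    y = lam * y_clean + (1 - lam) * y_noisy
    return x, y

x1, y1 = np.ones(4), np.array([1.0, 0.0])   # sample with a clean label
x2, y2 = np.zeros(4), np.array([0.0, 1.0])  # sample with a noisy label
x, y = mixup_pair(x1, y1, x2, y2)
print(y)  # mixed label remains a valid distribution over classes
```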
no code implementations • 29 Nov 2022 • Gyojin Han, Jaehyun Choi, Hyeong Gwon Hong, Junmo Kim
Training data generated by the proposed attack causes performance degradation on a specific task targeted by the attacker.
no code implementations • 20 Nov 2022 • Hyungmin Kim, Sungho Suh, SungHyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
Our model not only distills the deterministic and progressive knowledge which are from the pre-trained and previous epoch predictive probabilities but also transfers the knowledge of the deterministic predictive distributions using adversarial learning.
no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
no code implementations • 17 Aug 2022 • Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim
We show that this allows us to design a linear model where quadratic parameter regularization method is placed as the optimal continual learning policy, and at the same time enjoying the high performance of neural networks.
no code implementations • 27 Jul 2022 • Yooshin Cho, Youngsoo Kim, Hanbyel Cho, Jaesung Ahn, Hyeong Gwon Hong, Junmo Kim
Attention maps normalized with softmax operation highly rely upon magnitude of key vectors, and performance is degenerated if the magnitude information is removed.
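The magnitude dependence described here can be illustrated with a small numeric sketch (hypothetical query and key values, not the paper's setup): rescaling a single key's magnitude, without changing its direction, reshapes the entire softmax attention map.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
q = rng.normal(size=4)          # a query vector
keys = rng.normal(size=(5, 4))  # five key vectors

attn = softmax(keys @ q)

# Scale one key's magnitude only; its direction is unchanged.
keys_scaled = keys.copy()
keys_scaled[0] *= 10.0
attn_scaled = softmax(keys_scaled @ q)

print(np.round(attn, 3))
print(np.round(attn_scaled, 3))
```

Because softmax acts on raw dot products, the weights shift even though no key changed direction, which is the magnitude reliance the snippet refers to.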
no code implementations • 16 Jul 2022 • Hanbyel Cho, Yekang Lee, Jaemyung Yu, Junmo Kim
When a high-resolution (HR) image is degraded into a low-resolution (LR) image, the image loses some of the existing information.
no code implementations • 23 May 2022 • Eojindl Yi, JuYoung Yang, Junmo Kim
We evaluate the performance of our method on the recently proposed LiDAR segmentation UDA scenarios.
1 code implementation • 29 Apr 2022 • Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Junmo Kim
Owing to the disentangled feature space, our method can smoothly control the degree of the source features in a single model.
1 code implementation • IEEE Access 2022 • Youngsoo Kim, JEONGHYO HA, Yooshin Cho, Junmo Kim
Blind super-resolution (blind-SR) is an important task in the field of computer vision and has various applications in the real world.
Ranked #4 on Blind Super-Resolution on DIV2KRK - 2x upscaling
no code implementations • 8 Apr 2022 • Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, Junmo Kim
FreqAug stochastically removes specific frequency components from the video so that learned representation captures essential features more from the remaining information for various downstream tasks.
1 code implementation • 4 Feb 2022 • Pyunghwan Ahn, JuYoung Yang, Eojindl Yi, Chanho Lee, Junmo Kim
The point branch consists of MLPs, while the projection branch transforms point features into a 2D feature map and then applies 2D convolutions.
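A minimal sketch of such a projection branch, assuming a simple x-y scatter with max-pooling of colliding points and an 8×8 grid (both assumptions for illustration, not the paper's exact design):

```python
import numpy as np

def project_points(points, feats, grid=8):
    # Scatter per-point features into a 2D grid by x-y location,
    # max-pooling points that land in the same cell.
    fmap = np.zeros((grid, grid, feats.shape[1]))
    xy = points[:, :2]
    span = np.ptp(xy, axis=0) + 1e-9
    idx = ((xy - xy.min(axis=0)) / span * (grid - 1)).astype(int)
    for (i, j), f in zip(idx, feats):
        fmap[i, j] = np.maximum(fmap[i, j], f)
    return fmap  # a 2D feature map ready for 2D convolutions

pts = np.random.default_rng(0).uniform(size=(100, 3))   # toy point cloud
f = np.random.default_rng(1).uniform(size=(100, 16))    # per-point features
fm = project_points(pts, f)
print(fm.shape)
```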
3 code implementations • 19 Jan 2022 • Doyeon Kim, Woonghyun Ka, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim
Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks.
Ranked #30 on Monocular Depth Estimation on KITTI Eigen split
no code implementations • ICCV 2021 • JuYoung Yang, Pyunghwan Ahn, Doyeon Kim, Haeil Lee, Junmo Kim
With the development of 3D scanning technologies, 3D vision tasks have become a popular research area.
Ranked #9 on 3D Point Cloud Linear Classification on ModelNet40
Point cloud reconstruction
1 code implementation • ICCV 2021 • Hanbyel Cho, Yooshin Cho, Jaemyung Yu, Junmo Kim
The proposed method is useful in practice because it does not require camera calibration and additional computations in a testing set-up.
Ranked #197 on 3D Human Pose Estimation on Human3.6M
1 code implementation • 25 Oct 2021 • Seungbum Hong, Jihun Yoon, Junmo Kim, Min-Kook Choi
The SSKT is independent of the network structure and dataset, and is trained differently from existing knowledge transfer methods; hence, it has an advantage in that the prior knowledge acquired from various tasks can be naturally transferred during the training process to the target task.
no code implementations • 29 Sep 2021 • Sewhan Chun, Jae Young Lee, Junmo Kim
The policy search method with the best level of input-data dependency involves training a loss predictor network to estimate suitable transformations for each given input image independently, resulting in instance-level transformation extraction.
1 code implementation • CVPR 2022 • Beomyoung Kim, Youngjoon Yoo, Chaeeun Rhee, Junmo Kim
This semantic drift causes confusion between background and instances during training and consequently degrades the segmentation performance.
Image-level Supervised Instance Segmentation
Point-Supervised Instance Segmentation
1 code implementation • ICCV 2021 • Yooshin Cho, Hanbyel Cho, Youngsoo Kim, Junmo Kim
Batch Whitening is a technique that accelerates and stabilizes training by transforming input features to have a zero mean (Centering) and a unit variance (Scaling), and by removing linear correlations between channels (Decorrelation).
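The three operations can be sketched with ZCA whitening on a feature batch; this is a minimal NumPy illustration, not the paper's batch-whitening implementation:

```python
import numpy as np

def batch_whiten(x, eps=1e-8):
    mu = x.mean(axis=0)
    xc = x - mu                               # Centering: zero mean
    cov = xc.T @ xc / len(xc)                 # channel covariance
    w, v = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    whitener = v @ np.diag(w ** -0.5) @ v.T   # inverse square root of cov
    return xc @ whitener                      # Scaling + Decorrelation

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 4))  # correlated channels
y = batch_whiten(x)
cov_y = y.T @ y / len(y)  # close to the identity matrix after whitening
print(np.round(cov_y, 3))
```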
1 code implementation • 23 Apr 2021 • Beomyoung Kim, Janghyeon Lee, Sihaeng Lee, Doyeon Kim, Junmo Kim
We present a novel approach for oriented object detection, named TricubeNet, which localizes oriented objects using visual cues ($i.e.,$ heatmap) instead of oriented box offset regression.
no code implementations • CVPR 2021 • Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim
Directly providing the label to the data (Positive Learning; PL) risks allowing CNNs to memorize contaminated labels in the case of noisy data. Based on this fact, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective in preventing overfitting to noisy data, as it reduces the risk of providing a faulty target.
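The complementary-label idea can be sketched as a loss that pushes down the probability of a class the sample does not belong to; this is a minimal illustration of negative learning, not the authors' full NLNL pipeline:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def negative_learning_loss(logits, complementary_label):
    # Penalize confidence in a label the sample does NOT belong to:
    # loss = -log(1 - p_complementary).
    p = softmax(logits)
    return -np.log(1.0 - p[complementary_label] + 1e-12)

logits = np.array([2.0, 0.5, -1.0])
# "This image is NOT class 0": the loss is large because the model
# currently assigns class 0 a high probability.
print(negative_learning_loss(logits, 0), negative_learning_loss(logits, 2))
```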
1 code implementation • 12 Mar 2021 • Beomyoung Kim, Sangeun Han, Junmo Kim
However, since localization maps obtained from the classifier focus only on sparse discriminative object regions, it is difficult to generate high-quality segmentation labels.
1 code implementation • 5 Mar 2021 • Jeongwoo Ju, Heechul Jung, Yoonju Oh, Junmo Kim
Self-supervised contrastive learning offers a means of learning informative features from a pool of unlabeled data.
1 code implementation • 1 Jan 2021 • Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim
For the pruning and retraining phase, we examine whether the pruned-and-retrained network indeed benefits from the pretrained network.
1 code implementation • 5 Nov 2020 • SeulGi Hong, Heonjin Ha, Junmo Kim, Min-Kook Choi
On the other hand, with the advent of data augmentation metrics as the regularizer on general deep learning, we notice that there can be a mutual influence between the method of unlabeled data selection and the data augmentation-based regularization techniques in active learning scenarios.
no code implementations • 2 Nov 2020 • Byungju Kim, Junho Yim, Junmo Kim
Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.
no code implementations • 2 Nov 2020 • JuYoung Yang, Chanho Lee, Pyunghwan Ahn, Haeil Lee, Eojindl Yi, Junmo Kim
In this paper, we propose a simple and efficient architecture named point projection and back-projection network (PBP-Net), which leverages 2D CNNs for the 3D point cloud segmentation.
no code implementations • 29 Oct 2020 • Byungju Kim, Jaeyoung Lee, KyungSu Kim, Sungjin Kim, Junmo Kim
In this paper, we introduce a novel algorithm, Incremental Class Learning with Attribute Sharing (ICLAS), for incremental class learning with deep neural networks.
no code implementations • 4 Sep 2020 • Do-Yeon Kim, Donggyu Joo, Junmo Kim
Advances in technology have led to the development of methods that can create desired visual multimedia.
no code implementations • CVPR 2020 • Janghyeon Lee, Hyeong Gwon Hong, Donggyu Joo, Junmo Kim
We propose a quadratic penalty method for continual learning of neural networks that contain batch normalization (BN) layers.
1 code implementation • 17 Feb 2020 • Janghyeon Lee, Donggyu Joo, Hyeong Gwon Hong, Junmo Kim
We propose a novel continual learning method called Residual Continual Learning (ResCL).
no code implementations • 15 Dec 2019 • ByungIn Yoo, Tristan Sylvain, Yoshua Bengio, Junmo Kim
In this paper, we propose a Generative Translation Classification Network (GTCN) for improving visual classification accuracy in settings where classes are visually similar and data is scarce.
1 code implementation • 4 Dec 2019 • Byungju Kim, Junmo Kim
Inspired by observations, we investigate how the class imbalance affects the decision boundary and deteriorates the performance.
no code implementations • 3 Dec 2019 • Hyeong Gwon Hong, Pyunghwan Ahn, Junmo Kim
Transferable neural architecture search can be viewed as a binary optimization problem where a single optimal path should be selected among candidate paths in each edge within the repeated cell block of the directed acyclic graph form.
no code implementations • 26 Sep 2019 • Woo-han Yun, Taewoo Kim, Jaeyeon Lee, Jaehong Kim, Junmo Kim
Then, we show that the original cut-and-paste approach suffers from a new domain gap problem, unbalanced domain gaps, because it has two separate source domains for foreground and background, unlike the conventional domain shift problem.
1 code implementation • ICCV 2019 • Youngdong Kim, Junho Yim, Juseung Yun, Junmo Kim
The classical method of training CNNs is by labeling images in a supervised manner as in "input image belongs to this label" (Positive Learning; PL), which is a fast and accurate method if the labels are assigned correctly to all images.
no code implementations • 23 Jul 2019 • Sang-Yun Oh, Hye-Jin S. Kim, Jongeun Lee, Junmo Kim
We introduce Repetition-Reduction network (RRNet) for resource-constrained depth estimation, offering significantly improved efficiency in terms of computation, memory and energy consumption.
4 code implementations • CVPR 2019 • Byungju Kim, Hyunwoo Kim, Kyung-Su Kim, Sungjin Kim, Junmo Kim
We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased.
no code implementations • 11 Nov 2018 • Yunho Jeon, Junmo Kim
Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer.
2 code implementations • NeurIPS 2018 • Yunho Jeon, Junmo Kim
To cope with various convolutions, we propose a new shift operation called active shift layer (ASL) that formulates the amount of shift as a learnable function with shift parameters.
no code implementations • CVPR 2018 • Donggyu Joo, Do-Yeon Kim, Junmo Kim
Generating a novel image by manipulating two input images is an interesting research problem in the study of generative adversarial networks (GANs).
no code implementations • 16 Nov 2017 • Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim
Expanding the domain that deep neural network has already learned without accessing old domain data is a challenging task because deep neural networks forget previously learned information when learning new data from a new domain.
no code implementations • 12 Sep 2017 • Han S. Lee, Heechul Jung, Alex A. Agarwal, Junmo Kim
To verify how DNNs understand the relatedness between object classes, we conducted experiments on the image database provided in cognitive psychology.
no code implementations • 11 Sep 2017 • Han S. Lee, Alex A. Agarwal, Junmo Kim
In a recent decade, ImageNet has become the most notable and powerful benchmark database in computer vision and machine learning community.
1 code implementation • CVPR 2017 • Junho Yim, Donggyu Joo, Jihoon Bae, Junmo Kim
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN.
1 code implementation • CVPR 2017 • Yunho Jeon, Junmo Kim
The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself.
no code implementations • 21 Mar 2017 • Youngsung Kim, ByungIn Yoo, Youngjun Kwak, Changkyu Choi, Junmo Kim
In this paper, we propose to utilize contrastive representation that embeds a distinctive expressive factor for a discriminative purpose.
Facial Expression Recognition (FER)
no code implementations • 21 Feb 2017 • Byungju Kim, Youngsoo Kim, Yeakang Lee, Junmo Kim
This paper proposes a branched residual network for image classification.
7 code implementations • CVPR 2017 • Dongyoon Han, Jiwhan Kim, Junmo Kim
This design, which is discussed in depth together with our new insights, has proven to be an effective means of improving generalization ability.
Ranked #105 on Image Classification on CIFAR-10
no code implementations • 1 Jul 2016 • Heechul Jung, Jeongwoo Ju, Minju Jung, Junmo Kim
Surprisingly, our method is very effective at forgetting less of the information in the source domain, and we show its effectiveness through several experiments.
1 code implementation • 17 May 2016 • Minseok Park, Hanxiang Li, Junmo Kim
In this paper, we introduce the HARRISON dataset, a benchmark on hashtag recommendation for real world images in social networks.
2 code implementations • CVPR 2016 • Gayoung Lee, Yu-Wing Tai, Junmo Kim
Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene.
no code implementations • ICCV 2015 • Heechul Jung, Sihaeng Lee, Junho Yim, Sunjeong Park, Junmo Kim
Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.
Facial Expression Recognition (FER)
no code implementations • ICCV 2015 • Mohamed Souiai, Martin R. Oswald, Youngwook Kee, Junmo Kim, Marc Pollefeys, Daniel Cremers
Despite their enormous success in solving hard combinatorial problems, convex relaxation approaches often suffer from the fact that the computed solutions are far from binary and that subsequent heuristic binarization may substantially degrade the quality of computed solutions.
no code implementations • CVPR 2015 • Junho Yim, Heechul Jung, ByungIn Yoo, Changkyu Choi, Dusik Park, Junmo Kim
This paper proposes a new deep architecture based on a novel type of multitask learning, which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity.
no code implementations • CVPR 2015 • Dongyoon Han, Junmo Kim
Unlike the recent unsupervised feature selection methods, SOCFS does not explicitly use the pre-computed local structure information for data points represented as additional terms of their objective functions, but directly computes latent cluster information by the target matrix conducting orthogonal basis clustering in a single unified term of the proposed objective function.
no code implementations • 5 Mar 2015 • Heechul Jung, Sihaeng Lee, Sunjeong Park, Injae Lee, Chunghyun Ahn, Junmo Kim
Furthermore, one of the main contributions of this paper is that our deep network catches the facial action points automatically.
Facial Expression Recognition (FER)
no code implementations • CVPR 2014 • Jiwhan Kim, Dongyoon Han, Yu-Wing Tai, Junmo Kim
By mapping a low dimensional RGB color to a feature vector in a high-dimensional color space, we show that we can linearly separate the salient regions from the background by finding an optimal linear combination of color coefficients in the high-dimensional color space.
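The lifting idea can be sketched by mapping RGB to a handful of polynomial color features and fitting a linear classifier on the lifted features; the feature set and the least-squares fit here are illustrative assumptions, not the paper's exact high-dimensional color transform:

```python
import numpy as np

def highdim_color_features(rgb):
    # Lift 3D RGB colors into a 9D color space: raw channels,
    # pairwise products, and squares (a hypothetical lifting).
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, g * b, r * b, r**2, g**2, b**2], axis=1)

rng = np.random.default_rng(0)
salient = rng.uniform(0.6, 1.0, size=(50, 3))     # toy salient-region colors
background = rng.uniform(0.0, 0.4, size=(50, 3))  # toy background colors
X = highdim_color_features(np.vstack([salient, background]))
y = np.concatenate([np.ones(50), -np.ones(50)])

# Find a linear combination of color coefficients via least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```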
no code implementations • CVPR 2014 • Youngwook Kee, Mohamed Souiai, Daniel Cremers, Junmo Kim
We propose an optimization algorithm for mutual-information-based unsupervised figure-ground separation.
no code implementations • CVPR 2014 • Youngwook Kee, Junmo Kim
In this paper, we revisit the phase-field approximation of Ambrosio and Tortorelli for the Mumford--Shah functional.
no code implementations • CVPR 2014 • Heechul Jung, Jeongwoo Ju, Junmo Kim
For evaluation of our algorithm, Hopkins 155 dataset, which is a representative test set for rigid motion segmentation, is adopted; it consists of two and three rigid motions.