1 code implementation • ECCV 2020 • Minho Shim, Hsuan-I Ho, Jinhyung Kim, Dongyoon Wee
Person re-identification (re-ID) is the problem of visually identifying a person given a database of identities.
Image-To-Video Person Re-Identification • Video-Based Person Re-Identification
no code implementations • 2 Jun 2023 • Minho Shim, Taeoh Kim, Jinhyung Kim, Dongyoon Wee
Summarizing a video requires a diverse understanding of the video, ranging from recognizing scenes to evaluating whether each frame is essential enough to be selected for the summary.
no code implementations • CVPR 2023 • Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, Hyeran Byun
Temporal action detection aims to predict the time intervals and the classes of action instances in the video.
no code implementations • 10 Mar 2023 • Jaehyeok Kim, Dongyoon Wee, Dan Xu
In this paper, we tackle this problem by proposing a set of learnable identity codes to expand the capability of the framework for multi-identity free-viewpoint rendering, and an effective pose-conditioned code query mechanism to finely model the pose-dependent non-rigid motions.
1 code implementation • ICCV 2023 • ChangHee Yang, Kyeongbo Kong, SungJun Min, Dongyoon Wee, Ho-Deok Jang, Geonho Cha, SukJu Kang
This paper addresses the problem of three-dimensional (3D) human mesh estimation in complex poses and occluded situations.
Ranked #1 on 2D Human Pose Estimation on OCHuman
1 code implementation • 21 Oct 2022 • Nicolas Monet, Dongyoon Wee
This technical report introduces our solution, MEEV, proposed to the EgoBody Challenge at ECCV 2022.
Ranked #1 on 3D human pose and shape estimation on EgoBody (using extra training data)
no code implementations • 30 Jun 2022 • Taeoh Kim, Jinhyung Kim, Minho Shim, Sangdoo Yun, Myunggu Kang, Dongyoon Wee, Sangyoun Lee
The magnitude of augmentation operations on each frame is varied by an effective mechanism, Fourier Sampling, which parameterizes diverse, smooth, and realistic temporal variations.
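The description above suggests building a smooth per-frame magnitude schedule from a few random sinusoids. A minimal sketch of that idea, assuming a simple Fourier-series form (the function name, term count, and base magnitude are illustrative, not the paper's exact formulation):

```python
import numpy as np

def fourier_sampling(num_frames, num_terms=3, base_magnitude=0.5, rng=None):
    """Sample a smooth per-frame augmentation magnitude by summing a few
    random sinusoids around a base value (assumed form of the mechanism)."""
    rng = rng or np.random.default_rng()
    t = np.arange(num_frames) / num_frames
    m = np.full(num_frames, base_magnitude)
    for k in range(1, num_terms + 1):
        # Random amplitude and phase per harmonic give diverse yet smooth curves
        amp = rng.uniform(0, base_magnitude / num_terms)
        phase = rng.uniform(0, 2 * np.pi)
        m += amp * np.sin(2 * np.pi * k * t + phase)
    # Clamp to a valid augmentation-magnitude range
    return np.clip(m, 0.0, 1.0)

magnitudes = fourier_sampling(16, rng=np.random.default_rng(0))
```

Because only low-order harmonics are summed, neighboring frames receive similar magnitudes, so the augmentation varies smoothly over time rather than flickering frame to frame.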
no code implementations • 10 Jun 2022 • Geonho Cha, Chaehun Shin, Sungroh Yoon, Dongyoon Wee
Finally, for each element in the feature set, the aggregation features are extracted by calculating the weighted means and variances, where the weights are derived from the similarity distributions.
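The aggregation step described above can be sketched as a similarity-weighted mean and variance, where the weights come from a softmax over similarities to a query element. This is a minimal illustration under assumed details (cosine similarity, softmax weighting); the names are not from the paper:

```python
import numpy as np

def aggregate(features, query):
    """For a query feature, compute the weighted mean and variance over a
    feature set, with weights from a softmax over cosine similarities."""
    # Cosine similarity between the query and every element of the set
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sim = f @ q
    # Softmax turns the similarity distribution into aggregation weights
    w = np.exp(sim - sim.max())
    w /= w.sum()
    # Weighted mean and weighted variance per feature dimension
    mean = (w[:, None] * features).sum(axis=0)
    var = (w[:, None] * (features - mean) ** 2).sum(axis=0)
    return mean, var

feats = np.random.default_rng(0).normal(size=(5, 4))
mean, var = aggregate(feats, feats[0])
```

Elements similar to the query dominate the statistics, so the aggregated mean and variance summarize the most relevant part of the feature set.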
no code implementations • 20 May 2022 • Geonho Cha, Ho-Deok Jang, Dongyoon Wee
Most previous methods have alleviated this issue by removing the dynamic regions in the photometric loss formulation based on the masks estimated from another module, making it difficult to fully utilize the training images.
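The masking strategy attributed to previous methods can be sketched as a photometric loss averaged only over pixels marked static, so dynamic regions contribute nothing during training. This is an illustrative sketch of that strategy, not the paper's own code (an L1 photometric error is assumed):

```python
import numpy as np

def masked_photometric_loss(pred, target, static_mask):
    """L1 photometric loss averaged over pixels the mask marks as static;
    dynamic-region pixels (mask == 0) are excluded from the loss."""
    err = np.abs(pred - target)
    # Average only over the static pixels; guard against an all-zero mask
    return (err * static_mask).sum() / max(static_mask.sum(), 1)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((2, 2))
static_mask = np.array([[1.0, 0.0], [1.0, 0.0]])
loss = masked_photometric_loss(pred, target, static_mask)
```

As the snippet makes explicit, every masked-out pixel is simply discarded, which is exactly why such methods cannot fully utilize the training images.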
1 code implementation • 2 May 2022 • Jeongseok Hyun, Myunggu Kang, Dongyoon Wee, Dit-yan Yeung
The strong edge features allow SGT to track targets among candidates drawn from the top-K scored detections with a large K. As a result, even low-scored detections can be tracked, and missed detections are also recovered.
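The candidate-selection step can be illustrated in a few lines: keeping the K highest-scored detections with a large K, rather than applying an absolute confidence threshold, lets low-scored but valid targets survive into tracking. A hypothetical sketch (the function name and data layout are assumptions):

```python
def select_candidates(detections, k=100):
    """Keep the K highest-scored detections as tracking candidates,
    with no absolute confidence cutoff."""
    return sorted(detections, key=lambda d: d[0], reverse=True)[:k]

# With a large K, even the low-scored detection (0.1, 'b') is retained
dets = [(0.9, 'a'), (0.1, 'b'), (0.5, 'c')]
top = select_candidates(dets, k=3)
```

Contrast this with a fixed threshold such as score > 0.3, which would discard detection 'b' outright and make the missed target unrecoverable.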
Ranked #2 on Multi-Object Tracking on HiEve
no code implementations • 8 Apr 2022 • Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, Junmo Kim
FreqAug stochastically removes specific frequency components from the video so that the learned representation captures essential features from the remaining information for various downstream tasks.
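A minimal sketch of such frequency-domain augmentation, assuming one plausible form (FFT the clip, zero the high-frequency half along a randomly chosen axis, inverse-FFT back); the function name, band choice, and probability are assumptions, not FreqAug's exact recipe:

```python
import numpy as np

def freq_aug(video, apply_prob=0.5, rng=None):
    """Stochastically remove high-frequency components from a (T, H, W)
    clip along one randomly chosen axis (assumed form of the augmentation)."""
    rng = rng or np.random.default_rng()
    if rng.random() > apply_prob:
        return video  # skip augmentation for this sample
    # Move to the frequency domain with the zero frequency centered
    spec = np.fft.fftshift(np.fft.fftn(video))
    axis = rng.integers(0, video.ndim)  # temporal or one spatial axis
    n = video.shape[axis]
    # Keep only the central (low-frequency) half-band along that axis
    mask = np.zeros(n, dtype=bool)
    mask[n // 4 : 3 * n // 4] = True
    shape = [1] * video.ndim
    shape[axis] = n
    spec = spec * mask.reshape(shape)
    # Back to the pixel domain; imaginary residue is numerical noise
    return np.fft.ifftn(np.fft.ifftshift(spec)).real

clip = np.random.default_rng(1).normal(size=(8, 16, 16))
out = freq_aug(clip, apply_prob=1.0, rng=np.random.default_rng(2))
```

Dropping a frequency band forces the representation to rely on the surviving components, which is the self-supervised pressure the snippet is meant to illustrate.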