no code implementations • 24 Mar 2022 • Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park
The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., many cloth geometric configurations exist for a given pose depending on how the body has moved.
no code implementations • 1 Dec 2021 • Zhijian Yang, Xiaoran Fan, Volkan Isler, Hyun Soo Park
Based on this insight, we introduce a time-invariant transfer function called pose kernel -- the impulse response of audio signals induced by the body pose.
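A toy sketch of the time-invariance idea behind such a transfer function (all names and signals here are illustrative, not the paper's implementation): the received audio is modeled as the transmitted signal convolved with the pose kernel, so delaying the input simply delays the output.

```python
import numpy as np

# Illustrative impulse response standing in for a "pose kernel".
pose_kernel = np.exp(-np.arange(32) / 8.0)

def simulate_received(transmitted, kernel=pose_kernel):
    """Received audio = transmitted signal convolved with the pose kernel."""
    return np.convolve(transmitted, kernel)

# Time invariance: delaying the transmitted signal simply delays the output.
x = np.sin(2 * np.pi * 0.05 * np.arange(128))
y = simulate_received(x)
y_delayed = simulate_received(np.concatenate([np.zeros(10), x]))
```

Because the system is linear and time-invariant, the delayed response matches the original response shifted by the same ten samples.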
no code implementations • 13 Oct 2021 • Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.
no code implementations • 1 Oct 2021 • Praneet C. Bala, Jan Zimmermann, Hyun Soo Park, Benjamin Y. Hayden
We hypothesize that there exists a shared representation between the primary and secondary landmarks because the range of motion of the secondary landmarks can be approximately spanned by that of the primary landmarks.
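The span hypothesis can be illustrated with a minimal sketch (assumed toy data, not the paper's method): if secondary landmark motion lies in the linear span of the primary landmarks' motion, a least-squares map learned from trajectories predicts the secondary landmarks from the primary ones alone.

```python
import numpy as np

rng = np.random.default_rng(1)
T, P, S = 200, 5, 3                      # frames, primary and secondary counts
primary = rng.standard_normal((T, P))    # primary landmark trajectories
W_true = rng.standard_normal((P, S))     # hidden linear relation (toy)
secondary = primary @ W_true             # secondary motion spanned by primary

# Recover the shared linear map from data, then reconstruct the secondary
# landmarks from the primary trajectories alone.
W_est, *_ = np.linalg.lstsq(primary, secondary, rcond=None)
reconstructed = primary @ W_est
```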
no code implementations • 30 Sep 2021 • Jae Shin Yoon, Zhixuan Yu, Jaesik Park, Hyun Soo Park
We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to the existing datasets of human body expressions with limited views and subjects such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets.
no code implementations • 20 Sep 2021 • Zhixuan Yu, Haozheng Yu, Long Sha, Sujoy Ganguly, Hyun Soo Park
(2) Geometric consistency: every point in the continuous correspondence fields must collectively satisfy multiview consistency.
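The consistency constraint can be sketched with textbook two-view geometry (a toy setup, not the paper's code): a 3D point is multiview-consistent if, once triangulated, it reprojects onto its observed 2D correspondence in every view.

```python
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single two-view correspondence."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # null vector of A
    return X / X[3]

# Two toy pinhole cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])            # 3D point (homogeneous)
x1, x2 = project(P1, X_true), project(P2, X_true)  # its 2D observations

X_hat = triangulate(P1, P2, x1, x2)
err = (np.linalg.norm(project(P1, X_hat) - x1)
       + np.linalg.norm(project(P2, X_hat) - x2))
```

With exact correspondences the reprojection error is zero; with noisy ones, the residual `err` measures how far the point is from satisfying multiview consistency.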
no code implementations • 30 Aug 2021 • Benjamin Hayden, Hyun Soo Park, Jan Zimmermann
The availability of such data has in turn spurred developments in data analysis techniques.
no code implementations • CVPR 2021 • Jingfan Guo, Jie Li, Rahul Narain, Hyun Soo Park
Inspired by the theory of optimal control, we optimize the body states such that the simulated cloth motion is matched to the point cloud measurements, and the analytic gradient of the simulator is back-propagated to update the body states.
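The optimization scheme can be sketched in miniature (a linear stand-in for the cloth simulator; everything here is an assumption for illustration, not the paper's solver): a differentiable simulation is matched to measurements by descending the analytic gradient with respect to the body state.

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.3, 1.5]])    # linear stand-in for the simulator

def simulate(state):
    """Differentiable 'simulator'; the real one would be a cloth solver."""
    return A @ state

def loss_and_grad(state, target):
    """Data term against the measurements and its analytic gradient."""
    r = simulate(state) - target
    return 0.5 * r @ r, A.T @ r           # gradient back-propagated through sim

target = simulate(np.array([1.0, -2.0]))  # measurements from a known state
state = np.zeros(2)                       # initial body-state guess
for _ in range(500):
    _, g = loss_and_grad(state, target)
    state -= 0.1 * g                      # gradient descent on the state
```

Descent recovers the state that generated the measurements, which is the optimal-control view: the simulator is part of the objective, and its gradient drives the state update.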
1 code implementation • CVPR 2021 • Yasamin Jafarian, Hyun Soo Park
A key challenge of learning a visual representation for the high-fidelity 3D geometry of dressed humans lies in the limited availability of ground truth data (e.g., 3D scanned models), which degrades the performance of 3D human reconstruction when applied to real-world imagery.
no code implementations • 29 Jan 2021 • Jae Shin Yoon, Kihwan Kim, Jan Kautz, Hyun Soo Park
In this paper, we present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
no code implementations • CVPR 2021 • Jae Shin Yoon, Lingjie Liu, Vladislav Golyanik, Kripasindhu Sarkar, Hyun Soo Park, Christian Theobalt
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
1 code implementation • ECCV 2020 • Tien Do, Khiem Vuong, Stergios I. Roumeliotis, Hyun Soo Park
Our two main hypotheses are: (1) visual scene layout is indicative of the gravity direction; and (2) not all surfaces are equally represented by a learned estimator due to the structured distribution of the training data; thus, for each tilted image there exists a transformation that makes it more responsive to the learned estimator than others.
no code implementations • CVPR 2020 • Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, Jan Kautz
Our insight is that although the depth estimated from a single view is inconsistent in scale and quality with the other views, it can be used to reason about the globally coherent geometry of dynamic contents.
no code implementations • CVPR 2019 • Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park
In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera.
no code implementations • 4 Dec 2018 • Yuan Yao, Hyun Soo Park
We hypothesize that it is possible to leverage multiview image streams that are linked through the underlying 3D geometry, which can provide an additional supervisory signal to train a segmentation model.
no code implementations • 2 Dec 2018 • Jayant Sharma, Zixing Wang, Alberto Speranzon, Vijay Venkataraman, Hyun Soo Park
We present a new method to localize a camera within a previously unseen environment perceived from an egocentric point of view.
1 code implementation • CVPR 2020 • Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, Hyun Soo Park
This paper presents a new large multiview dataset called HUMBI for human body expressions with natural clothing.
1 code implementation • 27 Nov 2018 • Yilun Zhang, Hyun Soo Park
This paper presents a semi-supervised learning framework to train a keypoint detector using multiview image streams given limited labeled data (typically <4%).
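A schematic of such a semi-supervised objective (an assumption for illustration, not the paper's training code): a supervised keypoint loss on the few labeled frames is combined with a multiview-consistency loss on the unlabeled ones, where one view's prediction is warped into another view's frame.

```python
import numpy as np

def supervised_loss(pred, label):
    """Keypoint regression loss on the small labeled subset."""
    return np.mean((pred - label) ** 2)

def consistency_loss(pred_view_a, pred_view_b_warped):
    """Unsupervised term: view-b prediction warped into view a should agree."""
    return np.mean((pred_view_a - pred_view_b_warped) ** 2)

pred_a = np.array([[10.0, 12.0]])          # keypoint predicted in view a
pred_b_warped = np.array([[10.5, 11.5]])   # view-b prediction warped to view a
label = np.array([[10.0, 12.0]])           # ground-truth label (labeled frame)

# Hypothetical weighting of the two terms.
total = supervised_loss(pred_a, label) + 0.5 * consistency_loss(pred_a, pred_b_warped)
```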
1 code implementation • ICCV 2019 • Yuan Yao, Yasamin Jafarian, Hyun Soo Park
While multiview geometry can be used to self-supervise the unlabeled data, integrating the geometry into learning a keypoint detector is challenging due to representation mismatch.
no code implementations • CVPR 2018 • Jae Shin Yoon, Ziwei Li, Hyun Soo Park
This paper presents a method to reconstruct dense semantic trajectory stream of human interactions in 3D from synchronized multiple videos.
no code implementations • CVPR 2017 • Shan Su, Jung Pyo Hong, Jianbo Shi, Hyun Soo Park
This paper presents a method to predict the future movements (location and gaze direction) of basketball players as a whole from their first-person videos.
no code implementations • 1 Apr 2017 • Shan Su, Jianbo Shi, Hyun Soo Park
Our conjecture is that the spatial arrangement of a first-person visual scene is deployed to afford an action, and therefore, the action can be inversely used to synthesize a new scene such that the action is feasible.
no code implementations • 29 Nov 2016 • Shan Su, Jung Pyo Hong, Jianbo Shi, Hyun Soo Park
This paper presents a method to predict the future movements (location and gaze direction) of basketball players as a whole from their first-person videos.
no code implementations • ICCV 2017 • Gedas Bertasius, Hyun Soo Park, Stella X. Yu, Jianbo Shi
Finally, we use this feature to learn a basketball assessment model from pairs of labeled first-person basketball videos, for which a basketball expert indicates which of the two players is better.
1 code implementation • ICCV 2017 • Gedas Bertasius, Hyun Soo Park, Stella X. Yu, Jianbo Shi
In this work, we show that we can detect important objects in first-person images without supervision from the camera wearer or even third-person labelers.
no code implementations • CVPR 2016 • Hyun Soo Park, Jyh-Jing Hwang, Jianbo Shi
In this paper, we focus on the problem of Force from Motion: decoding the sensation of 1) passive forces such as gravity, 2) the physical scale of motion (speed) and space, and 3) active forces exerted by the observer, such as pedaling a bike or banking on a ski turn.
no code implementations • CVPR 2016 • Hyun Soo Park, Jyh-Jing Hwang, Yedong Niu, Jianbo Shi
We refine them by minimizing a cost function that describes compatibility between the obstacles in the EgoRetinal map and trajectories.
no code implementations • 15 Mar 2016 • Gedas Bertasius, Hyun Soo Park, Stella X. Yu, Jianbo Shi
Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close.
no code implementations • 9 Nov 2015 • Gedas Bertasius, Hyun Soo Park, Jianbo Shi
We empirically show that this representation can accurately characterize the egocentric object prior by testing it on an egocentric RGBD dataset for three tasks: 3D saliency detection, future saliency prediction, and interaction classification.
no code implementations • 7 Sep 2015 • Hyun Soo Park, Yedong Niu, Jianbo Shi
As a byproduct of the predicted trajectories of ego-motion, we discover in the image the empty space occluded by foreground objects.
no code implementations • CVPR 2015 • Hyun Soo Park, Jianbo Shi
An ensemble classifier is trained to learn the geometric relationship.
no code implementations • CVPR 2014 • Hanbyul Joo, Hyun Soo Park, Yaser Sheikh
Many traditional challenges in reconstructing 3D motion, such as matching across wide baselines and handling occlusion, reduce in significance as the number of unique viewpoints increases.