1 code implementation • SIGGRAPH 2024 • Peizhuo Li, Sebastian Starke, Yuting Ye, Olga Sorkine-Hornung
We present a new approach for understanding the periodicity structure and semantics of motion datasets, independently of the morphology and skeletal structure of characters.
no code implementations • 14 Jun 2024 • Lingni Ma, Yuting Ye, Fangzhou Hong, Vladimir Guzov, Yifeng Jiang, Rowan Postyeni, Luis Pesqueira, Alexander Gamino, Vijay Baiyya, Hyo Jin Kim, Kevin Bailey, David Soriano Fosas, C. Karen Liu, Ziwei Liu, Jakob Engel, Renzo De Nardi, Richard Newcombe
To the best of our knowledge, the Nymeria dataset is the world's largest in-the-wild collection of human motion with natural and diverse activities; the first of its kind to provide synchronized and localized multi-device multimodal egocentric data; and the world's largest dataset with motion-language descriptions.
1 code implementation • 14 Feb 2024 • Dongseok Yang, Jiho Kang, Lingni Ma, Joseph Greer, Yuting Ye, Sung-Hee Lee
We then condition the otherwise ambiguous lower-body pose with the predictions of foot contact and upper-body pose in a two-stage model.
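The two-stage idea above can be sketched in a few lines: stage 1 predicts upper-body pose and foot contacts from the sparse sensor input, and stage 2 resolves the otherwise ambiguous lower body conditioned on those predictions. All function names and the trivial linear "models" below are illustrative stand-ins, not the paper's actual networks.

```python
# Hypothetical sketch of a two-stage pose pipeline: stage 1 predicts
# upper-body pose and binary foot contacts; stage 2 conditions the
# lower-body estimate on the stage-1 outputs.

def stage1_upper_and_contacts(sensor_features):
    """Predict upper-body pose and left/right foot-contact labels."""
    upper_pose = [2.0 * x for x in sensor_features]    # stand-in regressor
    contacts = [x > 0.5 for x in sensor_features[:2]]  # left/right foot
    return upper_pose, contacts

def stage2_lower_body(sensor_features, upper_pose, contacts):
    """Resolve the ambiguous lower body conditioned on stage-1 outputs."""
    bias = sum(1.0 for c in contacts if c)             # contact prior
    return [x + u + bias for x, u in zip(sensor_features, upper_pose)]

def track_full_body(sensor_features):
    upper, contacts = stage1_upper_and_contacts(sensor_features)
    lower = stage2_lower_body(sensor_features, upper, contacts)
    return upper + lower
```

The point of the decomposition is that foot contact and upper-body pose are comparatively easy to infer from sparse sensors, and they sharply constrain which lower-body poses are plausible.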
no code implementations • 16 Oct 2023 • Deok-Kyeong Jang, Yuting Ye, Jungdam Won, Sung-Hee Lee
Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character whose context best matches that of the input motion feature.
no code implementations • 24 Sep 2023 • Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
no code implementations • 4 Jul 2023 • Daniele Reda, Jungdam Won, Yuting Ye, Michiel Van de Panne, Alexander Winkler
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
no code implementations • 9 Jun 2023 • Sunmin Lee, Sebastian Starke, Yuting Ye, Jungdam Won, Alexander Winkler
Most existing methods for motion tracking avoid environment interactions other than foot-floor contact, because such interactions involve complex dynamics and hard constraints.
no code implementations • 30 May 2023 • Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei Xiao
Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles, but they are prohibitively slow.
no code implementations • 14 Mar 2023 • Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk, Yuting Ye
In-hand object manipulation is challenging to simulate due to complex contact dynamics, non-repetitive finger gaits, and the need to indirectly control unactuated objects.
no code implementations • ICCV 2023 • Mingyi Shi, Sebastian Starke, Yuting Ye, Taku Komura, Jungdam Won
We present a novel motion prior, called PhaseMP, modeling a probability distribution on pose transitions conditioned by a frequency domain feature extracted from a periodic autoencoder.
no code implementations • 20 Sep 2022 • Alexander Winkler, Jungdam Won, Yuting Ye
Real-time tracking of human body motion is crucial for interactive and immersive experiences in AR/VR.
no code implementations • 16 May 2022 • Yuting Ye, Christine Ho, Ci-Ren Jiang, Wayne Tai Lee, Haiyan Huang
We show that classification decisions made by simply sorting objects across classes in descending order of their true mLPRs can, in theory, respect the class hierarchy and maximize CATCH, an objective function we introduce that is related to the area under a hit curve.
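The sorting step and the hit-curve objective can be illustrated directly. The sketch below is a simplified stand-in: the scores are placeholders for true mLPR values, and the area computation is a bare-bones version of an area-under-the-hit-curve criterion, not the paper's exact CATCH definition.

```python
# Rank objects by score (stand-in for mLPR), then measure how early the
# true positives appear via a simplified area-under-the-hit-curve value.

def rank_by_mlpr(objects):
    """objects: list of (label, score) pairs -> labels sorted descending."""
    return [label for label, _ in sorted(objects, key=lambda o: -o[1])]

def area_under_hit_curve(ranked, positives):
    """Sum of cumulative hit counts over rank positions; higher is better."""
    hits, area = 0, 0
    for label in ranked:
        if label in positives:
            hits += 1
        area += hits
    return area
```

A ranking that places true positives earlier accumulates hits sooner and therefore yields a larger area, which is why sorting by the true scores is optimal for this family of objectives.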
1 code implementation • 29 Mar 2022 • Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu
Real-time human motion reconstruction from a sparse set of (e.g., six) wearable IMUs provides a non-intrusive and economical approach to motion capture.
no code implementations • ICLR 2022 • Da Xu, Yuting Ye, Chuanwei Ruan
The interventional nature of recommendation has attracted increasing attention in recent years.
no code implementations • 27 Feb 2022 • Da Xu, Yuting Ye, Chuanwei Ruan, Bo Yang
Off-policy learning plays a pivotal role in optimizing and evaluating policies prior to the online deployment.
1 code implementation • 27 Dec 2021 • Qi Feng, Kun He, He Wen, Cem Keskin, Yuting Ye
Notably, on CMU Panoptic Studio, we are able to reduce the turn-around time by 60% and annotation cost by 80% when compared to the conventional annotation process.
no code implementations • 30 Sep 2021 • Binbin Xu, Lingni Ma, Yuting Ye, Tanner Schmidt, Christopher D. Twigg, Steven Lovegrove
When applied to dynamically deforming shapes such as human hands, however, they must preserve both the temporal coherence of the deformation and the intrinsic identity of the subject.
no code implementations • ICLR 2021 • Da Xu, Yuting Ye, Chuanwei Ruan
The recent paper by Byrd & Lipton (2019), based on empirical observations, raises a major concern about the impact of importance weighting on over-parameterized deep learning models.
1 code implementation • NeurIPS 2020 • Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao Li, Yaser Sheikh
Learning latent representations of registered meshes is useful for many 3D tasks.
1 code implementation • 28 Aug 2019 • Yuting Ye, Xuwu Wang, Jiangchao Yao, Kunyang Jia, Jingren Zhou, Yanghua Xiao, Hongxia Yang
Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in varieties of tasks, from predicting unobserved edges between entities to content recommendation.
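To make "predicting unobserved edges" concrete, the sketch below scores candidate triples with the classic TransE idea (head + relation ≈ tail). This is one standard way such embeddings are queried; the paper's own embedding model may differ, and the toy 2-D vectors are made up for illustration.

```python
import math

def transe_score(head, relation, tail):
    """TransE-style distance: lower => edge (head, relation, tail) is more plausible."""
    return math.sqrt(sum((h + r - t) ** 2
                         for h, r, t in zip(head, relation, tail)))

# Toy 2-D embeddings (illustrative values only).
emb = {"paris": [0.0, 0.0], "france": [1.0, 0.0], "tokyo": [5.0, 5.0]}
capital_of = [1.0, 0.0]

# "paris capital_of france" should score better (lower distance) than
# "tokyo capital_of france".
good = transe_score(emb["paris"], capital_of, emb["france"])
bad = transe_score(emb["tokyo"], capital_of, emb["france"])
```

Link prediction then amounts to ranking all candidate tails by this score; content recommendation uses the same machinery with user and item embeddings in place of entities.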
no code implementations • 16 Apr 2019 • Yikang Li, Chris Twigg, Yuting Ye, Lingling Tao, Xiaogang Wang
Hand pose estimation from a monocular 2D image is challenging due to variations in lighting, appearance, and background.
no code implementations • 18 Oct 2018 • Christine Ho, Yuting Ye, Ci-Ren Jiang, Wayne Tai Lee, Haiyan Huang
In this article, we propose a novel ranking algorithm, referred to as HierLPR, for the multi-label classification problem when the candidate labels follow a known hierarchical structure.
1 code implementation • ECCV 2018 • Xiaoming Li, Ming Liu, Yuting Ye, WangMeng Zuo, Liang Lin, Ruigang Yang
For better recovery of fine facial details, we modify the problem setting by taking both the degraded observation and a high-quality guided image of the same identity as input to our guided face restoration network (GFRNet).
Ranked #1 on Image Super-Resolution on WebFace (8× upscaling)
1 code implementation • 7 Feb 2018 • Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, Kaushik Roy
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware.
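The event-driven behavior that makes SNNs attractive for low-power hardware can be illustrated with a leaky integrate-and-fire (LIF) neuron, the basic unit behind most SNN work. The parameter values below are illustrative, not taken from the paper.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks
# each step, integrates input current, and emits a spike (then resets)
# when it crosses the threshold. Computation happens only around spikes,
# which is the source of SNNs' event-driven power savings.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Run a LIF neuron over a sequence of input currents; return spike train."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current   # leaky integration
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after a spike
        else:
            spikes.append(0)
    return spikes
```

For example, two sub-threshold inputs in a row can accumulate enough potential to trigger a spike, after which the neuron resets and stays silent until driven again.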