Search Results for author: Taku Komura

Found 47 papers, 17 papers with code

SENC: Handling Self-collision in Neural Cloth Simulation

no code implementations 17 Jul 2024 Zhouyingcheng Liao, Sinan Wang, Taku Komura

We present SENC, a novel self-supervised neural cloth simulator that addresses the challenge of cloth self-collision.

Graph Neural Network

DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image

no code implementations 26 Jun 2024 Qingxuan Wu, Zhiyang Dou, Sirui Xu, Soshi Shimada, Chen Wang, Zhengming Yu, YuAn Liu, Cheng Lin, Zeyu Cao, Taku Komura, Vladislav Golyanik, Christian Theobalt, Wenping Wang, Lingjie Liu

The first and only method for hand-face interaction recovery, Decaf, introduces a global fitting optimization guided by contact and deformation estimation networks trained on studio-collected data with 3D annotations.

ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models

no code implementations 22 May 2024 Rui Xu, Jiepeng Wang, Hao Pan, Yang Liu, Xin Tong, Shiqing Xin, Changhe Tu, Taku Komura, Wenping Wang

We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by the existing training schemes of diffusion generative models, causing degraded test-time performance.

InterAct: Capture and Modelling of Realistic, Expressive and Interactive Activities between Two Persons in Daily Scenarios

no code implementations 19 May 2024 Yinghao Huang, Leo Ho, Dafei Qin, Mingyi Shi, Taku Komura

We address the problem of accurate capture and expressive modelling of interactive behaviors happening between two persons in daily scenarios.

CWF: Consolidating Weak Features in High-quality Mesh Simplification

no code implementations 24 Apr 2024 Rui Xu, Longdu Liu, Ningna Wang, Shuangmin Chen, Shiqing Xin, Xiaohu Guo, Zichun Zhong, Taku Komura, Wenping Wang, Changhe Tu

In mesh simplification, common requirements like accuracy, triangle quality, and feature alignment are often considered as a trade-off.

Taming Diffusion Probabilistic Models for Character Control

1 code implementation 23 Apr 2024 Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen

We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality and diverse character animations, responding in real-time to a variety of dynamic user-supplied control signals.

Computational Efficiency, Diversity

On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional Encoding

no code implementations 2 Jan 2024 Guying Lin, Lei Yang, YuAn Liu, Congyi Zhang, Junhui Hou, Xiaogang Jin, Taku Komura, John Keyser, Wenping Wang

Sampling against this intrinsic frequency following the Nyquist-Shannon sampling theorem allows us to determine an appropriate training sampling rate.
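As a hedged aside (not the paper's code), the rate selection described here reduces to the Nyquist criterion: sample at no less than twice the highest intrinsic frequency. A minimal Python sketch, where `nyquist_sampling_rate`, `sample_spacing`, and `margin` are illustrative names and the frequency is assumed to be already estimated:

```python
# Minimal sketch: derive a training sampling rate from an estimated
# intrinsic frequency via the Nyquist-Shannon criterion (rate >= 2 * f).
# These names are illustrative, not the paper's API.

def nyquist_sampling_rate(intrinsic_frequency: float, margin: float = 1.0) -> float:
    """Sampling rate satisfying the Nyquist criterion; margin > 1 oversamples."""
    if intrinsic_frequency <= 0:
        raise ValueError("frequency must be positive")
    return 2.0 * intrinsic_frequency * margin

def sample_spacing(intrinsic_frequency: float) -> float:
    """Largest spacing between training samples that still avoids aliasing."""
    return 1.0 / nyquist_sampling_rate(intrinsic_frequency)

# e.g. an SDF whose finest detail oscillates at 8 cycles per unit length:
rate = nyquist_sampling_rate(8.0)  # 16.0 samples per unit length
spacing = sample_spacing(8.0)      # 0.0625 units between samples
```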

DiffusionPhase: Motion Diffusion in Frequency Domain

no code implementations 7 Dec 2023 Weilin Wan, Yiming Huang, Shutong Wu, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

In this study, we introduce a learning-based method for generating high-quality human motion sequences from text descriptions (e.g., "A person walks forward").


StructRe: Rewriting for Structured Shape Modeling

no code implementations 29 Nov 2023 Jiepeng Wang, Hao Pan, Yang Liu, Xin Tong, Taku Komura, Wenping Wang

Such a localized rewriting process enables probabilistic modeling of ambiguous structures and robust generalization across object categories.


TLControl: Trajectory and Language Control for Human Motion Synthesis

no code implementations 28 Nov 2023 Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.

Motion Synthesis

C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

no code implementations 20 Sep 2023 Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang

We present C$\cdot$ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters.

Imitation Learning

BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer

no code implementations 7 Sep 2023 Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, Taku Komura

Learning the mapping between speech and 3D full-body gestures is difficult due to the stochastic nature of the problem and the lack of a rich cross-modal dataset that is needed for training.

Motion In-Betweening with Phase Manifolds

1 code implementation 24 Aug 2023 Paul Starke, Sebastian Starke, Taku Komura, Frank Steinicke

This paper introduces a novel data-driven motion in-betweening system to reach target poses of characters by making use of phase variables learned by a Periodic Autoencoder.
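As a hedged sketch (not the paper's Periodic Autoencoder), the basic reason a phase manifold helps in-betweening is that a scalar phase is embedded on a circle, so phases near the wrap-around point stay close; the names and numbers below are illustrative only:

```python
import math

# A scalar phase p in [0, 1) is embedded on a circle so that p = 0.99 and
# p = 0.01 are close. This wrap-around is what makes blending along a
# phase manifold well behaved. Illustrative sketch, not the paper's model.

def phase_embedding(p: float, amplitude: float = 1.0):
    """Map a scalar phase to a 2D point on a circle of the given amplitude."""
    angle = 2.0 * math.pi * p
    return (amplitude * math.cos(angle), amplitude * math.sin(angle))

def phase_distance(p1: float, p2: float) -> float:
    """Chord distance between two phases on the unit circle."""
    x1, y1 = phase_embedding(p1)
    x2, y2 = phase_embedding(p2)
    return math.hypot(x1 - x2, y1 - y2)

# Wrap-around: 0.99 and 0.01 are nearly the same phase, while 0.25 and
# 0.75 are diametrically opposed (chord distance 2 on the unit circle).
```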

NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

1 code implementation 27 May 2023 YuAn Liu, Peng Wang, Cheng Lin, Xiaoxiao Long, Jiepeng Wang, Lingjie Liu, Taku Komura, Wenping Wang

We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment.

Neural Rendering, Object

Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild

1 code implementation 15 May 2023 Dafei Qin, Jun Saito, Noam Aigerman, Thibault Groueix, Taku Komura

We propose an end-to-end deep-learning approach for automatic rigging and retargeting of 3D models of human faces in the wild.

F$^{2}$-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories

1 code implementation 28 Mar 2023 Peng Wang, YuAn Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, Wenping Wang

Based on our analysis, we further propose a novel space-warping method called perspective warping, which allows us to handle arbitrary trajectories in the grid-based NeRF framework.

Novel View Synthesis

Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction

1 code implementation ICCV 2023 Wenjia Wang, Yongtao Ge, Haiyi Mei, Zhongang Cai, Qingping Sun, Yanjun Wang, Chunhua Shen, Lei Yang, Taku Komura

As it is hard to calibrate single-view RGB images captured in the wild, existing 3D human mesh reconstruction (3DHMR) methods either use a constant large focal length or estimate one from the background environment context; neither approach can handle the torso, limb, hand, or face distortion caused by perspective camera projection when the camera is close to the human body.

3D Human Pose Estimation, 3D Reconstruction
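To see why a close camera distorts nearby body parts differently, here is a hedged pinhole-projection sketch (illustrative numbers only, not Zolly's algorithm):

```python
# Hedged pinhole-projection sketch: image coordinate u = f * X / Z.
# Close to the camera, depth variation across the body is large relative
# to the distance, so foreshortening differs strongly between nearby
# body parts -- the perspective distortion the paper targets.

def project(f: float, x: float, z: float) -> float:
    """Pinhole projection of a point at lateral offset x and depth z."""
    return f * x / z

# Two body points 0.1 units apart in depth, both 0.2 units off-axis.
near_cam = (project(1.0, 0.2, 0.5), project(1.0, 0.2, 0.6))  # close camera
far_cam = (project(1.0, 0.2, 5.0), project(1.0, 0.2, 5.1))   # distant camera

near_rel = abs(near_cam[0] - near_cam[1]) / near_cam[0]  # ~0.17
far_rel = abs(far_cam[0] - far_cam[1]) / far_cam[0]      # ~0.02
```

The same 0.1-unit depth offset shifts the projection by roughly 17% when the camera is close, but only about 2% when it is far away.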

Online Neural Path Guiding with Normalized Anisotropic Spherical Gaussians

no code implementations 11 Mar 2023 Jiawei Huang, Akito Iizuka, Hajime Tanaka, Taku Komura, Yoshifumi Kitamura

The variance reduction speed of physically-based rendering is heavily affected by the adopted importance sampling technique.
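The variance claim can be demonstrated with a toy Monte Carlo integral; this hedged sketch is not the paper's neural path guiding, only the importance-sampling principle it builds on:

```python
import math
import random

# Toy demonstration of importance sampling for Monte Carlo integration.
# We estimate the integral of f(x) = 3x^2 over [0, 1] (true value 1.0)
# two ways: uniform sampling, and sampling from p(x) = 2x (closer in
# shape to f) via the inverse CDF. Illustrative only.

def f(x: float) -> float:
    return 3.0 * x * x

def estimate_uniform(n: int, rng: random.Random) -> float:
    return sum(f(rng.random()) for _ in range(n)) / n

def estimate_importance(n: int, rng: random.Random) -> float:
    total = 0.0
    for _ in range(n):
        x = math.sqrt(max(rng.random(), 1e-12))  # inverse CDF of p(x) = 2x
        total += f(x) / (2.0 * x)                # divide by the density p(x)
    return total / n

rng = random.Random(0)
uniform_est = estimate_uniform(10_000, rng)
importance_est = estimate_importance(10_000, rng)
# Both converge to 1.0; the importance-sampled integrand f/p = 1.5x is far
# closer to constant than f itself, so its estimator has lower variance.
```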

PhaseMP: Robust 3D Pose Estimation via Phase-conditioned Human Motion Prior

no code implementations ICCV 2023 Mingyi Shi, Sebastian Starke, Yuting Ye, Taku Komura, Jungdam Won

We present a novel motion prior, called PhaseMP, modeling a probability distribution on pose transitions conditioned by a frequency domain feature extracted from a periodic autoencoder.

3D Pose Estimation, Motion Estimation

F2-NeRF: Fast Neural Radiance Field Training With Free Camera Trajectories

no code implementations CVPR 2023 Peng Wang, YuAn Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, Wenping Wang

Existing fast grid-based NeRF training frameworks, like Instant-NGP, Plenoxels, DVGO, or TensoRF, are mainly designed for bounded scenes and rely on space warping to handle unbounded scenes.

Novel View Synthesis

NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies

no code implementations CVPR 2023 Xiaoxiao Long, Cheng Lin, Lingjie Liu, YuAn Liu, Peng Wang, Christian Theobalt, Taku Komura, Wenping Wang

In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.

Neural Rendering

Progressively-connected Light Field Network for Efficient View Synthesis

no code implementations 10 Jul 2022 Peng Wang, YuAn Liu, Guying Lin, Jiatao Gu, Lingjie Liu, Taku Komura, Wenping Wang

ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.

Novel View Synthesis

NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors

1 code implementation 27 Jun 2022 Jiepeng Wang, Peng Wang, Xiaoxiao Long, Christian Theobalt, Taku Komura, Lingjie Liu, Wenping Wang

The key idea of NeuRIS is to integrate estimated normals of indoor scenes as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner to also enable the reconstruction of irregular shapes with fine details.

3D Reconstruction, Neural Rendering

Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions

no code implementations 25 Jun 2022 Weilin Wan, Lei Yang, Lingjie Liu, Zhuoying Zhang, Ruixing Jia, Yi-King Choi, Jia Pan, Christian Theobalt, Taku Komura, Wenping Wang

We also observe that an object's intrinsic physical properties are useful for the object motion prediction, and thus design a set of object dynamic descriptors to encode such intrinsic properties.

Graph Neural Network, Human-Object Interaction Detection +2

SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views

1 code implementation 12 Jun 2022 Xiaoxiao Long, Cheng Lin, Peng Wang, Taku Komura, Wenping Wang

We introduce SparseNeuS, a novel neural rendering based method for the task of surface reconstruction from multi-view images.

Neural Rendering, Surface Reconstruction

Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases

1 code implementation 12 Jan 2022 Ian Mason, Sebastian Starke, Taku Komura

In this work we present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases.

Style Transfer

FaceFormer: Speech-Driven 3D Facial Animation with Transformers

1 code implementation CVPR 2022 Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura

Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data.

3D Face Animation

Joint Audio-Text Model for Expressive Speech-Driven 3D Facial Animation

no code implementations 4 Dec 2021 Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura

The existing datasets are collected to cover as many different phonemes as possible instead of sentences, thus limiting the capability of the audio-based model to learn more diverse contexts.

Language Modelling

DISP6D: Disentangled Implicit Shape and Pose Learning for Scalable 6D Pose Estimation

1 code implementation 27 Jul 2021 Yilin Wen, Xiangyu Li, Hao Pan, Lei Yang, Zheng Wang, Taku Komura, Wenping Wang

Scalable 6D pose estimation for rigid objects from RGB images aims at handling multiple objects and generalizing to novel objects.

6D Pose Estimation, Metric Learning +2

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

7 code implementations NeurIPS 2021 Peng Wang, Lingjie Liu, YuAn Liu, Christian Theobalt, Taku Komura, Wenping Wang

In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.

Novel View Synthesis, Surface Reconstruction
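As a rough sketch of the NeuS discretization (not the authors' implementation): the per-segment opacity is `alpha_i = max((Phi_s(d_i) - Phi_s(d_{i+1})) / Phi_s(d_i), 0)`, where `d_i` are SDF samples along the ray and `Phi_s` is a sigmoid with sharpness `s`; the SDF values below are made up:

```python
import math

# Hedged sketch of NeuS-style volume rendering from SDF samples along
# one ray. Opacity per segment comes from successive sigmoid-mapped SDF
# values; weights are standard alpha compositing.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def neus_alphas(sdf_values, s=10.0):
    """Per-segment opacities from successive SDF samples along one ray."""
    alphas = []
    for d_curr, d_next in zip(sdf_values, sdf_values[1:]):
        phi_curr = sigmoid(s * d_curr)
        phi_next = sigmoid(s * d_next)
        alphas.append(max((phi_curr - phi_next) / phi_curr, 0.0))
    return alphas

def render_weights(alphas):
    """Compositing weights w_i = alpha_i * prod_{j<i} (1 - alpha_j)."""
    weights, transmittance = [], 1.0
    for a in alphas:
        weights.append(transmittance * a)
        transmittance *= 1.0 - a
    return weights

# A ray crossing the surface: the SDF flips sign between samples 2 and 3,
# so the largest weight lands on the segment with the zero crossing.
weights = render_weights(neus_alphas([0.3, 0.1, -0.1, -0.3]))
```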

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

no code implementations 22 Jun 2020 Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete and commonly used motion representation.

Local motion phases for learning multi-contact character movements

1 code implementation SIGGRAPH 2020 Sebastian Dorothee Starke, Yiwei Zhao, Taku Komura, Kazi A. Zaman

Training a bipedal character to play basketball and interact with objects, or a quadruped character to move in various locomotion modes, is difficult due to the fast and complex contacts that occur during the motion.

Learning Whole-body Motor Skills for Humanoids

no code implementations 7 Feb 2020 Chuanyu Yang, Kai Yuan, Wolfgang Merkt, Taku Komura, Sethu Vijayakumar, Zhibin Li

This paper presents a hierarchical framework for Deep Reinforcement Learning that acquires motor skills for a variety of push recovery and balancing behaviors, i.e., ankle, hip, foot-tilting, and stepping strategies.

Personalized 3D mannequin reconstruction based on 3D scanning

no code implementations 16 Apr 2018 Pengpeng Hu, Duan Li, Ge Wu, Taku Komura, Dongliang Zhang, Yueqi Zhong

A personalized mannequin is essential for apparel customization using CAD technologies.

3D textile reconstruction based on KinectFusion and synthesized texture

no code implementations 6 Nov 2017 Pengpeng Hu, Taku Komura, Duan Li, Ge Wu, Yueqi Zhong

The purpose of this paper is to present a novel framework for reconstructing a 3D textile model with synthesized texture.

Scanning and animating characters dressed in multiple-layer garments

no code implementations 9 May 2017 Pengpeng Hu, Taku Komura, Daniel Holden, Yueqi Zhong

In this paper, we propose a novel scanning-based solution for modeling and animating characters wearing multiple layers of clothes.
