Search Results for author: Nikolaus F. Troje

Found 6 papers, 4 papers with code

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

1 code implementation • 15 Sep 2022 • Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, Marc-André Carbonneau

In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles.

Gesture Generation

MoVi: A large multi-purpose human motion and video dataset

1 code implementation • PLOS ONE 2021 • Saeed Ghorbani, Kimia Mahdaviani, Anne Thaler, Konrad Kording, Douglas James Cook, Gunnar Blohm, Nikolaus F. Troje

Large high-quality datasets of human body shape and kinematics lay the foundation for modelling and simulation approaches in computer vision, computer graphics, and biomechanics.

Action Recognition • Pose Estimation +1

Gait Recognition using Multi-Scale Partial Representation Transformation with Capsules

no code implementations • 18 Oct 2020 • Alireza Sepas-Moghaddam, Saeed Ghorbani, Nikolaus F. Troje, Ali Etemad

In this context, we propose a novel deep network, learning to transfer multi-scale partial gait representations using capsules to obtain more discriminative gait features.

Gait Recognition
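
The snippet above only names the idea. As a rough, hypothetical illustration (not the published architecture), the sketch below pools part-level descriptors from horizontal strips of a backbone feature map at several scales and routes them through a small dynamic-routing capsule layer; the shapes, the backbone features, and the routing layer itself are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(x, dim=-1, eps=1e-8):
    # Capsule non-linearity: short vectors shrink towards 0, long vectors towards unit length.
    sq_norm = (x ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * x / torch.sqrt(sq_norm + eps)

class RoutingCapsules(nn.Module):
    # Toy dynamic-routing layer: routes in_caps "part" capsules into out_caps output capsules.
    def __init__(self, in_caps, in_dim, out_caps, out_dim, iters=3):
        super().__init__()
        self.iters = iters
        self.W = nn.Parameter(0.01 * torch.randn(in_caps, out_caps, out_dim, in_dim))

    def forward(self, u):  # u: (batch, in_caps, in_dim)
        u_hat = torch.einsum('iodk,bik->biod', self.W, u)    # per (input, output) capsule predictions
        b = torch.zeros(u.size(0), u.size(1), self.W.size(1), device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=2)                           # coupling coefficients over output capsules
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))  # (batch, out_caps, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)      # agreement update
        return v

# Hypothetical usage: horizontal strips of a backbone feature map at three scales become part capsules.
feat = torch.randn(8, 128, 16, 11)                 # (batch, channels, height, width), assumed backbone output
parts = []
for n_strips in (2, 4, 8):
    parts += [s.mean(dim=(2, 3)) for s in feat.chunk(n_strips, dim=2)]
u = torch.stack(parts, dim=1)                      # (batch, 14 part descriptors, 128)
v = RoutingCapsules(in_caps=u.size(1), in_dim=128, out_caps=10, out_dim=16)(u)
print(v.shape)                                     # torch.Size([8, 10, 16])

The output capsule lengths could then serve as identity scores; the paper's actual network, partitioning scheme, and training objective differ from this toy example.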

MoVi: A Large Multipurpose Motion and Video Dataset

1 code implementation • 4 Mar 2020 • Saeed Ghorbani, Kimia Mahdaviani, Anne Thaler, Konrad Kording, Douglas James Cook, Gunnar Blohm, Nikolaus F. Troje

In five capture rounds, the same actors and movements were recorded using different hardware systems, including an optical motion capture system, video cameras, and inertial measurement units (IMUs).

Auto-labelling of Markers in Optical Motion Capture by Permutation Learning

no code implementations • 31 Jul 2019 • Saeed Ghorbani, Ali Etemad, Nikolaus F. Troje

Optical marker-based motion capture is a vital tool in applications such as motion and behavioural analysis, animation, and biomechanics.

AMASS: Archive of Motion Capture as Surface Shapes

4 code implementations • ICCV 2019 • Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, Michael J. Black

We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model; here we use SMPL [doi:10.1145/2816795.2818013], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh.
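
As a minimal sketch of how one released AMASS sequence can be inspected, assuming the standard layout of the distributed .npz files (the file path below is a made-up example):

import numpy as np

# Hypothetical path to one downloaded AMASS sequence; the archive distributes motions as .npz files.
data = np.load("CMU/01/01_01_poses.npz")

print(data.files)                     # typically: 'trans', 'gender', 'mocap_framerate', 'betas', 'dmpls', 'poses'
poses = data['poses']                 # (n_frames, 156) axis-angle pose parameters in the SMPL+H layout
betas = data['betas']                 # body-shape coefficients of the recorded subject
trans = data['trans']                 # (n_frames, 3) root translation in metres
fps = float(data['mocap_framerate'])

print(f"{poses.shape[0]} frames at {fps:.0f} fps; root orientation of frame 0: {poses[0, :3]}")

These parameters can then be passed to a SMPL-family body-model implementation (for example the smplx Python package) to recover the rigged surface mesh described above.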
