Search Results for author: Ziqian Bai

Found 7 papers, 3 papers with code

One2Avatar: Generative Implicit Head Avatar For Few-shot User Adaptation

no code implementations 19 Feb 2024 Zhixuan Yu, Ziqian Bai, Abhimitra Meka, Feitong Tan, Qiangeng Xu, Rohit Pandey, Sean Fanello, Hyun Soo Park, Yinda Zhang

Traditional methods for constructing high-quality, personalized head avatars from monocular videos demand extensive face captures and training time, posing a significant challenge for scalability.

Camera Calibration

Riggable 3D Face Reconstruction via In-Network Optimization

1 code implementation CVPR 2021 Ziqian Bai, Zhaopeng Cui, Xiaoming Liu, Ping Tan

This paper presents a method for riggable 3D face reconstruction from monocular images, which jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.

3D Face Reconstruction
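
As a rough illustration of the joint estimation described above, the sketch below fits a shared rig code together with per-image expression parameters by gradient descent. The linear placeholder rig, the synthetic targets, and all dimensions are invented for this toy example; pose and illumination estimation and the paper's in-network (learned) optimizer are omitted.

```python
# Toy analysis-by-synthesis loop: a shared rig code plus per-image expression
# parameters are optimized jointly. The linear "rig" and targets are placeholders.
import torch

N_IMAGES, N_VERTS, RIG_DIM, EXPR_DIM = 8, 500, 64, 16

rig_code = torch.zeros(RIG_DIM, requires_grad=True)             # shared across all images
expr = torch.zeros(N_IMAGES, EXPR_DIM, requires_grad=True)      # per-image expressions

# Placeholder linear bases standing in for a learned, personalized rig.
id_basis = torch.randn(RIG_DIM, N_VERTS * 3) * 0.01
expr_basis = torch.randn(EXPR_DIM, N_VERTS * 3) * 0.01
mean_shape = torch.randn(N_VERTS * 3)
observed = mean_shape + torch.randn(N_IMAGES, N_VERTS * 3) * 0.02   # stand-in for image evidence

opt = torch.optim.Adam([rig_code, expr], lr=1e-2)
for step in range(200):
    shape = mean_shape + rig_code @ id_basis + expr @ expr_basis    # (N_IMAGES, N_VERTS*3)
    loss = ((shape - observed) ** 2).mean() + 1e-3 * (expr ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```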

Deep Facial Non-Rigid Multi-View Stereo

1 code implementation CVPR 2020 Ziqian Bai, Zhaopeng Cui, Jamal Ahmed Rahim, Xiaoming Liu, Ping Tan

We facilitate it with a CNN that learns to regularize the non-rigid 3D face according to the input image and preliminary optimization results.

3D Face Reconstruction
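
A rough sketch of the learned-regularizer idea in the snippet above: a small network consumes image features together with a preliminary mesh estimate and predicts a refinement. The layers, input shapes, and conditioning scheme are invented for illustration and do not follow the paper's architecture.

```python
# Toy learned regularizer: refine a preliminary per-vertex estimate
# conditioned on image features. Shapes and layers are illustrative only.
import torch
import torch.nn as nn

class MeshRefiner(nn.Module):
    def __init__(self, n_verts=500, feat_dim=128):
        super().__init__()
        self.image_encoder = nn.Sequential(              # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                       # predicts a per-vertex residual
            nn.Linear(feat_dim + n_verts * 3, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )

    def forward(self, image, coarse_verts):
        feat = self.image_encoder(image)                       # (B, feat_dim)
        x = torch.cat([feat, coarse_verts.flatten(1)], dim=1)  # condition on coarse estimate
        return coarse_verts + self.head(x).view_as(coarse_verts)

refiner = MeshRefiner()
img = torch.randn(2, 3, 128, 128)
coarse = torch.randn(2, 500, 3)          # preliminary optimization result (stand-in)
refined = refiner(img, coarse)           # (2, 500, 3)
```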

CodeNet: Training Large Scale Neural Networks in Presence of Soft-Errors

no code implementations 4 Mar 2019 Sanghamitra Dutta, Ziqian Bai, Tze Meng Low, Pulkit Grover

This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann.
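
A common building block behind such resilience schemes is checksum-coded matrix multiplication, where a redundant worker output lets the master detect (and, with more redundancy, correct) a corrupted result. The minimal detection-only sketch below illustrates that idea and is not CodeNet's actual construction.

```python
# Toy checksum-coded matrix-vector product: one redundant "checksum worker"
# lets the master detect a corrupted worker output. Detection only; CodeNet's
# full scheme also locates and corrects errors during training.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))          # weight matrix, split into 3 row blocks
x = rng.standard_normal(4)
blocks = np.split(W, 3, axis=0)          # systematic workers hold W_0, W_1, W_2
checksum_block = sum(blocks)             # redundant worker holds W_0 + W_1 + W_2

outputs = [Wi @ x for Wi in blocks]      # each worker computes its partial product
outputs[1] += 0.5                        # inject a soft error into worker 1's result
checksum_out = checksum_block @ x        # checksum worker's (uncorrupted) result

# Master-side check: the sum of systematic outputs must match the checksum output.
if not np.allclose(sum(outputs), checksum_out):
    print("soft error detected; recompute or decode from additional checksums")
```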

A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication

no code implementations 27 Nov 2018 Sanghamitra Dutta, Ziqian Bai, Haewon Jeong, Tze Meng Low, Pulkit Grover

First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that improves on existing methods for coded matrix multiplication under storage and communication constraints.
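
As a rough illustration of coded matrix multiplication in this spirit, the sketch below uses a basic polynomial code rather than the paper's Generalized PolyDot construction: blocks of A and B are embedded as polynomial coefficients, each worker evaluates the product polynomial at a distinct point, and the master interpolates to recover every block product. Block counts and evaluation points are arbitrary choices for the toy example.

```python
# Toy polynomial-coded matrix multiplication: any m*n worker evaluations of the
# product polynomial suffice to decode C = A @ B by interpolation.
import numpy as np

m, n = 2, 2                              # row blocks of A, column blocks of B
A = np.random.rand(4, 6)
B = np.random.rand(6, 4)
A_blocks = np.split(A, m, axis=0)        # A_0, A_1 (row blocks)
B_blocks = np.split(B, n, axis=1)        # B_0, B_1 (column blocks)

# Encode: pA(x) = sum_i A_i x^i, pB(x) = sum_j B_j x^(j*m)
def pA(x): return sum(Ai * x**i for i, Ai in enumerate(A_blocks))
def pB(x): return sum(Bj * x**(j * m) for j, Bj in enumerate(B_blocks))

# Each "worker" k evaluates pA(x_k) @ pB(x_k); the product is a degree m*n - 1
# matrix polynomial in x, so m*n evaluations determine it.
xs = np.arange(1, m * n + 1, dtype=float)        # distinct evaluation points
worker_results = [pA(x) @ pB(x) for x in xs]

# Decode: interpolate each entry of the product polynomial, then read off the
# coefficient of x^(i + j*m) to recover the block A_i @ B_j.
V = np.vander(xs, increasing=True)               # Vandermonde system for coefficients
coeffs = np.linalg.solve(V, np.stack([r.ravel() for r in worker_results]))

blk_shape = worker_results[0].shape
C_blocks = [[coeffs[i + j * m].reshape(blk_shape) for j in range(n)] for i in range(m)]
C = np.block(C_blocks)
assert np.allclose(C, A @ B)
```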
