Detailed Expression Capture and Animation (DECA) is a model for 3D face reconstruction. It is trained to robustly produce a UV displacement map from a low-dimensional latent representation consisting of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose, and illumination parameters from a single image. A detail-consistency loss disentangles person-specific details from expression-dependent wrinkles; this disentanglement makes it possible to synthesize realistic person-specific wrinkles by varying the expression parameters while keeping the person-specific details unchanged.
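The animation mechanism described above can be sketched as follows. This is a minimal illustrative sketch, not DECA's actual implementation: the latent dimensions, the stand-in linear "detail decoder", and all variable names are assumptions chosen for clarity. The point it demonstrates is the interface: a fixed person-specific detail code is re-decoded under different expression codes to produce expression-dependent UV displacement maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only; they loosely mirror the
# detail/expression split described in the paper, not its actual sizes.
DETAIL_DIM, EXPR_DIM, JAW_DIM, UV = 128, 50, 3, 32

# Stand-in "detail decoder": a fixed random linear map from the concatenated
# [detail, expression, jaw-pose] code to a UV x UV displacement map.
# (DECA uses a learned neural decoder here.)
W = rng.normal(0.0, 0.01, size=(UV * UV, DETAIL_DIM + EXPR_DIM + JAW_DIM))

def detail_decoder(detail, expression, jaw):
    """Decode person-specific detail + expression codes into a UV displacement map."""
    z = np.concatenate([detail, expression, jaw])
    return (W @ z).reshape(UV, UV)

# Person-specific detail code stays fixed; only the expression code varies.
detail = rng.normal(size=DETAIL_DIM)
neutral = np.zeros(EXPR_DIM)
smile = rng.normal(size=EXPR_DIM)
jaw = np.zeros(JAW_DIM)

d_neutral = detail_decoder(detail, neutral, jaw)
d_smile = detail_decoder(detail, smile, jaw)

# Animation = re-decoding the same detail code under a new expression:
# the displacement (wrinkle) map changes while identity detail is preserved.
print(d_neutral.shape, float(np.abs(d_smile - d_neutral).mean()))
```

The detail-consistency loss is what makes this swap meaningful at training time: it penalizes the model when exchanging expression codes between two images of the same person changes the reconstructed person-specific detail.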
Source: *Learning an Animatable Detailed 3D Face Model from In-The-Wild Images*
| Task | Papers | Share |
| --- | --- | --- |
| Denoising | 2 | 13.33% |
| Face Alignment | 2 | 13.33% |
| 3D Face Reconstruction | 2 | 13.33% |
| Face Reconstruction | 2 | 13.33% |
| Image Classification | 1 | 6.67% |
| Hyperspectral Unmixing | 1 | 6.67% |
| Recommendation Systems | 1 | 6.67% |
| 3D Face Animation | 1 | 6.67% |
| 3D Face Modelling | 1 | 6.67% |