Search Results for author: Chloe LeGendre

Found 6 papers, 1 paper with code

Jointly Optimizing Color Rendition and In-Camera Backgrounds in an RGB Virtual Production Stage

no code implementations • 24 May 2022 • Chloe LeGendre, Lukas Lepicovsky, Paul Debevec

While the LED panels used in virtual production systems can display vibrant imagery with a wide color gamut, they produce problematic color shifts when used as lighting due to their peaky spectral output from narrow-band red, green, and blue LEDs.
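The color-shift problem described above can be made concrete with a toy spectral calculation (not the paper's optimization): integrating a reflectance and a camera's spectral sensitivities against a broadband illuminant versus a narrow-band RGB-LED illuminant shows how the rendered channel ratios shift. All spectra in the sketch below are synthetic Gaussians chosen purely for illustration.

```python
# Minimal sketch (not the paper's method): why peaky RGB-LED spectra shift
# rendered colors relative to a broadband illuminant. All spectra below are
# synthetic Gaussians chosen for illustration only.
import numpy as np

wl = np.arange(400, 701, 5)  # wavelengths in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera spectral sensitivities (broad curves).
cam = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)])

# A broadband (flat) illuminant vs. a narrow-band RGB-LED illuminant.
broadband = np.ones_like(wl, dtype=float)
led = gaussian(630, 10) + gaussian(525, 12) + gaussian(465, 10)

# A skin-like reflectance that rises smoothly toward long wavelengths.
reflectance = 0.2 + 0.5 / (1.0 + np.exp(-(wl - 580) / 30))

def rendered_rgb(illuminant):
    """Integrate illuminant * reflectance * camera sensitivity per channel."""
    rgb = (cam * illuminant * reflectance).sum(axis=1)
    return rgb / rgb.max()

print("broadband:", rendered_rgb(broadband))
print("RGB LED:  ", rendered_rgb(led))  # note the shifted channel ratios
```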

Cross-Camera Convolutional Color Constancy

1 code implementation • ICCV 2021 • Mahmoud Afifi, Jonathan T. Barron, Chloe LeGendre, Yun-Ta Tsai, Francois Bleibel

We present "Cross-Camera Convolutional Color Constancy" (C5), a learning-based method, trained on images from multiple cameras, that accurately estimates a scene's illuminant color from raw images captured by a new camera previously unseen during training.

Color Constancy
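For context, an illuminant estimate like C5's is typically consumed as a per-channel (von Kries) gain on the linear raw image. The sketch below illustrates only that correction step; the illuminant value is a placeholder, not an actual C5 prediction.

```python
# Minimal sketch: applying an estimated illuminant color to white-balance a
# linear raw image with a diagonal (von Kries) correction. The illuminant
# value is a placeholder; in C5 it would be predicted by the network.
import numpy as np

def white_balance(raw_linear, illuminant_rgb):
    """Divide each channel by the estimated illuminant (normalized to G=1)."""
    illum = np.asarray(illuminant_rgb, dtype=np.float64)
    illum = illum / illum[1]                 # anchor the green channel
    balanced = raw_linear / illum            # per-channel gains
    return np.clip(balanced, 0.0, 1.0)

# Toy linear raw image under a warm (reddish) illuminant.
rng = np.random.default_rng(0)
raw = np.clip(rng.random((4, 4, 3)) * np.array([0.9, 0.6, 0.4]), 0, 1)

corrected = white_balance(raw, illuminant_rgb=[0.9, 0.6, 0.4])
print(corrected.mean(axis=(0, 1)))  # channel means roughly equalized
```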

Learning Illumination from Diverse Portraits

no code implementations • 5 Aug 2020 • Chloe LeGendre, Wan-Chun Ma, Rohit Pandey, Sean Fanello, Christoph Rhemann, Jason Dourgarian, Jay Busch, Paul Debevec

We present a learning-based technique for estimating high dynamic range (HDR), omnidirectional illumination from a single low dynamic range (LDR) portrait image captured under arbitrary indoor or outdoor lighting conditions.

Lighting Estimation
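Once an omnidirectional HDR map has been estimated, a common downstream use is image-based relighting. The sketch below shows a cosine-weighted diffuse integral over an equirectangular environment map; the map itself is random here, since predicting it from a portrait is the paper's contribution and is not reproduced.

```python
# Minimal sketch: using a predicted equirectangular HDR environment map to
# compute the diffuse irradiance arriving at a surface with a given normal.
import numpy as np

def diffuse_irradiance(env_hdr, normal):
    """Cosine-weighted integral of environment radiance over the hemisphere.

    env_hdr: (H, W, 3) linear HDR radiance in equirectangular layout
             (rows span polar angle 0..pi, columns span azimuth 0..2*pi).
    normal:  unit surface normal in world coordinates.
    """
    H, W, _ = env_hdr.shape
    theta = (np.arange(H) + 0.5) / H * np.pi
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi
    theta, phi = np.meshgrid(theta, phi, indexing="ij")

    # Direction and solid angle of each environment-map pixel.
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)
    solid_angle = np.sin(theta) * (np.pi / H) * (2.0 * np.pi / W)

    cos_term = np.clip(dirs @ np.asarray(normal, dtype=float), 0.0, None)
    weight = (cos_term * solid_angle)[..., None]
    return (env_hdr * weight).sum(axis=(0, 1))

env = np.random.default_rng(1).random((32, 64, 3)) * 2.0   # stand-in HDR map
print(diffuse_irradiance(env, normal=[0.0, 1.0, 0.0]))     # "up"-facing surface
```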

Learning Perspective Undistortion of Portraits

no code implementations • ICCV 2019 • Yajie Zhao, Zeng Huang, Tianye Li, Weikai Chen, Chloe LeGendre, Xinglei Ren, Jun Xing, Ari Shapiro, Hao Li

In contrast to the previous state-of-the-art approach, our method handles even portraits with extreme perspective distortion, as we avoid the inaccurate and error-prone step of first fitting a 3D face model.

3D Reconstruction • Camera Calibration +2
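The perspective distortion the paper undoes comes from the pinhole magnification f/z: features nearer the camera are enlarged relative to deeper ones. A minimal numeric sketch, with an assumed 10 cm nose-to-ear depth and illustrative camera distances:

```python
# Minimal sketch: why a close camera-to-subject distance distorts a portrait.
# Under a pinhole model the image magnification of a point at depth z is
# f / z, so features closer to the camera (the nose) are enlarged relative
# to deeper features (the ears) when the camera is near the face. The 10 cm
# nose-to-ear depth and the distances below are illustrative only.
FOCAL_PX = 1000.0          # hypothetical focal length in pixels
NOSE_TO_EAR_DEPTH = 0.10   # metres

for distance in (0.3, 0.6, 3.0):        # selfie range vs. long-lens portrait
    mag_nose = FOCAL_PX / distance
    mag_ear = FOCAL_PX / (distance + NOSE_TO_EAR_DEPTH)
    print(f"camera at {distance:.1f} m: "
          f"nose magnified {mag_nose / mag_ear:.2f}x relative to the ears")
```

At 0.3 m the nose plane is magnified roughly 1.3x relative to the ear plane, while at 3 m the ratio is close to 1, which is why long subject distances look more natural.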

DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality

no code implementations • CVPR 2019 • Chloe LeGendre, Wan-Chun Ma, Graham Fyffe, John Flynn, Laurent Charbonnel, Jay Busch, Paul Debevec

We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV).

Mixed Reality
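The difficulty of the task is that an LDR, limited-FOV photo clips exactly the bright sources that dominate the scene's illumination. A minimal sketch of that information loss, using a simple exposure-clip-gamma camera model with illustrative values (not DeepLight itself):

```python
# Minimal sketch: how an LDR capture discards the information an HDR
# illumination estimator must infer. Linear HDR radiance is exposed, clipped
# to the sensor's range, and gamma-encoded; bright light sources saturate and
# become indistinguishable. Values and the gamma model are illustrative only.
import numpy as np

def ldr_capture(hdr_radiance, exposure=1.0, gamma=2.2):
    """Simulate an 8-bit-style LDR observation of linear HDR radiance."""
    linear = np.clip(hdr_radiance * exposure, 0.0, 1.0)   # sensor clipping
    return linear ** (1.0 / gamma)                        # display encoding

hdr_values = np.array([0.05, 0.4, 1.0, 10.0, 500.0])      # e.g. shadow vs. sun
print(ldr_capture(hdr_values))   # 10.0 and 500.0 both map to 1.0: clipped
```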

Deep Volumetric Video From Very Sparse Multi-View Performance Capture

no code implementations • ECCV 2018 • Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, Hao Li

We present a deep learning-based volumetric capture approach for performance capture using a passive and highly sparse multi-view capture system.

Surface Reconstruction
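As a point of reference for what a passive, calibrated multi-view rig provides, the sketch below carves a classical visual hull from silhouettes; the paper's learned volumetric approach replaces this kind of hand-built reconstruction. The cameras, masks, and unit-sphere subject are all synthetic assumptions.

```python
# Minimal sketch: a classical visual-hull carve from calibrated silhouettes,
# illustrating the multi-view geometry a sparse passive rig provides. This is
# a baseline technique, not the paper's deep volumetric method.
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_points):
    """Keep a voxel centre only if it projects inside every silhouette.

    silhouettes: list of (H, W) boolean masks.
    projections: list of (3, 4) camera projection matrices.
    grid_points: (N, 3) voxel centres in world coordinates.
    """
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    keep = np.ones(len(grid_points), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit
    return keep

# Toy setup: two synthetic cameras observing a unit sphere at the origin.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.array([0.0, 0.0, 5.0])                       # looks along +Z
R2 = np.array([[0.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
t2 = np.array([0.0, 0.0, 5.0])                                      # looks along +X
projections = [K @ np.hstack([R, t[:, None]]) for R, t in [(R1, t1), (R2, t2)]]

# Each silhouette is the sphere's projected disc (radius ~ f * r / depth).
vv, uu = np.mgrid[0:64, 0:64]
disc = (uu - 32) ** 2 + (vv - 32) ** 2 <= 20 ** 2
silhouettes = [disc, disc]

grid = np.stack(np.meshgrid(*[np.linspace(-2, 2, 21)] * 3), -1).reshape(-1, 3)
kept = carve_visual_hull(silhouettes, projections, grid)
print(f"{kept.sum()} of {len(grid)} voxels survive the carve")
```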
