Search Results for author: Tinghui Zhou

Found 13 papers, 7 papers with code

FlashTex: Fast Relightable Mesh Texturing with LightControlNet

no code implementations 20 Feb 2024 Kangle Deng, Timothy Omernick, Alexander Weiss, Deva Ramanan, Jun-Yan Zhu, Tinghui Zhou, Maneesh Agrawala

We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture, which allows the specification of the desired lighting as a conditioning image to the model.

Learning to Factorize and Relight a City

no code implementations ECCV 2020 Andrew Liu, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros, Noah Snavely

We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.

Intrinsic Image Decomposition

Exploring Simple and Transferable Recognition-Aware Image Processing

1 code implementation 21 Oct 2019 Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell

Interestingly, the processing model's ability to enhance recognition quality transfers when evaluated on recognition models with different architectures, recognized categories, tasks, and training datasets.

Image Retrieval, Recommendation Systems
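The core idea described above, training the image-processing model against a downstream recognizer, can be sketched as a combined objective. This is a minimal illustration, not the paper's implementation; the weight `lam` and its value are hypothetical.

```python
def combined_loss(processing_loss, recognition_loss, lam=0.1):
    """Toy recognition-aware objective.

    processing_loss: quality term of the image-processing task
        (e.g. reconstruction error of an enhancement model).
    recognition_loss: classification loss of a fixed, pretrained
        recognizer evaluated on the processed output.
    lam: trade-off weight (hypothetical value, for illustration).
    """
    return processing_loss + lam * recognition_loss

# With lam = 0.1, a quality loss of 1.0 and a recognition loss of 2.0
# combine into a single scalar the processing model is trained on.
print(combined_loss(1.0, 2.0))
```

The key point from the abstract is that a processing model trained with one recognizer's loss can still help other recognizers.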

Rethinking the Value of Network Pruning

2 code implementations ICLR 2019 Zhuang Liu, Ming-Jie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell

Our observations are consistent across multiple network architectures, datasets, and tasks, which implies that:

1) training a large, over-parameterized model is often not necessary to obtain an efficient final model;
2) the learned "important" weights of the large model are typically not useful for the small pruned model;
3) the pruned architecture itself, rather than a set of inherited "important" weights, is what contributes most to the efficiency of the final model.

This suggests that in some cases pruning can be useful as an architecture search paradigm.

Network Pruning, Neural Architecture Search
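For context, the paper re-examines pipelines built on criteria like magnitude pruning, where the smallest-|w| weights are removed and the rest inherited. A minimal sketch of that criterion (on a flat weight list, not the paper's code):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    This is the classic magnitude criterion the paper re-examines:
    `weights` is a flat list of values, `sparsity` the fraction to drop.
    The paper's finding is that retraining the resulting architecture
    from scratch often matches fine-tuning the surviving weights.
    """
    k = int(len(weights) * sparsity)              # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

# Dropping half of six weights removes the three smallest in magnitude:
print(magnitude_prune([0.5, -0.1, 0.9, 0.05, -0.7, 0.2], sparsity=0.5))
# [0.5, 0.0, 0.9, 0.0, -0.7, 0.0]
```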

Everybody Dance Now

13 code implementations ICCV 2019 Caroline Chan, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros

This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves.

Face Generation, Image-to-Image Translation +1

Stereo Magnification: Learning View Synthesis using Multiplane Images

1 code implementation 24 May 2018 Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, Noah Snavely

The view synthesis problem (generating novel views of a scene from known imagery) has garnered recent attention due in part to compelling applications in virtual and augmented reality.

Novel View Synthesis
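A multiplane image is rendered by alpha-compositing its fronto-parallel layers back to front with the standard "over" operator. A single-pixel, grayscale sketch of that compositing step (a toy, not the paper's renderer):

```python
def composite_mpi(planes):
    """Render one pixel from multiplane-image layers.

    `planes` lists (color, alpha) pairs ordered back to front; the
    "over" operator blends each layer on top of what is behind it.
    """
    out = 0.0
    for color, alpha in planes:                # back-to-front traversal
        out = color * alpha + out * (1.0 - alpha)
    return out

# An opaque back plane (value 0.8) half-covered by a 50%-transparent
# front plane (value 0.2) blends to 0.5:
print(composite_mpi([(0.8, 1.0), (0.2, 0.5)]))  # 0.5
```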

Unsupervised Learning of Depth and Ego-Motion from Video

2 code implementations CVPR 2017 Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe

We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences.

Depth And Camera Motion, Motion Estimation +1
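The unsupervised signal in this line of work is photometric: predicted depth and pose determine where each target pixel projects into a source frame, and the warped source should match the target. A 1-D toy sketch of that loss (the projection coordinates are supplied directly here; computing them from depth and pose is the part the network learns):

```python
def bilinear_sample_1d(signal, x):
    """Linear sampling of a 1-D signal at continuous coordinate x
    (clamped to the valid range); the 2-D analogue keeps the warp
    differentiable."""
    x = min(max(x, 0.0), len(signal) - 1.0)
    x0 = int(x)
    x1 = min(x0 + 1, len(signal) - 1)
    w = x - x0
    return signal[x0] * (1.0 - w) + signal[x1] * w

def photometric_loss(target, source, coords):
    """Mean absolute error between the target frame and the source
    frame warped to it; coords[i] is where target pixel i projects
    into the source."""
    warped = [bilinear_sample_1d(source, c) for c in coords]
    return sum(abs(t - w) for t, w in zip(target, warped)) / len(target)

# A source shifted by exactly one pixel is explained perfectly:
print(photometric_loss([1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```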

Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency

no code implementations CVPR 2017 Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, Jitendra Malik

We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view.
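One way to make ray consistency differentiable is to treat per-voxel occupancy as a probability and derive the distribution over where a ray terminates; gradients of that distribution flow back to the 3D shape. A minimal sketch of those termination probabilities along one ray (a simplified reading of the idea, not the paper's formulation verbatim):

```python
def ray_stop_probs(occupancies):
    """Probability that a ray terminates in each voxel it crosses,
    given per-voxel occupancy probabilities along the ray; the last
    entry is the probability the ray escapes entirely."""
    probs, transmitted = [], 1.0
    for o in occupancies:
        probs.append(transmitted * o)   # ray reached this voxel and stopped
        transmitted *= (1.0 - o)        # ray passed through
    probs.append(transmitted)           # ray escaped without stopping
    return probs

# Two voxels at 50% occupancy: stop at the first, the second, or escape.
print(ray_stop_probs([0.5, 0.5]))  # [0.5, 0.25, 0.25]
```

Because the output is a proper distribution over ray events, any 2D observation (silhouette, depth, color) can score each event and supply a training gradient.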

View Synthesis by Appearance Flow

4 code implementations 11 May 2016 Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, Alexei A. Efros

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.

Novel View Synthesis
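An appearance flow is a field of sampling coordinates: each output pixel copies (with interpolation) from a predicted location in the input view, rather than being generated from scratch. A 1-D toy sketch of applying such a flow (illustrative only; the learned part is predicting `flow`):

```python
def apply_appearance_flow(source_row, flow):
    """Synthesize one row of the novel view by resampling the source.

    flow[i] is the continuous source coordinate that output pixel i
    samples from; linear interpolation keeps the operation
    differentiable so the flow field can be learned end to end.
    """
    out = []
    for x in flow:
        x = min(max(x, 0.0), len(source_row) - 1.0)   # clamp to bounds
        x0 = int(x)
        x1 = min(x0 + 1, len(source_row) - 1)
        w = x - x0
        out.append(source_row[x0] * (1.0 - w) + source_row[x1] * w)
    return out

# Sampling halfway between neighboring pixels blends their values:
print(apply_appearance_flow([10.0, 20.0, 30.0], [0.0, 0.5, 2.0]))  # [10.0, 15.0, 30.0]
```

Reusing source pixels this way preserves texture detail that a pixel-generating decoder tends to blur.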

Learning Dense Correspondence via 3D-guided Cycle Consistency

no code implementations CVPR 2016 Tinghui Zhou, Philipp Krähenbühl, Mathieu Aubry, Qi-Xing Huang, Alexei A. Efros

We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth.
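The supervision here is a four-step cycle: synthetic to real, real to real, real back to synthetic, which should reproduce the renderer's ground-truth synthetic-to-synthetic correspondence. A toy sketch with dict-based pixel maps (an illustration of the constraint, not the paper's ConvNet training code):

```python
def cycle_error(syn_to_real_a, real_a_to_real_b, real_b_to_syn, gt_syn_to_syn):
    """Count pixels whose composed synthetic->real->real->synthetic
    correspondence disagrees with the renderer's ground truth.

    Each map is a dict from pixel index to pixel index (a stand-in
    for a dense correspondence field).
    """
    errors = 0
    for p, gt in gt_syn_to_syn.items():
        q = real_b_to_syn[real_a_to_real_b[syn_to_real_a[p]]]
        errors += (q != gt)
    return errors

# A perfectly cycle-consistent toy example over three pixels:
print(cycle_error({0: 1, 1: 2, 2: 0},
                  {1: 1, 2: 2, 0: 0},
                  {1: 0, 2: 1, 0: 2},
                  {0: 0, 1: 1, 2: 2}))  # 0
```

The appeal of the construction is that only synthetic-to-synthetic pairs need ground truth, yet the cycle constrains the real-image correspondences.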

Learning Data-driven Reflectance Priors for Intrinsic Image Decomposition

no code implementations ICCV 2015 Tinghui Zhou, Philipp Krähenbühl, Alexei A. Efros

We propose a data-driven approach for intrinsic image decomposition, which is the process of inferring the confounding factors of reflectance and shading in an image.

Image Relighting, Intrinsic Image Decomposition
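The standard intrinsic-image model behind this task is image = reflectance × shading, applied per pixel; the split is ill-posed, which is why a learned reflectance prior helps. A grayscale toy showing the model itself (both factors are simply given here):

```python
def shading_from_reflectance(image, reflectance, eps=1e-6):
    """Recover shading under the intrinsic model image = reflectance * shading.

    Per-pixel grayscale division; eps guards against zero reflectance.
    In the actual task only `image` is observed, and a data-driven
    prior constrains which reflectances are plausible.
    """
    return [i / max(r, eps) for i, r in zip(image, reflectance)]

# Uniform lighting: every pixel's intensity is half its reflectance.
print(shading_from_reflectance([0.2, 0.4, 0.1], [0.4, 0.8, 0.2]))  # shading ~0.5 everywhere
```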

FlowWeb: Joint Image Set Alignment by Weaving Consistent, Pixel-Wise Correspondences

no code implementations CVPR 2015 Tinghui Zhou, Yong Jae Lee, Stella X. Yu, Alyosha A. Efros

Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set.

Optical Flow Estimation
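The "weaving" in FlowWeb comes from transitivity: the flow from image i to k should agree with the flow from i to j composed with the flow from j to k. A 1-D toy check of that constraint on integer pixel coordinates (an illustration of the consistency condition, not the paper's optimization):

```python
def transitivity_violation(f_ij, f_jk, f_ik, p):
    """FlowWeb-style transitivity check for 1-D displacements.

    Each map is a dict from a pixel coordinate in the source image to
    its displacement toward the target image. The composed path
    i -> j -> k should agree with the direct flow i -> k; the return
    value is the absolute disagreement (0 means consistent).
    """
    via_j = f_ij[p] + f_jk[p + f_ij[p]]   # displacement along i -> j -> k
    return abs(via_j - f_ik[p])

# Consistent toy flows: pixel 0 moves +1 into j, then +2 into k,
# matching a direct +3 displacement from i to k.
print(transitivity_violation({0: 1}, {1: 2}, {0: 3}, 0))  # 0
```

Enforcing many such constraints jointly is what lets a poorly aligned image set correct its pairwise flows without any annotations.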
