no code implementations • 20 Feb 2024 • Kangle Deng, Timothy Omernick, Alexander Weiss, Deva Ramanan, Jun-Yan Zhu, Tinghui Zhou, Maneesh Agrawala
We introduce LightControlNet, a new text-to-image model based on the ControlNet architecture, which allows the specification of the desired lighting as a conditioning image to the model.
no code implementations • ECCV 2020 • Andrew Liu, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros, Noah Snavely
We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.
1 code implementation • 21 Oct 2019 • Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell
Interestingly, the processing model's ability to enhance recognition quality transfers across recognition models with different architectures, recognized categories, tasks, and training datasets.
2 code implementations • ICLR 2019 • Zhuang Liu, Ming-Jie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
Our observations are consistent across multiple network architectures, datasets, and tasks. They imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model; 2) the learned "important" weights of the large model are typically not useful for the small pruned model; and 3) the pruned architecture itself, rather than a set of inherited "important" weights, is what matters most for the efficiency of the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.
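A minimal sketch of the structured-pruning setting studied here, assuming L1-norm channel ranking as the pruning criterion (one common choice, not necessarily the only one the paper evaluates): output channels of a convolutional weight tensor are ranked by magnitude and the smallest are dropped, leaving a compact architecture that can then be retrained from scratch.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Rank output channels of a conv weight (out, in, kh, kw) by L1 norm
    and keep only the largest fraction. Returns pruned weight + kept indices."""
    norms = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weight.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

The paper's finding is that the *shape* produced by this step, not the inherited values in `pruned`, is what drives efficiency: reinitializing the kept channels randomly and training from scratch typically matches or beats fine-tuning the inherited weights.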
13 code implementations • ICCV 2019 • Caroline Chan, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros
This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves.
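One ingredient such transfer pipelines need is mapping the source subject's pose into the target's body proportions before synthesis. A hypothetical sketch of that normalization, assuming 2D keypoints and a simple scale-and-translate fit between anchor segments (an illustrative simplification, not the authors' exact procedure):

```python
import numpy as np

def normalize_pose(src_kpts, src_anchor, tgt_anchor):
    """Map source 2D keypoints into the target's frame via a scale-and-
    translate fit between two anchor segments (e.g. head-to-ankle).
    src_kpts: (N, 2); each anchor: (top_xy, bottom_xy)."""
    src_h = src_anchor[1][1] - src_anchor[0][1]   # source body height
    tgt_h = tgt_anchor[1][1] - tgt_anchor[0][1]   # target body height
    scale = tgt_h / src_h
    return (src_kpts - np.array(src_anchor[0])) * scale + np.array(tgt_anchor[0])

# A source point halfway down a height-2 body maps halfway down a height-4 body:
out = normalize_pose(np.array([[0.0, 1.0]]), ((0.0, 0.0), (0.0, 2.0)),
                     ((1.0, 1.0), (1.0, 5.0)))
print(out)  # [[1. 3.]]
```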
1 code implementation • 24 May 2018 • Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, Noah Snavely
The view synthesis problem, generating novel views of a scene from known imagery, has garnered recent attention due in part to compelling applications in virtual and augmented reality.
2 code implementations • CVPR 2017 • Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences.
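The usual supervisory signal in such unsupervised frameworks is view synthesis: predicted depth and camera motion are used to warp a nearby frame into the current view, and the photometric difference serves as the loss. A deliberately simplified sketch, assuming a pure horizontal camera translation (rectified setup) and nearest-neighbor sampling rather than the differentiable bilinear sampling a real pipeline would use:

```python
import numpy as np

def photometric_loss(target, source, depth, f, tx):
    """Warp `source` into the target view assuming a horizontal camera
    translation tx: pixel shift (disparity) = f * tx / depth."""
    h, w = target.shape
    loss = 0.0
    for y in range(h):
        for x in range(w):
            sx = int(round(x + f * tx / depth[y, x]))  # reprojected column
            sx = min(max(sx, 0), w - 1)
            loss += abs(target[y, x] - source[y, sx])
    return loss / (h * w)

rng = np.random.default_rng(1)
target = rng.normal(size=(8, 32))
source = np.roll(target, 4, axis=1)  # source view shifted by 4 pixels
loss_good = photometric_loss(target, source, np.ones((8, 32)), 1.0, 4.0)
loss_bad = photometric_loss(target, source, 2 * np.ones((8, 32)), 1.0, 4.0)
```

With the correct depth the reprojection lines up (up to image-border effects), so `loss_good` is much smaller than `loss_bad`; this gap is what lets photometric error train depth and pose networks without ground-truth labels.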
no code implementations • CVPR 2017 • Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, Jitendra Malik
We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view.
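The key to differentiability in such formulations is treating ray termination probabilistically: for a ray crossing voxels with occupancy probabilities o_1..o_n (front to back), the probability of terminating at voxel i is o_i · Π_{j<i}(1 − o_j), so an expected termination depth is a smooth function of the occupancies. A minimal sketch of that quantity (an assumed simplification of the paper's event-probability formulation):

```python
import numpy as np

def expected_depth(occ, depths):
    """occ: per-voxel occupancy probabilities along a ray, front to back.
    Expected depth under the termination distribution; leftover probability
    (ray escapes the grid) is assigned to the last depth."""
    pass_through = np.concatenate([[1.0], np.cumprod(1.0 - occ)[:-1]])
    p_stop = occ * pass_through       # P(ray terminates at voxel i)
    p_escape = np.prod(1.0 - occ)     # P(ray passes through everything)
    return (p_stop * depths).sum() + p_escape * depths[-1]

d = expected_depth(np.array([0.0, 1.0, 0.0]), np.array([1.0, 2.0, 3.0]))
print(d)  # 2.0 — a hard occupancy at the second voxel stops the ray there
```

Comparing this expectation against an observed depth (or silhouette) from an arbitrary view yields gradients with respect to every occupancy the ray touches.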
176 code implementations • CVPR 2017 • Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems.
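In this conditional setup the discriminator sees the input image paired with either the real or the generated output, and the generator is trained with an adversarial term plus a reconstruction term (the pix2pix paper uses L1, weighted by a large lambda). A sketch of the generator-side loss, assuming the non-saturating log form:

```python
import numpy as np

def generator_loss(d_fake_prob, fake, real, lam=100.0):
    """d_fake_prob: discriminator probabilities D(x, G(x)) in (0, 1).
    Non-saturating GAN term plus lambda-weighted L1 reconstruction."""
    gan = -np.log(np.clip(d_fake_prob, 1e-7, 1.0)).mean()
    l1 = np.abs(fake - real).mean()
    return gan + lam * l1
```

When the discriminator is fully fooled and the output matches the target, the loss vanishes; the L1 term keeps outputs close to ground truth while the adversarial term pushes them toward the distribution of real images.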
4 code implementations • 11 May 2016 • Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, Alexei A. Efros
We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints.
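The appearance-flow idea behind this line of work is to predict, for each output pixel, *where in the input image to sample*, and then reconstruct the target view by differentiable bilinear sampling. A minimal sketch of the sampling step (the flow fields here would come from a trained network; these are just arrays):

```python
import numpy as np

def sample_with_flow(image, flow_x, flow_y):
    """Reconstruct an output image by bilinearly sampling `image` at the
    per-pixel source coordinates (flow_x, flow_y)."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            sx, sy = flow_x[y, x], flow_y[y, x]
            x0 = int(np.clip(np.floor(sx), 0, w - 2))
            y0 = int(np.clip(np.floor(sy), 0, h - 2))
            ax, ay = sx - x0, sy - y0   # bilinear interpolation weights
            out[y, x] = ((1 - ay) * ((1 - ax) * image[y0, x0] + ax * image[y0, x0 + 1])
                         + ay * ((1 - ax) * image[y0 + 1, x0] + ax * image[y0 + 1, x0 + 1]))
    return out
```

Because the sampled output pixels copy colors from the input rather than regressing them from scratch, synthesized views tend to preserve the input's texture detail.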
no code implementations • CVPR 2016 • Tinghui Zhou, Philipp Krähenbühl, Mathieu Aubry, Qi-Xing Huang, Alexei A. Efros
We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth.
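Cycle consistency here means that composing predicted correspondence maps around a loop (e.g. real → synthetic → synthetic → real) should return each point to where it started, which is how ground-truth synthetic-to-synthetic flows can supervise maps that have no direct labels. A sketch of the consistency check on 1-D index maps (hypothetical maps, for illustration only):

```python
import numpy as np

def cycle_error(f_rs, f_ss, f_sr):
    """Compose real->synthetic, synthetic->synthetic (ground truth), and
    synthetic->real index maps; measure deviation from the identity."""
    n = len(f_rs)
    composed = f_sr[f_ss[f_rs[np.arange(n)]]]
    return np.abs(composed - np.arange(n)).mean()
```

A nonzero error on the closed loop is a training signal for the real-image correspondences even though no real-image annotations exist.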
no code implementations • ICCV 2015 • Tinghui Zhou, Philipp Krähenbühl, Alexei A. Efros
We propose a data-driven approach for intrinsic image decomposition, which is the process of inferring the confounding factors of reflectance and shading in an image.
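The underlying model is multiplicative, I = R · S (reflectance times shading), which becomes additive in log space; the data-driven priors decide *which* of the many valid factorizations to prefer, but the constraint itself is simple to state. A sketch of the residual a decomposition must satisfy:

```python
import numpy as np

def decomposition_residual(image, reflectance, shading, eps=1e-8):
    """Residual of the multiplicative intrinsic model I = R * S, measured
    in log space where the factorization is additive:
    log I = log R + log S."""
    return np.abs(np.log(image + eps)
                  - np.log(reflectance + eps)
                  - np.log(shading + eps)).mean()
```

Any exact factorization drives the residual to zero, which is precisely why extra priors over R and S are needed to make the decomposition well-posed.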
no code implementations • CVPR 2015 • Tinghui Zhou, Yong Jae Lee, Stella X. Yu, Alyosha A. Efros
Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set.
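A FlowWeb stores a pairwise flow field between every two images in the set, and joint alignment enforces transitivity: the direct displacement from image i to image k should agree with going through any intermediate image j. A 1-D sketch of that transitivity check (an assumed simplification of the full 2-D criterion):

```python
import numpy as np

def transitivity_error(t_ij, t_jk, t_ik):
    """1-D integer displacement fields, one entry per position. For each
    position p, compare the direct map p + t_ik[p] with the composed map
    through image j."""
    p = np.arange(len(t_ij))
    through_j = p + t_ij                    # corresponding position in image j
    composed = through_j + t_jk[through_j]  # then on to image k
    direct = p + t_ik
    return np.abs(composed - direct).mean()
```

Minimizing this error jointly over all pairs is what weaves the individually noisy pairwise flows into a globally consistent correspondence for the whole image set.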