no code implementations • 22 May 2023 • Prafull Sharma, Julien Philip, Michaël Gharbi, William T. Freeman, Fredo Durand, Valentin Deschaintre
We present a method capable of selecting the regions of a photograph exhibiting the same material as an artist-chosen area.
no code implementations • 22 Jul 2022 • Prafull Sharma, Ayush Tewari, Yilun Du, Sergey Zakharov, Rares Ambrus, Adrien Gaidon, William T. Freeman, Fredo Durand, Joshua B. Tenenbaum, Vincent Sitzmann
We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene.
no code implementations • 8 May 2022 • Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann
Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding.
2 code implementations • CVPR 2022 • Caroline Chan, Fredo Durand, Phillip Isola
We introduce a geometry loss which predicts depth information from the image features of a line drawing, and a semantic loss which matches the CLIP features of a line drawing with its corresponding photograph.
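As a toy illustration only (not the authors' code), such a two-term objective could be combined as a weighted sum of a depth-regression term and a feature-matching term; the depth maps and 512-dimensional embeddings below are random stand-ins for network outputs and CLIP-style features:

```python
import numpy as np

def geometry_loss(pred_depth, ref_depth):
    """L1 penalty between depth predicted from the drawing and a reference depth map."""
    return np.abs(pred_depth - ref_depth).mean()

def semantic_loss(drawing_feat, photo_feat):
    """Cosine distance between embeddings of the drawing and its photograph
    (standing in here for CLIP features)."""
    cos = np.dot(drawing_feat, photo_feat) / (
        np.linalg.norm(drawing_feat) * np.linalg.norm(photo_feat))
    return 1.0 - cos

def total_loss(pred_depth, ref_depth, drawing_feat, photo_feat,
               w_geo=1.0, w_sem=1.0):
    """Hypothetical combination; the weights are placeholders, not the paper's."""
    return (w_geo * geometry_loss(pred_depth, ref_depth)
            + w_sem * semantic_loss(drawing_feat, photo_feat))

# Random stand-ins for network outputs.
rng = np.random.default_rng(0)
d_pred, d_ref = rng.random((32, 32)), rng.random((32, 32))
f_draw = rng.standard_normal(512)
loss = total_loss(d_pred, d_ref, f_draw, f_draw)  # identical features: semantic term is ~0
```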
no code implementations • ICCV 2021 • Prafull Sharma, Miika Aittala, Yoav Y. Schechner, Antonio Torralba, Gregory W. Wornell, William T. Freeman, Fredo Durand
We present a passive non-line-of-sight method that infers the number of people or activity of a person from the observation of a blank wall in an unknown room.
1 code implementation • NeurIPS 2021 • Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Fredo Durand
In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation.
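The key property of such a representation is that color is read out with a single network evaluation per ray, rather than by integrating samples along the ray. A minimal untrained sketch, assuming a 6D Plücker ray parameterization and a tiny two-layer MLP (both are our choices for illustration, not necessarily the paper's exact design):

```python
import numpy as np

def plucker(origin, direction):
    """6D Plücker coordinates (d, o x d) of a ray, a parameterization that
    depends only on the line, not the point on it."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

class TinyLFN:
    """Untrained stand-in for a light field network: one MLP pass maps ray -> RGB."""
    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((6, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, 3)) * 0.1

    def __call__(self, ray6):
        h = np.maximum(ray6 @ self.w1, 0.0)          # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # sigmoid -> RGB in [0, 1]

net = TinyLFN()
rgb = net(plucker(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```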
no code implementations • 1 Jan 2021 • Spandan Madan, Timothy Henry, Jamell Arthur Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, Fredo Durand, Hanspeter Pfister, Xavier Boix
We find that learning category and viewpoint in separate networks compared to a shared one leads to an increase in selectivity and invariance, as separate networks are not forced to preserve information about both category and viewpoint.
1 code implementation • NeurIPS 2019 • Miika Aittala, Prafull Sharma, Lukas Murmann, Adam B. Yedidia, Gregory W. Wornell, William T. Freeman, Fredo Durand
We recover a video of the motion taking place in a hidden scene by observing changes in indirect illumination in a nearby uncalibrated visible region.
no code implementations • ICCV 2019 • Lukas Murmann, Michael Gharbi, Miika Aittala, Fredo Durand
Collections of images under a single, uncontrolled illumination have enabled the rapid advancement of core computer vision tasks like classification, detection, and segmentation.
no code implementations • ICCV 2019 • Guha Balakrishnan, Adrian V. Dalca, Amy Zhao, John V. Guttag, Fredo Durand, William T. Freeman
We introduce visual deprojection: the task of recovering an image or video that has been collapsed along a dimension.
1 code implementation • 27 Jun 2019 • Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph.
1 code implementation • 23 Oct 2018 • Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau
Texture, highlights, and shading are some of many visual cues that allow humans to perceive material appearance in single pictures.
no code implementations • ECCV 2018 • Miika Aittala, Fredo Durand
We propose a neural approach for fusing an arbitrary-length burst of photographs suffering from severe camera shake and noise into a sharp and noise-free image.
1 code implementation • 27 Jul 2018 • Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand
While automatic text extraction works well on infographics, computer vision approaches trained on natural images fail to identify the stand-alone visual elements in infographics, or 'icons'.
1 code implementation • CVPR 2018 • Guha Balakrishnan, Amy Zhao, Adrian V. Dalca, Fredo Durand, John Guttag
Given an image of a person and a desired pose, we produce a depiction of that person in that pose, retaining the appearance of both the person and background.
no code implementations • ICCV 2017 • Katherine L. Bouman, Vickie Ye, Adam B. Yedidia, Fredo Durand, Gregory W. Wornell, Antonio Torralba, William T. Freeman
We show that walls and other obstructions with edges can be exploited as naturally-occurring "cameras" that reveal the hidden scenes beyond them.
1 code implementation • 26 Sep 2017 • Zoya Bylinskii, Sami Alsheikh, Spandan Madan, Adria Recasens, Kimberli Zhong, Hanspeter Pfister, Fredo Durand, Aude Oliva
Second, we use these predicted text tags as a supervisory signal to localize the most diagnostic visual elements within the infographic, i.e., visual hashtags.
1 code implementation • 8 Aug 2017 • Zoya Bylinskii, Nam Wook Kim, Peter O'Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, Aaron Hertzmann
Our models are neural networks trained on human clicks and importance annotations on hundreds of designs.
no code implementations • IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 39, NO. 4 2017 • Abe Davis, Katherine L. Bouman, Justin G. Chen, Michael Rubinstein, Oral Buyukozturk, Fredo Durand, William T. Freeman
The estimation of material properties is important for scene understanding, with many applications in vision, robotics, and structural engineering.
no code implementations • 16 Feb 2017 • Nam Wook Kim, Zoya Bylinskii, Michelle A. Borkin, Krzysztof Z. Gajos, Aude Oliva, Fredo Durand, Hanspeter Pfister
In this paper, we present BubbleView, an alternative methodology for eye tracking using discrete mouse clicks to measure which information people consciously choose to examine.
no code implementations • 13 Dec 2016 • Ronnachai Jaroensri, Amy Zhao, Guha Balakrishnan, Derek Lo, Jeremy Schmahmann, John Guttag, Fredo Durand
The performance of our system is comparable to that of a group of ataxia specialists in terms of mean error and correlation, and our system's predictions were consistently within the range of inter-rater variability.
no code implementations • CVPR 2015 • YiChang Shih, Dilip Krishnan, Fredo Durand, William T. Freeman
For single-pane windows, ghosting cues arise from shifted reflections on the two surfaces of the glass pane.
no code implementations • CVPR 2015 • Abe Davis, Katherine L. Bouman, Justin G. Chen, Michael Rubinstein, Fredo Durand, William T. Freeman
The estimation of material properties is important for scene understanding, with many applications in vision, robotics, and structural engineering.
no code implementations • CVPR 2015 • Tianfan Xue, Hossein Mobahi, Fredo Durand, William T. Freeman
We pose and solve a generalization of the aperture problem for moving refractive elements.
no code implementations • CVPR 2015 • Mohamed Elgharib, Mohamed Hefeeda, Fredo Durand, William T. Freeman
Video magnification reveals subtle variations that would be otherwise invisible to the naked eye.
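The basic Eulerian idea behind such methods is to bandpass each pixel's intensity over time and amplify only the selected temporal frequency band. A self-contained sketch (a simplified FFT-based filter, not the paper's pipeline) on a synthetic grayscale video with an imperceptible 1 Hz flicker:

```python
import numpy as np

def magnify(frames, alpha, lo, hi, fps):
    """Amplify temporal variation in the band [lo, hi] Hz (Eulerian-style sketch).
    frames: (T, H, W) grayscale video."""
    F = np.fft.rfft(frames, axis=0)                       # per-pixel temporal spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    F[band] *= (1.0 + alpha)                              # boost only the chosen band
    return np.fft.irfft(F, n=frames.shape[0], axis=0)

# Synthetic video: static 0.5-gray scene plus a tiny 1 Hz flicker (16 fps, 4 s).
t = np.arange(64) / 16.0
frames = 0.5 + 0.001 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((64, 4, 4))
out = magnify(frames, alpha=50.0, lo=0.5, hi=1.5, fps=16.0)
```

After magnification, the temporal variation at each pixel is roughly 51x larger while the static background is untouched.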
no code implementations • CVPR 2013 • Guha Balakrishnan, Fredo Durand, John Guttag
We extract heart rate and beat lengths from videos by measuring subtle head motion caused by the Newtonian reaction to the influx of blood at each beat.
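A toy version of the core signal-processing step: average many tracked head-point trajectories into one motion signal, restrict to plausible heart-rate frequencies, and read the spectral peak. The tracks below are synthetic (a 1.2 Hz bob plus noise), and the helper names are ours, not the authors':

```python
import numpy as np

def heart_rate_bpm(traj_y, fps, lo=0.75, hi=3.0):
    """Estimate pulse from vertical head-point trajectories traj_y of shape
    (T, n_points): combine tracks, then take the spectral peak in 45-180 bpm."""
    sig = traj_y.mean(axis=1) - traj_y.mean()      # combined, zero-mean motion signal
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic tracks: 1.2 Hz ballistocardiac bob plus noise (72 bpm ground truth).
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 30)                       # 10 s at 30 fps
traj = 0.2 * np.sin(2 * np.pi * 1.2 * t)[:, None] + 0.05 * rng.standard_normal((len(t), 8))
bpm = heart_rate_bpm(traj, fps=30.0)               # recovers ~72 bpm
```

Averaging the eight noisy tracks suppresses tracking noise, so the cardiac peak dominates the restricted band.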