no code implementations • 25 Mar 2023 • Rickard Brüel-Gabrielsson, Tongzhou Wang, Manel Baradad, Justin Solomon
We introduce Deep Augmentation, a data augmentation approach that uses dropout to dynamically transform a targeted layer within a neural network, optionally combined with a stop-gradient operation, yielding significant improvements in model performance and generalization.
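A minimal sketch of what such a targeted, dropout-based layer augmentation could look like in PyTorch; the module name, placement, and hyperparameters are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class TargetedDropoutAugment(nn.Module):
    """Applies dropout to the activations of a chosen layer as a data augmentation."""
    def __init__(self, p=0.5, stop_gradient=False):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.stop_gradient = stop_gradient

    def forward(self, h):
        # h: activations of the targeted layer, shape (batch, ...)
        h_aug = self.dropout(h)
        if self.stop_gradient:
            # Optionally block gradients from flowing through the augmented branch.
            h_aug = h_aug.detach()
        return h_aug

# Hypothetical usage: insert between two blocks of an encoder, e.g.
# z = encoder_tail(TargetedDropoutAugment(p=0.3)(encoder_head(x)))
```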
no code implementations • 22 Feb 2023 • Sangnie Bhardwaj, Willie McClinton, Tongzhou Wang, Guillaume Lajoie, Chen Sun, Phillip Isola, Dilip Krishnan
In this paper, we propose a method of learning representations that are equivariant, rather than invariant, to data augmentations.
1 code implementation • 29 Nov 2022 • Manel Baradad, Chun-Fu Chen, Jonas Wulff, Tongzhou Wang, Rogerio Feris, Antonio Torralba, Phillip Isola
Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias.
1 code implementation • 28 Nov 2022 • Tongzhou Wang, Phillip Isola
Asymmetrical distance structures (quasimetrics) are ubiquitous in our lives and are gaining more attention in machine learning applications.
no code implementations • 26 Sep 2022 • Jingwei Ma, Lucy Chai, Minyoung Huh, Tongzhou Wang, Ser-Nam Lim, Phillip Isola, Antonio Torralba
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene so as to protect any photograph taken of that scene.
1 code implementation • 30 Jun 2022 • Tongzhou Wang, Simon S. Du, Antonio Torralba, Phillip Isola, Amy Zhang, Yuandong Tian
The ability to separate signal from noise, and reason with clean abstractions, is critical to intelligence.
2 code implementations • 30 Jun 2022 • Tongzhou Wang, Phillip Isola
In contrast, our proposed Poisson Quasimetric Embedding (PQE) is the first quasimetric learning formulation that is both learnable with gradient-based optimization and backed by strong performance guarantees.
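The PQE construction itself is not reproduced here; as a hedged illustration of what "learnable quasimetric" means, the toy parameterization below sums one-sided hinges of a learned embedding, which satisfies the quasimetric axioms by construction while remaining trainable with gradient descent. It is not the paper's method, only a sketch of the problem setting.

```python
import torch
import torch.nn as nn

class ToyLearnableQuasimetric(nn.Module):
    """Illustrative only -- NOT the Poisson Quasimetric Embedding."""
    def __init__(self, input_dim, latent_dim=16):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )

    def forward(self, x, y):
        u_x, u_y = self.embed(x), self.embed(y)
        # d(x, y) = sum_i max(0, u_i(y) - u_i(x)): zero when x == y, non-negative,
        # satisfies the triangle inequality, but d(x, y) != d(y, x) in general.
        return torch.relu(u_y - u_x).sum(dim=-1)

qm = ToyLearnableQuasimetric(input_dim=8)
x, y = torch.randn(4, 8), torch.randn(4, 8)
d_xy = qm(x, y)       # differentiable, so it can be trained end-to-end
print(d_xy.shape)     # torch.Size([4])
```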
3 code implementations • CVPR 2022 • George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
To efficiently obtain the initial and target network parameters for large-scale datasets, we pre-compute and store training trajectories of expert networks trained on the real dataset.
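A hedged sketch of the "pre-compute and store expert trajectories" step; the function and variable names are assumptions for illustration, not the authors' released code.

```python
import copy
import torch

def record_expert_trajectory(model, optimizer, loss_fn, loader, epochs, snapshot_every=1):
    """Train an expert network on real data, saving parameter snapshots along the way."""
    trajectory = [copy.deepcopy(model.state_dict())]  # include the initialization
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if (epoch + 1) % snapshot_every == 0:
            trajectory.append(copy.deepcopy(model.state_dict()))
    return trajectory  # later reused as initial/target parameters for trajectory matching

# e.g. torch.save(trajectory, "expert_0.pt"), so distillation never has to retrain experts
```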
no code implementations • ICLR 2022 • Tongzhou Wang, Phillip Isola
Our world is full of asymmetries.
1 code implementation • NeurIPS 2021 • Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio Torralba
We investigate a suite of image generation models that produce images from simple random processes.
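As a toy illustration of "images from simple random processes" (an assumption for illustration, not one of the paper's generative models), the snippet below samples a spatially correlated noise image with no real photographs involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Smoothed Gaussian noise: a trivial random process that still yields image-like structure.
img = gaussian_filter(rng.normal(size=(64, 64, 3)), sigma=(4, 4, 0))
img = (img - img.min()) / (img.max() - img.min())  # rescale to [0, 1] for viewing or training
```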
3 code implementations • ECCV 2020 • David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba
To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
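The paper's actual editing procedure involves more than this; the snippet below is a simplified sketch of the underlying idea only, namely storing a new key-value rule in a linear map via a minimal rank-one edit, with all names chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))        # layer weights viewed as a memory: values = W @ keys
k_star = rng.normal(size=(8,))     # new key (e.g. a context feature to rewrite)
v_star = rng.normal(size=(5,))     # desired value for that key (the edited rule's output)

# Minimal-norm rank-one update so that the edited layer maps k_star exactly to v_star.
delta = np.outer(v_star - W @ k_star, k_star) / (k_star @ k_star)
W_edited = W + delta

print(np.allclose(W_edited @ k_star, v_star))  # True: the new rule is stored
```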
2 code implementations • CVPR 2020 • Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, Antonio Torralba
We introduce a simple but effective unsupervised method for generating realistic and diverse images.
2 code implementations • 20 May 2020 • Tongzhou Wang, Phillip Isola
Contrastive representation learning has been outstandingly successful in practice.
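Assuming this entry is the alignment/uniformity analysis of contrastive learning, a minimal sketch of the two quantities on L2-normalized features might look as follows; it is an illustration, not necessarily the paper's released code.

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # x, y: (N, D) normalized features of positive pairs; small when pairs stay close
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # log of the average pairwise Gaussian potential; small when features spread out
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

x = F.normalize(torch.randn(128, 32), dim=1)
y = F.normalize(x + 0.1 * torch.randn_like(x), dim=1)  # noisy "positive" views
print(align_loss(x, y).item(), uniform_loss(x).item())
```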
2 code implementations • 27 Nov 2018 • Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros
Model distillation aims to transfer the knowledge of a complex model into a simpler one.
no code implementations • NeurIPS 2018 • Tongzhou Wang, Yi Wu, David A. Moore, Stuart J. Russell
The learned neural proposals generalize to occurrences of common structural motifs across different models, allowing for the construction of a library of learned inference primitives that can accelerate inference on unseen models with no model-specific training required.
1 code implementation • ICCV 2017 • Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, Ren Ng
We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction).