Search Results for author: Yonglong Tian

Found 25 papers, 10 papers with code

Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision?

1 code implementation · 20 Oct 2022 · Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, Russ Tedrake

Decentralized learning has been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems.

Contrastive Learning · Representation Learning +1

Self-supervision through Random Segments with Autoregressive Coding (RandSAC)

no code implementations · 22 Mar 2022 · Tianyu Hua, Yonglong Tian, Sucheng Ren, Michalis Raptis, Hang Zhao, Leonid Sigal

We illustrate that randomized serialization of the segments significantly improves performance and yields a distribution over spatially long (across-segment) and spatially short (within-segment) predictions, which are effective for feature learning.

Representation Learning · Self-Supervised Learning

Co-advise: Cross Inductive Bias Distillation

no code implementations · CVPR 2022 · Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao

Transformers have recently been adapted from the natural language processing community as a promising substitute for convolution-based neural networks in visual learning tasks.

Inductive Bias

Simple Distillation Baselines for Improving Small Self-supervised Models

1 code implementation · 21 Jun 2021 · Jindong Gu, Wei Liu, Yonglong Tian

While large self-supervised models have rivalled the performance of their supervised counterparts, small models still struggle.

Generative Models as a Data Source for Multiview Representation Learning

1 code implementation · ICLR 2022 · Ali Jahanian, Xavier Puig, Yonglong Tian, Phillip Isola

We investigate this question in the setting of learning general-purpose visual representations from a black-box generative model rather than directly from data.

Representation Learning

Divide and Contrast: Self-supervised Learning from Uncurated Data

no code implementations · ICCV 2021 · Yonglong Tian, Olivier J. Henaff, Aaron van den Oord

Self-supervised learning holds promise in leveraging large amounts of unlabeled data; however, much of its progress has thus far been limited to highly curated pre-training data such as ImageNet.

Contrastive Learning · Self-Supervised Image Classification +1

Addressing Feature Suppression in Unsupervised Visual Representations

no code implementations · 17 Dec 2020 · Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, Dina Katabi

However, contrastive learning is susceptible to feature suppression, i.e., it may discard important information relevant to the task of interest and learn irrelevant features.

Contrastive Learning · Representation Learning

What Makes for Good Views for Contrastive Learning?

1 code implementation · NeurIPS 2020 · Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola

Contrastive learning between multiple views of the data has recently achieved state-of-the-art performance in the field of self-supervised representation learning.

Contrastive Learning · Data Augmentation +8

Supervised Contrastive Learning

20 code implementations · NeurIPS 2020 · Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

Class-Incremental Learning · Contrastive Learning +4
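The supervised contrastive (SupCon) objective described above extends the self-supervised contrastive loss by treating all same-class samples in a batch as positives for each anchor. A minimal NumPy sketch of that loss, assuming L2-normalizable embeddings and a batch of labels (variable names and the temperature value are illustrative, not taken from the paper's released code):

```python
import numpy as np

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over embeddings z of shape (N, D)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / temperature                       # pairwise similarities
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    # exclude each anchor from its own softmax denominator
    sim_masked = np.where(mask_self, -1e9, sim)
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    # positives: same label, not the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    # mean log-probability of positives per anchor, averaged over anchors
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
loss = supcon_loss(z, labels)
```

The key difference from the self-supervised variant is the `pos` mask: with labels available, every same-class sample contributes as a positive rather than only an augmented view of the same image.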

Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?

3 code implementations · ECCV 2020 · Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola

The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test time tasks with limited data and low computational cost.

Few-Shot Image Classification · General Classification

Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate

no code implementations · 28 Sep 2019 · Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit

Uncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas.

Regression
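The title states the core idea: instead of sampling-based uncertainty estimation (which requires repeated stochastic forward passes or retraining), use the model's sensitivity to small input perturbations as a surrogate for uncertainty. A hedged, generic sketch of that idea; the noise scale, number of perturbations, and use of the output standard deviation are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def sensitivity_uncertainty(model, x, noise_scale=0.01, n_perturb=8, seed=0):
    """Estimate per-output uncertainty as the std of predictions under
    small random input perturbations -- no retraining or dropout needed."""
    rng = np.random.default_rng(seed)
    preds = np.stack([
        model(x + rng.normal(scale=noise_scale, size=x.shape))
        for _ in range(n_perturb)
    ])
    return preds.std(axis=0)  # high std => locally sensitive => uncertain

# toy dense-regression "model": flat in one region, steep in another
model = lambda x: np.where(x < 0, 0.1 * x, 10.0 * x)
x = np.array([-1.0, 1.0])
u = sensitivity_uncertainty(model, x)
# the steep region should register as far more uncertain than the flat one
```

Because only forward passes on perturbed inputs are needed, this surrogate is "training-free" in the sense that it can be applied to any already-trained regressor.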

Contrastive Multiview Coding

6 code implementations · ECCV 2020 · Yonglong Tian, Dilip Krishnan, Phillip Isola

We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.

Contrastive Learning · Self-Supervised Action Recognition +1
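The contrastive loss compared here against cross-view prediction is the InfoNCE-style objective: embeddings of two views of the same instance are pulled together while being pushed apart from other instances in the batch. A minimal two-view sketch in NumPy (names and the temperature are illustrative, not from the paper's code):

```python
import numpy as np

def multiview_contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss between two views: z1[i] should match z2[i]
    against all other samples in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) cross-view similarity matrix
    # cross-entropy with the diagonal (matching pairs) as the targets
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
z1 = base + 0.05 * rng.normal(size=base.shape)  # view 1 of each instance
z2 = base + 0.05 * rng.normal(size=base.shape)  # view 2 of each instance
loss = multiview_contrastive_loss(z1, z2)
```

With more than two views, the same pairwise objective can be summed over all view pairs, which is consistent with the abstract's observation that adding views improves the learned representation.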

ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees

1 code implementation · ICLR 2019 · Hao He, Hao Wang, Guang-He Lee, Yonglong Tian

Probabilistic modelling is a principled framework for model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GANs).

Image Generation

Learning to Infer and Execute 3D Shape Programs

no code implementations · ICLR 2019 · Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu

Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.

Representation Learning on Graphs with Jumping Knowledge Networks

3 code implementations · ICML 2018 · Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, Stefanie Jegelka

Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.

Graph Attention · Node Classification +2
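The Jumping Knowledge (JK) framework mentioned above aggregates node representations from all GNN layers (by concatenation, element-wise max, or an LSTM-attention combiner) instead of using only the final layer, which is why it composes with GCN, GraphSAGE, and GAT. A minimal NumPy sketch of the two simplest aggregation modes, assuming per-layer node features have already been computed:

```python
import numpy as np

def jk_aggregate(layer_reps, mode="concat"):
    """Combine per-layer node representations, a list of L arrays of
    shape (N, D), as in Jumping Knowledge networks."""
    if mode == "concat":
        return np.concatenate(layer_reps, axis=1)    # (N, L*D)
    if mode == "max":
        return np.max(np.stack(layer_reps), axis=0)  # (N, D), element-wise
    raise ValueError(f"unknown mode: {mode}")

# toy per-layer features for 4 nodes with hidden size 3 from a 2-layer GNN
h1 = np.ones((4, 3))
h2 = 2 * np.ones((4, 3))
cat = jk_aggregate([h1, h2])             # shape (4, 6)
mx = jk_aggregate([h1, h2], mode="max")  # shape (4, 3)
```

Letting each node draw on shallow as well as deep layers is what allows JK to adapt the effective neighborhood range per node.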

Through-Wall Human Pose Estimation Using Radio Signals

no code implementations · CVPR 2018 · Ming-Min Zhao, Tianhong Li, Mohammad Abu Alsheikh, Yonglong Tian, Hang Zhao, Antonio Torralba, Dina Katabi

Yet, unlike vision-based pose estimation, the radio-based system can estimate 2D poses through walls, despite never being trained on such scenarios.

RF-based Pose Estimation

Deep Learning Strong Parts for Pedestrian Detection

no code implementations · ICCV 2015 · Yonglong Tian, Ping Luo, Xiaogang Wang, Xiaoou Tang

Third, each part detector in DeepParts is a strong detector that can detect pedestrians by observing only a part of a proposal.

Occlusion Handling · Pedestrian Detection

Pedestrian Detection aided by Deep Learning Semantic Tasks

no code implementations · CVPR 2015 · Yonglong Tian, Ping Luo, Xiaogang Wang, Xiaoou Tang

Rather than expensively annotating scene attributes, we transfer attribute information from existing scene segmentation datasets to the pedestrian dataset by proposing a novel deep model that learns high-level features from multiple tasks and multiple data sources.

Pedestrian Detection · Scene Segmentation

DeepID-Net: Multi-Stage and Deformable Deep Convolutional Neural Networks for Object Detection

no code implementations · 11 Sep 2014 · Wanli Ouyang, Ping Luo, Xingyu Zeng, Shi Qiu, Yonglong Tian, Hongsheng Li, Shuo Yang, Zhe Wang, Yuanjun Xiong, Chen Qian, Zhenyao Zhu, Ruohui Wang, Chen-Change Loy, Xiaogang Wang, Xiaoou Tang

In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty.

Object Detection
