Search Results for author: Junnan Li

Found 19 papers, 10 papers with code

Noise-Robust Contrastive Learning

no code implementations1 Jan 2021 Junnan Li, Caiming Xiong, Steven Hoi

In contrast to most existing methods, we combat noise by learning robust representations.

Contrastive Learning

MoPro: Webly Supervised Learning with Momentum Prototypes

1 code implementation ICLR 2021 Junnan Li, Caiming Xiong, Steven C. H. Hoi

We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning.

Contrastive Learning Image Classification +2
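
The MoPro entry above describes online label-noise correction with momentum prototypes. The snippet below is a minimal, hypothetical sketch of that idea in PyTorch, not the authors' released code; the momentum m and the confidence threshold tau are illustrative assumptions.

import torch
import torch.nn.functional as F

# Hypothetical sketch of momentum prototypes with online label correction,
# loosely following the idea in the abstract above. The momentum m and the
# confidence threshold tau are illustrative assumptions.

def update_prototypes(prototypes, embeddings, labels, m=0.999):
    # Exponential-moving-average update of one prototype per class.
    for z, y in zip(embeddings, labels):
        prototypes[y] = m * prototypes[y] + (1.0 - m) * z
    return F.normalize(prototypes, dim=1)

def correct_labels(prototypes, embeddings, labels, tau=0.8):
    # Relabel a sample with its nearest prototype when confidence is high;
    # otherwise keep the (possibly noisy) web label.
    sims = embeddings @ prototypes.t()                  # cosine similarities
    conf, nearest = sims.softmax(dim=1).max(dim=1)
    return torch.where(conf > tau, nearest, labels)

# Toy usage with random features standing in for encoder outputs.
num_classes, dim = 10, 128
prototypes = F.normalize(torch.randn(num_classes, dim), dim=1)
embeddings = F.normalize(torch.randn(32, dim), dim=1)
labels = torch.randint(0, num_classes, (32,))

prototypes = update_prototypes(prototypes, embeddings, labels)
labels = correct_labels(prototypes, embeddings, labels)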

The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation

1 code implementation ECCV 2020 Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Junhao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

Specifically, we systematically investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset, and unveil that a major cause is the inaccurate classification of object proposals.

General Classification Instance Segmentation +2
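
Since the reported cause is inaccurate classification of proposals for tail classes, one remedy consistent with this finding is to recalibrate only the classification head on class-balanced batches of proposal features. The snippet below is an illustrative sketch of that idea, not the paper's implementation; the feature dimension, class count, and per-class batch quota are assumptions.

import random
from collections import defaultdict
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's implementation) of recalibrating only the
# proposal classification head on class-balanced batches, so that tail classes
# are sampled as often as head classes.

def class_balanced_batch(feats, labels, per_class=4):
    by_class = defaultdict(list)
    for i, y in enumerate(labels.tolist()):
        by_class[y].append(i)
    idx = [i for ids in by_class.values()
           for i in random.choices(ids, k=per_class)]   # sample with replacement
    idx = torch.tensor(idx)
    return feats[idx], labels[idx]

feat_dim, num_classes = 1024, 1000                      # illustrative long-tail label space
head = nn.Linear(feat_dim, num_classes)                 # classification head only
opt = torch.optim.SGD(head.parameters(), lr=0.01)

proposal_feats = torch.randn(5000, feat_dim)            # frozen RoI features (toy)
proposal_labels = torch.randint(0, num_classes, (5000,))

for _ in range(10):
    x, y = class_balanced_batch(proposal_feats, proposal_labels)
    loss = nn.functional.cross_entropy(head(x), y)
    opt.zero_grad(); loss.backward(); opt.step()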

Prototypical Contrastive Learning of Unsupervised Representations

2 code implementations ICLR 2021 Junnan Li, Pan Zhou, Caiming Xiong, Steven C. H. Hoi

This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning.

Contrastive Learning Self-Supervised Image Classification +3
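
The PCL entry contrasts samples against cluster prototypes rather than only against other instances. The following is a rough sketch of a prototype-based contrastive loss in that spirit, assuming k-means centroids as prototypes; the cluster count k and the temperature are illustrative, not values from the paper.

import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

# Rough sketch of a prototype-based contrastive loss: cluster the embeddings,
# then pull each sample toward its cluster centroid and away from the other
# centroids. k and temperature are illustrative assumptions.

def prototypical_loss(embeddings, k=50, temperature=0.3):
    z = F.normalize(embeddings, dim=1)
    km = KMeans(n_clusters=k, n_init=10).fit(z.detach().cpu().numpy())
    protos = F.normalize(torch.tensor(km.cluster_centers_, dtype=z.dtype), dim=1)
    assign = torch.tensor(km.labels_, dtype=torch.long)
    logits = z @ protos.t() / temperature               # similarity to every prototype
    return F.cross_entropy(logits, assign)              # InfoNCE over prototypes

# Toy usage
features = torch.randn(256, 128, requires_grad=True)    # encoder outputs
loss = prototypical_loss(features)
loss.backward()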

Improving out-of-distribution generalization via multi-task self-supervised pretraining

no code implementations30 Mar 2020 Isabela Albuquerque, Nikhil Naik, Junnan Li, Nitish Keskar, Richard Socher

Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness.

Domain Generalization Few-Shot Learning +2

Towards Noise-resistant Object Detection with Noisy Annotations

no code implementations3 Mar 2020 Junnan Li, Caiming Xiong, Richard Socher, Steven Hoi

We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise.

Object Detection

DivideMix: Learning with Noisy Labels as Semi-supervised Learning

1 code implementation ICLR 2020 Junnan Li, Richard Socher, Steven C. H. Hoi

Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data.

Ranked #5 on Image Classification on Clothing1M (using extra training data)

Learning with noisy labels
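
DivideMix combines the two directions mentioned above by splitting the training set into a clean (labeled) part and a noisy (unlabeled) part. Below is a minimal sketch of the kind of loss-based split it relies on, assuming a two-component Gaussian mixture fitted to per-sample losses; the probability threshold p_clean is an illustrative choice, not the paper's setting.

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

# Minimal sketch of a loss-based clean/noisy split: fit a two-component Gaussian
# mixture to per-sample losses and treat samples likely to come from the
# low-loss component as clean (labeled), the rest as unlabeled.

def split_clean_noisy(logits, labels, p_clean=0.5):
    losses = F.cross_entropy(logits, labels, reduction="none").detach().cpu().numpy()
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100).fit(losses.reshape(-1, 1))
    clean_comp = int(np.argmin(gmm.means_))             # lower mean loss = clean
    prob = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
    return torch.from_numpy(prob > p_clean), torch.from_numpy(prob)

# Toy usage: indices kept as labeled vs. sent to the unlabeled set.
logits = torch.randn(64, 10)
labels = torch.randint(0, 10, (64,))
clean_mask, clean_prob = split_clean_noisy(logits, labels)
labeled_idx = clean_mask.nonzero(as_tuple=True)[0]
unlabeled_idx = (~clean_mask).nonzero(as_tuple=True)[0]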

Weakly-Supervised Multi-Person Action Recognition in 360° Videos

no code implementations9 Feb 2020 Junnan Li, Jianquan Liu, Yongkang Wong, Shoji Nishimura, Mohan Kankanhalli

To enable research in this direction, we introduce 360Action, the first omnidirectional video dataset for multi-person action recognition.

Action Localization Action Recognition +1

GradMix: Multi-source Transfer across Domains and Tasks

no code implementations9 Feb 2020 Junnan Li, Ziwei Xu, Yongkang Wong, Qi Zhao, Mohan Kankanhalli

Therefore, it is important to develop algorithms that can leverage off-the-shelf labeled datasets to learn useful knowledge for the target task.

Action Recognition Meta-Learning +1

Classification Calibration for Long-tail Instance Segmentation

1 code implementation29 Oct 2019 Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Jun Hao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

In this report, we investigate the performance drop of state-of-the-art two-stage instance segmentation models when processing extremely long-tailed training data based on the LVIS [5] dataset, and find that a major cause is the inaccurate classification of object proposals.

General Classification Instance Segmentation +1

Learning to Learn from Noisy Labeled Data

1 code implementation CVPR 2019 Junnan Li, Yongkang Wong, Qi Zhao, Mohan Kankanhalli

Despite the success of deep neural networks (DNNs) in image classification tasks, the human-level performance relies on massive training data with high-quality manual annotations, which are expensive and time-consuming to collect.

Ranked #15 on Image Classification on Clothing1M (using extra training data)

Learning with noisy labels Meta-Learning

Unsupervised Learning of View-invariant Action Representations

1 code implementation NeurIPS 2018 Junnan Li, Yongkang Wong, Qi Zhao, Mohan S. Kankanhalli

Different from previous works in video representation learning, our unsupervised learning task is to predict 3D motion in multiple target views using video representation from a source view.

Action Recognition Representation Learning
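
As a highly simplified sketch of the stated pretext task (predicting motion in target views from a source-view representation), one could condition a decoder on a learned view embedding, as below; the architecture, number of views, and motion output format are assumptions for illustration only, not the paper's model.

import torch
import torch.nn as nn

# Hypothetical sketch: predict motion for a chosen target view from a
# source-view representation plus a learned view embedding. Layer sizes,
# view count and motion format are illustrative assumptions.

class CrossViewMotionPredictor(nn.Module):
    def __init__(self, feat_dim=512, num_views=5, motion_dim=3 * 16 * 16):
        super().__init__()
        self.view_embed = nn.Embedding(num_views, 64)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 64, 512), nn.ReLU(),
            nn.Linear(512, motion_dim),                 # flattened motion map
        )

    def forward(self, source_feat, target_view):
        v = self.view_embed(target_view)                # (B, 64)
        return self.decoder(torch.cat([source_feat, v], dim=1))

# Toy usage: regress target-view motion from source-view clip features.
model = CrossViewMotionPredictor()
source_feat = torch.randn(8, 512)
target_view = torch.randint(0, 5, (8,))
target_motion = torch.randn(8, 3 * 16 * 16)
loss = nn.functional.mse_loss(model(source_feat, target_view), target_motion)
loss.backward()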

Interact as You Intend: Intention-Driven Human-Object Interaction Detection

no code implementations29 Aug 2018 Bingjie Xu, Junnan Li, Yongkang Wong, Mohan S. Kankanhalli, Qi Zhao

The recent advances in instance-level detection tasks lay a strong foundation for genuine comprehension of visual scenes.

Human-Object Interaction Detection

Video Storytelling: Textual Summaries for Events

no code implementations25 Jul 2018 Junnan Li, Yongkang Wong, Qi Zhao, Mohan S. Kankanhalli

Video storytelling introduces new challenges, mainly due to the diversity of the story and the length and complexity of the video.

Attention Transfer from Web Images for Video Recognition

no code implementations3 Aug 2017 Junnan Li, Yongkang Wong, Qi Zhao, Mohan Kankanhalli

However, due to the domain shift problem, the performance of deep classifiers trained on Web images tends to degrade when they are directly deployed to videos.

Action Recognition Video Recognition

Dual-Glance Model for Deciphering Social Relationships

1 code implementation ICCV 2017 Junnan Li, Yongkang Wong, Qi Zhao, Mohan S. Kankanhalli

Since the beginning of early civilizations, the social relationships derived from individuals have fundamentally formed the basis of the social structure in our daily lives.

Object Detection Scene Understanding +1
