Search Results for author: Steven Hoi

Found 27 papers, 5 papers with code

A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning

no code implementations • 28 Jun 2021 • Pan Zhou, Caiming Xiong, Xiao-Tong Yuan, Steven Hoi

Although intuitive, such a naive label assignment strategy cannot reveal the underlying semantic similarity between a query and its positives and negatives, and impairs performance, since some negatives are semantically similar to the query or even share the same semantic class as the query.

Contrastive Learning • Representation Learning • +2
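The "naive label assignment" the abstract criticizes is the standard InfoNCE setup, where the positive key gets a hard label of 1 and every negative a hard 0 regardless of semantic similarity. A minimal NumPy sketch of that baseline (illustrative only, not the paper's code):

```python
import numpy as np

def info_nce_loss(query, positive, negatives, tau=0.1):
    """Standard InfoNCE: the positive gets a one-hot target of 1 and
    every negative a hard 0, even if a negative is semantically close."""
    keys = np.vstack([positive, negatives])   # (1+K, d), positive first
    logits = keys @ query / tau               # temperature-scaled similarities
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # cross-entropy against the one-hot target

rng = np.random.default_rng(0)
q = rng.normal(size=8); q /= np.linalg.norm(q)
pos = q + 0.1 * rng.normal(size=8); pos /= np.linalg.norm(pos)
negs = rng.normal(size=(4, 8)); negs /= np.linalg.norm(negs, axis=1, keepdims=True)
loss = info_nce_loss(q, pos, negs)
```

The paper's refinement replaces that one-hot target with softer labels that reflect query-key semantic similarity; the sketch above only shows the baseline being improved upon.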

Detection and Rectification of Arbitrary Shaped Scene Texts by using Text Keypoints and Links

no code implementations • 1 Mar 2021 • Chuhui Xue, Shijian Lu, Steven Hoi

Detection and recognition of scene texts of arbitrary shapes remain a grand challenge due to the extremely rich variation in text shapes, with diverse line orientations, lengths, and curvatures.

Rectification • Scene Text • +1

Contextual Transformation Networks for Online Continual Learning

no code implementations • ICLR 2021 • Quang Pham, Chenghao Liu, Doyen Sahoo, Steven Hoi

Continual learning methods with fixed architectures rely on a single network to learn models that can perform well on all tasks.

Continual Learning • Transfer Learning

Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Learning Beyond Global Prior

no code implementations • 1 Jan 2021 • Chenghao Liu, Tao Lu, Doyen Sahoo, Yuan Fang, Kun Zhang, Steven Hoi

Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption.


Online Continual Learning Under Domain Shift

no code implementations • 1 Jan 2021 • Quang Pham, Chenghao Liu, Steven Hoi

CIER employs adversarial training to correct the shift in $P(X, Y)$ by matching $P(X|Y)$, which yields an invariant representation that can generalize to unseen domains during inference.

Continual Learning

Noise-Robust Contrastive Learning

no code implementations • 1 Jan 2021 • Junnan Li, Caiming Xiong, Steven Hoi

In contrast to most existing methods, we combat noise by learning robust representation.

Contrastive Learning

PolarNet: Learning to Optimize Polar Keypoints for Keypoint Based Object Detection

no code implementations • ICLR 2021 • Wu Xiongwei, Doyen Sahoo, Steven Hoi

Despite achieving promising performance on par with anchor-based detectors, existing anchor-free detectors such as FCOS or CenterNet predict objects based on standard Cartesian coordinates, which often yields poor-quality keypoints.

Object Detection
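The Cartesian-versus-polar distinction the abstract draws can be illustrated with a plain coordinate transform. The paper's exact keypoint parameterization is not given here, so this is just the underlying geometry — a keypoint expressed as (radius, angle) relative to an object center instead of raw (x, y):

```python
import math

def to_polar(x, y, cx, cy):
    """Convert a Cartesian keypoint (x, y) to polar coordinates
    (radius, angle) relative to an object center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)   # angle in radians, in (-pi, pi]
    return r, theta

def to_cartesian(r, theta, cx, cy):
    """Inverse mapping back to image coordinates."""
    return cx + r * math.cos(theta), cy + r * math.sin(theta)

# Keypoint offset (3, 4) from a center at (10, 10): radius 5.
r, theta = to_polar(13.0, 14.0, 10.0, 10.0)
x, y = to_cartesian(r, theta, 10.0, 10.0)
```

A network predicting (r, theta) per keypoint regresses quantities that are centered on the object, which is the representational change the title refers to.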

VilNMN: A Neural Module Network approach to Video-Grounded Language Tasks

no code implementations • 1 Jan 2021 • Hung Le, Nancy F. Chen, Steven Hoi

Neural module networks (NMN) have achieved success in image-grounded tasks such as question answering (QA) on synthetic images.

Information Retrieval • Question Answering

Adapt-and-Adjust: Overcoming the Long-Tail Problem of Multilingual Speech Recognition

no code implementations • 3 Dec 2020 • Genta Indra Winata, Guangsen Wang, Caiming Xiong, Steven Hoi

One crucial challenge of real-world multilingual speech recognition is the long-tailed distribution problem, where some resource-rich languages like English have abundant training data, but a long tail of low-resource languages have varying amounts of limited training data.

Language Modelling • Multi-Task Learning • +1

Towards Theoretically Understanding Why SGD Generalizes Better Than ADAM in Deep Learning

no code implementations • NeurIPS 2020 • Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Hoi, Weinan E

The result shows that (1) the escaping time of both SGD and ADAM depends positively on the Radon measure of the basin and negatively on the heaviness of the gradient noise; (2) for the same basin, SGD enjoys a smaller escaping time than ADAM, mainly because (a) the geometry adaptation in ADAM, via adaptively scaling each gradient coordinate, diminishes the anisotropic structure in the gradient noise and results in a larger Radon measure of the basin; (b) the exponential gradient average in ADAM smooths its gradient and leads to lighter gradient-noise tails than SGD.
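Point (b) — the exponential gradient average lightening the noise — is easy to see numerically. A toy sketch comparing the raw noisy gradients SGD would consume against Adam's first-moment EMA (illustrative only, not the paper's analysis; the noise scale and beta1 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_grad = 1.0
noisy_grads = true_grad + rng.normal(scale=2.0, size=1000)  # heavy per-step noise

# SGD uses each noisy gradient directly; Adam first smooths it with an
# exponential moving average (beta1), which shrinks the noise around the
# true gradient and lightens the tails.
beta1 = 0.9
m = 0.0
smoothed = []
for g in noisy_grads:
    m = beta1 * m + (1 - beta1) * g
    smoothed.append(m)
smoothed = np.array(smoothed)

raw_std = noisy_grads.std()        # spread of the gradients SGD sees
ema_std = smoothed[100:].std()     # spread of Adam's smoothed estimate (after warm-up)
```

The paper's point is that this smoothing, while stabilizing, makes Adam's effective noise lighter-tailed, and heavier-tailed noise is precisely what helps SGD escape sharp basins faster.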

Partially Observable Online Change Detection via Smooth-Sparse Decomposition

no code implementations • 22 Sep 2020 • Jie Guo, Hao Yan, Chen Zhang, Steven Hoi

We consider online change detection of high dimensional data streams with sparse changes, where only a subset of data streams can be observed at each sensing time point due to limited sensing capacities.

Bayesian Inference
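The partial-observation setting can be mimicked with per-stream CUSUM statistics that are updated only for the streams sensed at each step. The paper's smooth-sparse decomposition and its sensing policy are far more sophisticated than this; the random sensing, drift, and threshold below are made-up parameters, so treat this purely as a sketch of the problem setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n_streams, T, budget = 10, 200, 3          # sense only 3 of 10 streams per step
data = rng.normal(size=(T, n_streams))
data[120:, 2] += 1.5                        # sparse mean shift in a single stream

# One CUSUM statistic per stream, updated only when that stream is observed.
stats = np.zeros(n_streams)
drift, threshold = 0.5, 8.0
alarm_at = None
for t in range(T):
    observed = rng.choice(n_streams, size=budget, replace=False)
    stats[observed] = np.maximum(0.0, stats[observed] + data[t, observed] - drift)
    if alarm_at is None and stats.max() > threshold:
        alarm_at = t
```

Because only a subset of streams is sensed, detection is delayed relative to full observation — which is exactly the trade-off the paper's adaptive sampling strategy is designed to manage.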

The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation

1 code implementation • ECCV 2020 • Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Junhao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

Specifically, we systematically investigate performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset, and unveil that a major cause is the inaccurate classification of object proposals.

General Classification • Instance Segmentation • +2
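The "inaccurate classification of object proposals" identified here is a classifier-bias problem: head classes dominate the softmax. A common post-hoc remedy is logit adjustment by class priors — shown below for illustration only, not as this paper's calibration method:

```python
import numpy as np

def frequency_calibrated_probs(logits, class_counts, tau=1.0):
    """Subtract log class priors from the logits so that head classes
    no longer win ties purely by training frequency (logit adjustment)."""
    prior = np.asarray(class_counts, dtype=float)
    prior /= prior.sum()
    adjusted = logits - tau * np.log(prior)
    z = adjusted - adjusted.max()          # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 2.0, 2.0])         # classifier scores all classes equally
counts = [1000, 100, 10]                    # long-tailed training distribution
probs = frequency_calibrated_probs(logits, counts)
```

With equal logits, the adjustment redistributes probability toward the tail classes, counteracting the frequency bias that the abstract blames for the performance drop.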

Extreme Low-Light Imaging with Multi-granulation Cooperative Networks

no code implementations • 16 May 2020 • Keqi Wang, Peng Gao, Steven Hoi, Qian Guo, Yuhua Qian

Low-light imaging is challenging since images may appear dark and noisy due to low signal-to-noise ratio, complex image content, and the variety of shooting scenes in extreme low-light conditions.

TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue

1 code implementation • EMNLP 2020 • Chien-Sheng Wu, Steven Hoi, Richard Socher, Caiming Xiong

The underlying difference of linguistic patterns between general text and task-oriented dialogue makes existing pre-trained language models less useful in practice.

Dialogue State Tracking • Intent Detection • +2

Towards Noise-resistant Object Detection with Noisy Annotations

no code implementations • 3 Mar 2020 • Junnan Li, Caiming Xiong, Richard Socher, Steven Hoi

We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise.

Object Detection

Classification Calibration for Long-tail Instance Segmentation

1 code implementation • 29 Oct 2019 • Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Jun Hao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

In this report, we investigate the performance drop phenomenon of state-of-the-art two-stage instance segmentation models when processing extreme long-tail training data based on the LVIS [5] dataset, and find a major cause is the inaccurate classification of object proposals.

General Classification • Instance Segmentation • +1

Teacher-Students Knowledge Distillation for Siamese Trackers

no code implementations • 24 Jul 2019 • Yuanpei Liu, Xingping Dong, Xiankai Lu, Fahad Shahbaz Khan, Jianbing Shen, Steven Hoi

To the best of our knowledge, we are the first to investigate knowledge distillation for Siamese trackers and propose a distilled Siamese tracking framework.

Knowledge Distillation • Object Tracking

DART: Domain-Adversarial Residual-Transfer Networks for Unsupervised Cross-Domain Image Classification

no code implementations • 30 Dec 2018 • Xianghong Fang, Haoli Bai, Ziyi Guo, Bin Shen, Steven Hoi, Zenglin Xu

In this paper, we propose a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART) learning of Deep Neural Networks to tackle cross-domain image classification tasks.

General Classification • Image Classification • +1

Dynamic Fusion with Intra- and Inter- Modality Attention Flow for Visual Question Answering

no code implementations • 13 Dec 2018 • Gao Peng, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven Hoi, Xiaogang Wang, Hongsheng Li

It can robustly capture the high-level interactions between the language and vision domains, thus significantly improving the performance of visual question answering.

Question Answering • Visual Question Answering

Question-Guided Hybrid Convolution for Visual Question Answering

no code implementations • ECCV 2018 • Peng Gao, Pan Lu, Hongsheng Li, Shuang Li, Yikang Li, Steven Hoi, Xiaogang Wang

Most state-of-the-art VQA methods fuse the high-level textual and visual features from the neural network and abandon the visual spatial information when learning multi-modal features. To address these problems, question-guided kernels generated from the input question are designed to convolve with visual features for capturing the textual and visual relationship in the early stage.

Question Answering • Visual Question Answering
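Question-guided convolution boils down to predicting convolution weights from the question embedding instead of learning them as fixed parameters. A minimal 1x1-kernel version in NumPy (the paper's question-guided hybrid convolution is richer; the shapes and the linear generator here are made-up placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
q_emb = rng.normal(size=16)                 # question embedding (assumed dim 16)
vis = rng.normal(size=(32, 7, 7))           # visual feature map (C, H, W)

# A linear "kernel generator" maps the question embedding to the weights of
# a 1x1 convolution (out_channels x in_channels); applying a 1x1 conv is
# just a per-pixel matrix multiply over the channel dimension.
W_gen = rng.normal(scale=0.1, size=(8 * 32, 16))
kernel = (W_gen @ q_emb).reshape(8, 32)     # question-conditioned 1x1 kernel
out = np.einsum('oc,chw->ohw', kernel, vis) # convolve the visual features with it
```

Because the kernel is a function of the question, the visual features are filtered differently for each question — the early text-vision coupling the abstract describes.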

Active Learning with Expert Advice

no code implementations • 26 Sep 2013 • Peilin Zhao, Steven Hoi, Jinfeng Zhuang

In this paper, we address a new problem of active learning with expert advice, where the outcome of an instance is disclosed only when it is requested by the online learner.

Active Learning
