1 code implementation • CVPR 2023 • Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, Rynson Lau
As the core building block of vision transformers, attention is a powerful tool for capturing long-range dependencies (a minimal sketch of this building block follows the benchmark entry below).
Ranked #3 on Object Detection on COCO 2017 (mAP metric)
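For reference, here is a minimal sketch of standard scaled dot-product attention, the generic building block the abstract refers to; it is not the paper's specific attention variant, and the shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, num_tokens, dim)
    scale = q.shape[-1] ** -0.5
    attn = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)  # (B, N, N)
    return attn @ v  # each token aggregates information from all others

q = k = v = torch.randn(2, 196, 64)   # e.g. 14x14 patch tokens
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 196, 64])
```

Because every token attends to every other token, the output at one position can depend on arbitrarily distant positions, which is what "long-range dependency" means here.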
1 code implementation • CVPR 2023 • Xinjiang Wang, Zeyu Liu, Yu Hu, Wei Xi, Wenxian Yu, Danping Zou
We introduce a lightweight network to improve descriptors of keypoints within the same image.
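A hedged sketch of the general recipe described here: refine each keypoint descriptor by letting it attend to all other descriptors in the same image. The layer sizes and the single Transformer encoder layer are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DescriptorRefiner(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, desc):           # desc: (B, num_keypoints, dim)
        return self.encoder(desc)      # refined descriptors, same shape

desc = torch.randn(1, 500, 128)        # e.g. 500 local feature descriptors
refined = DescriptorRefiner()(desc)
print(refined.shape)                   # torch.Size([1, 500, 128])
```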
1 code implementation • CVPR 2023 • Xinjiang Wang, Xingyi Yang, Shilong Zhang, Yijiang Li, Litong Feng, Shijie Fang, Chengqi Lyu, Kai Chen, Wayne Zhang
In this study, we dive deep into the inconsistency of pseudo targets in semi-supervised object detection (SSOD).
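To ground the term "pseudo targets": in SSOD, a teacher model labels unlabeled images, and only confident detections are kept as training targets for the student. The sketch below shows this standard recipe with an illustrative threshold; the paper studies why targets produced this way are inconsistent:

```python
import torch

def make_pseudo_targets(teacher_outputs, score_thresh=0.9):
    # teacher_outputs: list of dicts with 'boxes' (N,4), 'scores' (N,), 'labels' (N,)
    targets = []
    for out in teacher_outputs:
        keep = out['scores'] >= score_thresh     # confidence filtering
        targets.append({'boxes': out['boxes'][keep],
                        'labels': out['labels'][keep]})
    return targets

fake_out = [{'boxes': torch.rand(5, 4), 'scores': torch.rand(5),
             'labels': torch.randint(0, 80, (5,))}]
print(make_pseudo_targets(fake_out))
```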
1 code implementation • 2 Jun 2022 • Shilong Zhang, Xinjiang Wang, Jiaqi Wang, Jiangmiao Pang, Kai Chen
Since both sparse and dense queries are imperfect, what are the expected queries in end-to-end object detection?
1 code implementation • CVPR 2022 • Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen
The core of this task is to train, on well-labeled images, a point-to-box regressor that can predict credible bounding boxes for each point annotation.
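A hedged sketch of what a point-to-box regressor could look like: sample the feature vector at an annotated point and regress a box around it. The MLP head and the (cx, cy, w, h) parameterization are illustrative assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class PointToBoxRegressor(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 4))  # (cx, cy, w, h) box parameters

    def forward(self, feat_map, points):
        # feat_map: (C, H, W); points: (N, 2) integer (x, y) locations
        feats = feat_map[:, points[:, 1], points[:, 0]].t()  # (N, C)
        return self.head(feats)                              # (N, 4) boxes

feat_map = torch.randn(256, 50, 50)
points = torch.tensor([[10, 20], [30, 40]])
print(PointToBoxRegressor()(feat_map, points).shape)  # torch.Size([2, 4])
```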
1 code implementation • 8 Sep 2021 • Tao Gong, Kai Chen, Xinjiang Wang, Qi Chu, Feng Zhu, Dahua Lin, Nenghai Yu, Huamin Feng
In this work, considering that the features of the same object instance are highly similar across the frames of a video, a novel Temporal RoI Align operator is proposed to extract features from other frames' feature maps for current-frame proposals by exploiting feature similarity (a simplified sketch follows the benchmark entry below).
Ranked #1 on Video Instance Segmentation on YouTube-VIS
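A loose sketch of the idea behind such an operator: for each cell of the current frame's RoI feature, aggregate features from another frame's feature map, weighted by feature similarity. The real operator involves top-K selection and multi-frame aggregation; this single-frame, softmax-weighted version is only illustrative:

```python
import torch
import torch.nn.functional as F

def temporal_roi_align(roi_feat, support_map):
    # roi_feat: (C, h, w) RoI feature from the current frame
    # support_map: (C, H, W) feature map from another frame
    C, h, w = roi_feat.shape
    q = roi_feat.reshape(C, -1).t()            # (h*w, C) queries
    k = support_map.reshape(C, -1).t()         # (H*W, C) candidates
    sim = F.softmax(F.normalize(q, dim=1) @ F.normalize(k, dim=1).t(), dim=1)
    out = sim @ k                              # similarity-weighted aggregation
    return out.t().reshape(C, h, w)

roi = torch.randn(256, 7, 7)
support = torch.randn(256, 25, 25)
print(temporal_roi_align(roi, support).shape)  # torch.Size([256, 7, 7])
```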
2 code implementations • 2 Aug 2021 • Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, Wayne Zhang
Our method can be used to prune any structures including those with coupled channels.
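An illustrative example of why channels can be "coupled": two conv outputs joined by a residual add must be pruned with the same mask, so their output channels form a single pruning group. The group below is hand-written for the example; the paper's contribution is handling such groups automatically:

```python
import torch
import torch.nn as nn

conv_a = nn.Conv2d(16, 32, 3, padding=1)
conv_b = nn.Conv2d(16, 32, 3, padding=1)

keep = torch.ones(32, dtype=torch.bool)
keep[[3, 7, 19]] = False               # prune the same channels in both convs

x = torch.randn(1, 16, 8, 8)
y = conv_a(x)[:, keep] + conv_b(x)[:, keep]   # the residual add still lines up
print(y.shape)                                # torch.Size([1, 29, 8, 8])
```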
no code implementations • 21 May 2021 • Shijie Fang, Yuhang Cao, Xinjiang Wang, Kai Chen, Dahua Lin, Wayne Zhang
The performance of object detection, to a great extent, depends on the availability of large annotated datasets.
no code implementations • NeurIPS 2021 • Zhongzhan Huang, Xinjiang Wang, Ping Luo
Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), and various pruning criteria have been proposed to remove the redundant filters of CNNs.
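For context, here is a minimal sketch of one classic pruning criterion (the L1 norm of each filter); the paper analyzes criteria of this kind, so the criterion below is a representative example rather than the paper's proposal:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3)
importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per filter
n_prune = 32
prune_idx = importance.argsort()[:n_prune]   # drop the least important filters
print(prune_idx.shape)                       # torch.Size([32])
```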
1 code implementation • 2 Sep 2020 • Sirui Xie, Shoukang Hu, Xinjiang Wang, Chunxiao Liu, Jianping Shi, Xunying Liu, Dahua Lin
To this end, we pose questions that future differentiable methods for neural wiring discovery need to confront, hoping to provoke discussion and a rethinking of how much bias has been implicitly enforced in existing NAS methods.
2 code implementations • CVPR 2020 • Xinjiang Wang, Shilong Zhang, Zhuoran Yu, Litong Feng, Wayne Zhang
Inspired by this, this study proposes a convolution across pyramid levels, termed pyramid convolution, which is a modified 3-D convolution (a simplified sketch follows the benchmark entry below).
Ranked #88 on Object Detection on COCO test-dev
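A simplified sketch of a convolution across pyramid levels: each output level combines 2-D convolutions of itself and its neighbors (resampled to its size), i.e. a 3-tap "3-D" convolution along the level axis. Kernel sizes and the resampling scheme here are illustrative; the paper's pyramid convolution handles scale alignment more carefully:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConv(nn.Module):
    def __init__(self, ch=256):
        super().__init__()
        # one 2-D kernel per tap of the size-3 kernel along the level axis
        self.taps = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1) for _ in range(3))

    def forward(self, feats):                   # feats: list of (B, C, H_l, W_l)
        outs = []
        for l, f in enumerate(feats):
            out = self.taps[1](f)
            if l > 0:                           # finer neighbor: downsample
                out = out + F.adaptive_avg_pool2d(self.taps[0](feats[l - 1]), f.shape[-2:])
            if l < len(feats) - 1:              # coarser neighbor: upsample
                out = out + F.interpolate(self.taps[2](feats[l + 1]), size=f.shape[-2:])
            outs.append(out)
        return outs

feats = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
print([o.shape[-1] for o in PyramidConv()(feats)])  # [64, 32, 16]
```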
no code implementations • 24 Apr 2020 • Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo
Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters.
1 code implementation • 21 Apr 2020 • Wenjie Li, Zhaoyang Zhang, Xinjiang Wang, Ping Luo
Although adaptive optimization algorithms such as Adam show fast convergence in many machine learning tasks, this paper identifies a problem with Adam by analyzing its performance on a simple non-convex synthetic problem, showing that Adam's fast convergence can lead the algorithm to poor local minima.
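An illustrative setup only (not the paper's exact synthetic problem): run Adam on a simple non-convex 1-D objective with two minima and observe which basin it settles in depending on the start point:

```python
import torch

def f(x):
    return 0.1 * x**4 - x**2 + 0.5 * x   # non-convex: one global and one local minimum

x = torch.tensor([2.5], requires_grad=True)   # start near the shallower basin
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = f(x)
    loss.backward()
    opt.step()
print(x.item(), f(x).item())   # converges to the nearby (here, local) minimum
```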
no code implementations • 30 Jan 2020 • Sheng Zhou, Xinjiang Wang, Ping Luo, Litong Feng, Wenjie Li, Wei zhang
This phenomenon is caused by the normalization effect of BN, which induces a non-trainable region in the parameter space and reduces the network capacity as a result.
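A small demo of the normalization effect mentioned above: with BN placed right after a linear layer, rescaling that layer's weights by any positive factor leaves the output unchanged, so that direction in parameter space is effectively non-trainable:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(8, 8, bias=False)
bn = nn.BatchNorm1d(8, affine=False)
x = torch.randn(32, 8)

y1 = bn(lin(x))
with torch.no_grad():
    lin.weight.mul_(10.0)        # rescale the weights by a positive constant
y2 = bn(lin(x))
print(torch.allclose(y1, y2, atol=1e-4))  # True: BN cancels the rescaling
```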
no code implementations • NeurIPS 2018 • Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, Liang Lin
In this paper, we present a novel normalization method, called Kalman Normalization (KN), for improving and accelerating the training of DNNs, particularly under the context of micro-batches.
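A very loose sketch of the intuition only (not the paper's exact update): treat the micro-batch statistics of the current layer as a noisy observation and fuse it, Kalman-style, with an estimate propagated from the preceding layer. The fixed gain here is an illustrative stand-in for the gain KN actually estimates:

```python
import torch

def kalman_fused_mean(prev_layer_mean, micro_batch_mean, gain=0.6):
    # gain in [0, 1]: how much to trust the noisy micro-batch observation
    return prev_layer_mean + gain * (micro_batch_mean - prev_layer_mean)

prev = torch.zeros(64)                   # estimate carried from the previous layer
obs = torch.randn(64) * 0.5              # noisy mean from a tiny micro-batch
print(kalman_fused_mean(prev, obs)[:4])
```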
1 code implementation • ICLR 2019 • Ping Luo, Xinjiang Wang, Wenqi Shao, Zhanglin Peng
Batch Normalization (BN) improves both convergence and generalization in training neural networks.
no code implementations • 9 Feb 2018 • Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, Liang Lin
As an indispensable component, Batch Normalization (BN) has successfully improved the training of deep neural networks (DNNs) with mini-batches, by normalizing the distribution of the internal representation for each hidden layer.
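A minimal sketch of what BN does at a hidden layer during training: normalize each channel using the current mini-batch's statistics, then apply a learnable affine transform (gamma, beta):

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, channels); statistics are computed over the batch dimension
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta

x = torch.randn(32, 64)
y = batch_norm(x, gamma=torch.ones(64), beta=torch.zeros(64))
print(y.mean(dim=0).abs().max().item())  # ≈ 0: each channel is centered
```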