no code implementations • 24 Aug 2023 • Wenqi Liang, Gan Sun, Chenxi Liu, Jiahua Dong, Kangru Wang
Meanwhile, current class-incremental 3D object detection methods neglect the relationship between object localization information and category semantic information, and assume that all knowledge of the old model is reliable.
1 code implementation • CVPR 2023 • Yingwei Li, Charles R. Qi, Yin Zhou, Chenxi Liu, Dragomir Anguelov
The MoDAR modality propagates object information from temporal contexts to a target frame, represented as a set of virtual points, one for each object, taken from a waypoint on its forecasted trajectory.
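A minimal sketch of the idea described above, converting forecasted object waypoints into virtual points for the target frame; the field layout and function name are assumptions, not the paper's implementation.

```python
# Sketch: each tracked object is forecast forward; the waypoint landing on the
# target frame becomes one "virtual point" carrying the predicted center plus
# extra feature channels (box size, heading, confidence).
import numpy as np

def make_virtual_points(forecasts, target_time):
    """forecasts: list of dicts with 'trajectory' {t: (x, y, z)}, 'size' (l, w, h),
    'heading', 'score'. Returns an (N, 9) float array of virtual points."""
    points = []
    for obj in forecasts:
        if target_time not in obj["trajectory"]:
            continue
        x, y, z = obj["trajectory"][target_time]        # forecasted waypoint
        l, w, h = obj["size"]
        points.append([x, y, z, l, w, h, obj["heading"], obj["score"], 1.0])
    return np.array(points, dtype=np.float32)

# The virtual points can then be concatenated with the target frame's raw LiDAR
# points (after padding feature channels) before running the detector.
```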
2 code implementations • 25 Jan 2023 • Chenxi Liu, Lixu Wang, Lingjuan Lyu, Chen Sun, Xiao Wang, Qi Zhu
To overcome these limitations of DA and DG in handling the Unfamiliar Period during continual domain shift, we propose RaTP, a framework that focuses on improving models' target domain generalization (TDG) capability, while also achieving effective target domain adaptation (TDA) capability right after training on certain domains and forgetting alleviation (FA) capability on past domains.
no code implementations • 24 Oct 2022 • Zhaoqi Leng, Guowang Li, Chenxi Liu, Ekin Dogus Cubuk, Pei Sun, Tong He, Dragomir Anguelov, Mingxing Tan
Data augmentations are important in training high-performance 3D object detectors for point clouds.
no code implementations • 13 Oct 2022 • Pei Sun, Mingxing Tan, Weiyue Wang, Chenxi Liu, Fei Xia, Zhaoqi Leng, Dragomir Anguelov
3D object detection in point clouds is a core component for modern robotics and autonomous driving systems.
no code implementations • 10 Oct 2022 • Chenxi Liu, Zhaoqi Leng, Pei Sun, Shuyang Cheng, Charles R. Qi, Yin Zhou, Mingxing Tan, Dragomir Anguelov
Developing neural models that accurately understand objects in 3D point clouds is essential for the success of robotics and autonomous driving.
no code implementations • 11 May 2022 • Mao Ye, Chenxi Liu, Maoqing Yao, Weiyue Wang, Zhaoqi Leng, Charles R. Qi, Dragomir Anguelov
While multi-class 3D detectors are needed in many robotics applications, training them with fully labeled datasets can be expensive in labeling cost.
8 code implementations • ICLR 2022 • Zhaoqi Leng, Mingxing Tan, Chenxi Liu, Ekin Dogus Cubuk, Xiaojie Shi, Shuyang Cheng, Dragomir Anguelov
Cross-entropy loss and focal loss are the most common choices when training deep neural networks for classification problems.
Ranked #94 on Image Classification on ImageNet
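For reference, a hedged sketch of the two baseline losses named in the entry above (standard cross-entropy and focal loss), written against PyTorch; this is not the paper's proposed loss.

```python
import torch.nn.functional as F

def cross_entropy_loss(logits, targets):
    # logits: (N, C), targets: (N,) integer class labels
    return F.cross_entropy(logits, targets)

def focal_loss(logits, targets, gamma=2.0):
    # Focal loss down-weights well-classified examples by (1 - p_t)^gamma.
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```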
no code implementations • ICLR 2022 • Jiquan Ngiam, Vijay Vasudevan, Benjamin Caine, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David J Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens
In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents.
1 code implementation • 15 Jun 2021 • Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens
In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents.
no code implementations • CVPR 2021 • Zefan Li, Chenxi Liu, Alan Yuille, Bingbing Ni, Wenjun Zhang, Wen Gao
For a given unsupervised task, we design multilevel tasks and define different learning stages for the deep network.
no code implementations • 20 Apr 2021 • Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles Qi, Yin Zhou, Zoey Yang, Aurelien Chouard, Pei Sun, Jiquan Ngiam, Vijay Vasudevan, Alexander McCauley, Jonathon Shlens, Dragomir Anguelov
Furthermore, we introduce a new set of metrics that provides a comprehensive evaluation of both single agent and joint agent interaction motion forecasting models.
no code implementations • 25 Feb 2021 • Peng Yang, Kun Guo, Xing Xi, Tony Q. S. Quek, Xianbin Cao, Chenxi Liu
Particularly, we first propose to decompose the sequential decision problem into multiple repeated optimization subproblems via a Lyapunov technique.
Networking and Internet Architecture • Signal Processing
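A hedged illustration of the general Lyapunov (drift-plus-penalty) pattern referred to above, where a long-horizon problem is reduced to repeated per-slot optimizations; the paper's actual queues, costs, and constraints are problem-specific and not reproduced here.

```python
def lyapunov_step(Q, candidate_actions, V):
    """Pick the per-slot action minimizing V*cost(a) + Q*violation(a), then
    update the virtual queue that tracks the long-term constraint."""
    best = min(candidate_actions,
               key=lambda a: V * a["cost"] + Q * a["violation"])
    Q_next = max(Q + best["violation"], 0.0)   # virtual queue update
    return best, Q_next
```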
no code implementations • ICCV 2021 • Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R. Qi, Yin Zhou, Zoey Yang, Aurelien Chouard, Pei Sun, Jiquan Ngiam, Vijay Vasudevan, Alexander McCauley, Jonathon Shlens, Dragomir Anguelov
Furthermore, we introduce a new set of metrics that provides a comprehensive evaluation of both single agent and joint agent interaction motion forecasting models.
2 code implementations • ECCV 2020 • Chenxi Liu, Piotr Dollár, Kaiming He, Ross Girshick, Alan Yuille, Saining Xie
Existing neural network architectures in computer vision -- whether designed by humans or by machines -- were typically found using both images and their associated labels.
no code implementations • 25 Nov 2019 • Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille
Unlike the existing strategy of always giving the same (distribution of) test data, the adversarial examiner dynamically selects the next test data to hand out based on the testing history so far, with the goal of undermining the model's performance.
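A minimal sketch of the examination loop described above; the names and the difficulty-scoring rule are assumptions, not the paper's exact procedure.

```python
def adversarial_examination(model, candidate_pool, score_difficulty, num_rounds):
    """Iteratively hand out the test sample predicted to be hardest given the
    failures observed so far, instead of sampling from a fixed distribution."""
    history = []                      # (sample, was_correct) pairs seen so far
    for _ in range(num_rounds):
        # Pick the candidate the examiner believes the model is most likely to
        # fail, conditioned on the accumulated testing history.
        x = max(candidate_pool, key=lambda c: score_difficulty(c, history))
        correct = model(x)            # assumed to return True/False for this probe
        history.append((x, correct))
    return history
```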
1 code implementation • 21 Nov 2019 • Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille
To address this issue, we propose Batch-Channel Normalization (BCN), which uses batch knowledge to avoid the elimination singularities in the training of channel-normalized models.
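A hedged sketch of the idea: batch statistics followed by a channel-wise (group) normalization, so the channel-normalized model still sees batch knowledge. Layer choices are illustrative; the paper's estimation-based batch statistics are not reproduced.

```python
import torch.nn as nn

class BatchChannelNorm2d(nn.Module):
    def __init__(self, num_channels, num_groups=32):
        super().__init__()
        # num_channels must be divisible by num_groups for GroupNorm.
        self.batch_norm = nn.BatchNorm2d(num_channels)               # batch statistics
        self.channel_norm = nn.GroupNorm(num_groups, num_channels)   # channel statistics

    def forward(self, x):
        return self.channel_norm(self.batch_norm(x))
```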
no code implementations • 6 Jun 2019 • Zhuotun Zhu, Chenxi Liu, Dong Yang, Alan Yuille, Daguang Xu
Deep learning algorithms, in particular 2D and 3D fully convolutional neural networks (FCNs), have rapidly become the mainstream methodology for volumetric medical image segmentation.
8 code implementations • 25 Mar 2019 • Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille
Batch Normalization (BN) has become an out-of-the-box technique to improve deep network training.
Ranked #76 on Instance Segmentation on COCO minival
9 code implementations • CVPR 2019 • Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei
Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space.
Ranked #6 on Semantic Segmentation on PASCAL VOC 2012 val
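A hedged sketch of what a two-level (hierarchical) search space can look like, per the entry above: a cell-level choice of operations per edge plus a network-level path over spatial resolutions. Operation names, sizes, and the random sampler are illustrative only, not the paper's search algorithm.

```python
import random

CELL_OPS = ["sep_conv3x3", "atrous_conv3x3", "max_pool3x3", "skip_connect"]
STRIDES = [4, 8, 16, 32]            # candidate output strides per layer

def sample_architecture(num_layers=12, num_cell_edges=10):
    # Cell level: one operation for every edge inside the repeated cell.
    cell = [random.choice(CELL_OPS) for _ in range(num_cell_edges)]
    # Network level: a path over output strides, moving at most one level per layer.
    path = [4]
    for _ in range(num_layers - 1):
        s = path[-1]
        path.append(random.choice([c for c in STRIDES if c in (s // 2, s, s * 2)]))
    return {"cell": cell, "network_path": path}
```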
3 code implementations • CVPR 2019 • Runtao Liu, Chenxi Liu, Yutong Bai, Alan Yuille
Yet there has been evidence that current benchmark datasets suffer from bias, and current state-of-the-art models cannot be easily evaluated on their intermediate reasoning process.
Ranked #1 on Referring Expression Segmentation on CLEVR-Ref+
no code implementations • 7 2018 • Dongxuan He, Chenxi Liu
In this letter, we exploit the potential benefits of machine learning in enhancing physical layer security in multi-input multi-output multi-antenna-eavesdropper wiretap channels.
no code implementations • 10 May 2018 • Alan L. Yuille, Chenxi Liu
We argue that Deep Nets in their current form are unlikely to be able to overcome the fundamental problem of computer vision, namely how to deal with the combinatorial explosion, caused by the enormous complexity of natural images, and obtain the rich understanding of visual scenes that the human visual system achieves.
1 code implementation • NAACL 2018 • Yu-Siang Wang, Chenxi Liu, Xiaohui Zeng, Alan Yuille
The scene graphs generated by our learned neural dependency parser achieve an F-score similarity of 49.67% to ground truth graphs on our evaluation set, surpassing the best previous approaches by 5%.
16 code implementations • ECCV 2018 • Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, Kevin Murphy
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
Ranked #14 on Neural Architecture Search on NAS-Bench-201, ImageNet-16-120 (Accuracy (Val) metric)
no code implementations • CVPR 2019 • Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi Keung Tang, Alan L. Yuille
Though image-space adversaries can be interpreted as per-pixel albedo change, we verify that they cannot be well explained along these physically meaningful dimensions, which often have a non-local effect.
1 code implementation • CVPR 2018 • Siyuan Qiao, Chenxi Liu, Wei Shen, Alan Yuille
In this paper, we are interested in the few-shot learning problem.
no code implementations • ICCV 2017 • Siyuan Qiao, Wei Shen, Weichao Qiu, Chenxi Liu, Alan Yuille
We argue that estimation of object scales in images is helpful for generating object proposals, especially for supermarket images where object scales are usually within a small range.
1 code implementation • ICCV 2017 • Chenxi Liu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Alan Yuille
In this paper we are interested in the problem of image segmentation given natural language descriptions, i.e., referring expressions.
no code implementations • ICCV 2017 • Yan Wang, Lingxi Xie, Chenxi Liu, Ya Zhang, Wenjun Zhang, Alan Yuille
In this paper, we reveal the importance and benefits of introducing second-order operations into deep neural networks.
no code implementations • 31 May 2016 • Chenxi Liu, Junhua Mao, Fei Sha, Alan Yuille
Attention mechanisms have recently been introduced in deep learning for various tasks in natural language processing and computer vision.
no code implementations • CVPR 2015 • Chenxi Liu, Alexander G. Schwing, Kaustav Kundu, Raquel Urtasun, Sanja Fidler
What sets us apart from past work in layout estimation is the use of floor plans as a source of prior knowledge, as well as localization of each image within a bigger space (apartment).