no code implementations • 14 Dec 2024 • Jinrong Zhang, Penghui Wang, Chunxiao Liu, Wei Liu, Dian Jin, Qiong Zhang, Erli Meng, Zhengnan Hu
To achieve this goal, the proposed image prompt paradigm uses just a few image instances as prompts, and we propose a novel framework named MI Grounding for this new paradigm.
no code implementations • 27 Sep 2024 • Xuanjin Jin, Chendong Zeng, Shengfa Zhu, Chunxiao Liu, Panpan Cai
To enhance safety and robustness, the planner further applies importance sampling to refine the driving trajectory conditioned on the planned high-level behavior.
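A minimal sketch of importance-sampling-based trajectory refinement conditioned on a high-level behavior; the Gaussian proposal, the lane-change bias, and the cost function below are illustrative assumptions, not the planner described above.

```python
import numpy as np

def refine_trajectory(nominal, behavior_bias, cost_fn, n_samples=256, noise=0.2, seed=0):
    """Importance-sampling refinement of a trajectory (T x 2 waypoints).

    Candidates are drawn from a Gaussian proposal centred on the nominal plan
    shifted by a behavior-dependent bias (e.g. a lane-change offset); each
    candidate is weighted by exp(-cost) and the weighted mean is returned.
    """
    rng = np.random.default_rng(seed)
    T, D = nominal.shape
    candidates = nominal + behavior_bias + noise * rng.standard_normal((n_samples, T, D))
    costs = np.array([cost_fn(c) for c in candidates])
    weights = np.exp(-(costs - costs.min()))          # stabilised importance weights
    weights /= weights.sum()
    return np.tensordot(weights, candidates, axes=1)  # weighted average trajectory

# Toy usage: start on the lane centre (y = 0) and refine toward a target lane.
nominal = np.stack([np.linspace(0, 20, 20), np.zeros(20)], axis=1)
lane_change = np.array([0.0, 3.5])                    # hypothetical "change lane left" behavior
cost = lambda traj: np.mean((traj[:, 1] - 3.5) ** 2)  # penalise distance to the target lane
refined = refine_trajectory(nominal, lane_change, cost)
print(refined[-1])                                    # final waypoint near the target lane
```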
1 code implementation • 29 Jul 2024 • Ezequiel Perez-Zarate, Oscar Ramos-Soto, Chunxiao Liu, Diego Oliva, Marco Perez-Cisneros
To address this challenge, the Adaptive Light Enhancement Network (ALEN) is introduced, which uses a classification mechanism to determine whether local or global illumination enhancement is required.
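A toy illustration of the classification-then-enhancement idea: a stand-in classifier routes an image to either a global or a block-wise (local) enhancement branch. The gamma-correction branches and the variance-based classifier are placeholders, not ALEN's learned components.

```python
import numpy as np

def global_enhance(img, gamma=0.6):
    """Simple global gamma correction (placeholder for a learned global branch)."""
    return np.clip(img ** gamma, 0.0, 1.0)

def local_enhance(img, block=32, gamma=0.6):
    """Block-wise gamma correction driven by local brightness (placeholder)."""
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y+block, x:x+block]
            if patch.mean() < 0.35:          # only brighten genuinely dark regions
                out[y:y+block, x:x+block] = np.clip(patch ** gamma, 0.0, 1.0)
    return out

def needs_local_enhancement(img, thresh=0.15):
    """Stand-in classifier: high brightness variance across blocks -> local branch."""
    means = [img[y:y+32, x:x+32].mean()
             for y in range(0, img.shape[0], 32)
             for x in range(0, img.shape[1], 32)]
    return np.std(means) > thresh

def enhance(img):
    return local_enhance(img) if needs_local_enhancement(img) else global_enhance(img)

img = np.random.rand(128, 128, 3) * 0.3      # synthetic low-light image in [0, 1]
print(enhance(img).mean() > img.mean())      # enhanced image is brighter
```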
1 code implementation • NeurIPS 2021 • Zhenghao Peng, Quanyi Li, Ka Ming Hui, Chunxiao Liu, Bolei Zhou
Self-Driven Particles (SDP) describe a category of multi-agent systems common in everyday life, such as flocking birds and traffic flows.
Multi-agent Reinforcement Learning · Reinforcement Learning +2
1 code implementation • 13 Oct 2021 • Zhenghao Peng, Quanyi Li, Chunxiao Liu, Bolei Zhou
An offline RL technique is further used to learn from the partial demonstrations generated by the expert.
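A minimal sketch of learning from partial expert demonstrations, using plain behavior cloning as a stand-in for the offline RL step mentioned above; the demonstration buffer, observation/action dimensions, and network sizes are synthetic placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical buffer of (state, expert_action) pairs collected whenever the
# expert took over; behavior cloning is used here as a simple offline stand-in.
states = torch.randn(1024, 16)                       # 16-d observations (synthetic)
actions = torch.tanh(states @ torch.randn(16, 2))    # 2-d expert actions (synthetic)

policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(50):
    idx = torch.randint(0, states.size(0), (256,))   # minibatch of demonstrations
    loss = ((policy(states[idx]) - actions[idx]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final imitation loss: {loss.item():.4f}")
```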
1 code implementation • 2 Mar 2021 • Yuenan Hou, Zheng Ma, Chunxiao Liu, Zhe Wang, Chen Change Loy
Channel pruning is broadly recognized as an effective approach to obtaining a small, compact model by eliminating unimportant channels from a large, cumbersome network.
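A small sketch of channel pruning using the common L1-norm importance criterion; the criterion and keep ratio are illustrative and not necessarily those used in this paper.

```python
import torch
import torch.nn as nn

def prune_conv_by_l1(conv, keep_ratio=0.5):
    """Return a smaller Conv2d keeping the output channels with the largest L1 norms."""
    with torch.no_grad():
        importance = conv.weight.abs().sum(dim=(1, 2, 3))   # one score per output channel
        n_keep = max(1, int(keep_ratio * conv.out_channels))
        keep = torch.topk(importance, n_keep).indices.sort().values
        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned, keep

conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
small, kept = prune_conv_by_l1(conv, keep_ratio=0.25)
print(small.weight.shape)   # torch.Size([16, 3, 3, 3])
```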
2 code implementations • 26 Dec 2020 • Quanyi Li, Zhenghao Peng, Qihang Zhang, Chunxiao Liu, Bolei Zhou
We validate that training with the increasing number of procedurally generated scenes significantly improves the generalization of the agent across scenarios of different traffic densities and road networks.
1 code implementation • 17 Dec 2020 • Xi Zhu, Zhendong Mao, Chunxiao Liu, Peng Zhang, Bin Wang, Yongdong Zhang
Our method compensates for data biases by generating balanced data without introducing external annotations.
no code implementations • 7 Sep 2020 • Hang Yang, Shan Jiang, Xinge Zhu, Mingyang Huang, Zhiqiang Shen, Chunxiao Liu, Jianping Shi
Existing methods for this task usually focus on high-level alignment based on the whole image or the object of interest and, as a result, cannot fully utilize fine-grained channel information.
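One simple way to use channel-level information across domains is to align per-channel feature statistics; the statistic-matching loss below is an illustrative placeholder and may differ from the paper's actual alignment module.

```python
import torch

def channel_alignment_loss(feat_src, feat_tgt):
    """Match per-channel mean and variance between source and target features.

    feat_*: (N, C, H, W) activation maps; one mean/variance pair per channel.
    """
    mu_s = feat_src.mean(dim=(0, 2, 3))
    mu_t = feat_tgt.mean(dim=(0, 2, 3))
    var_s = feat_src.var(dim=(0, 2, 3))
    var_t = feat_tgt.var(dim=(0, 2, 3))
    return ((mu_s - mu_t) ** 2 + (var_s - var_t) ** 2).mean()

src = torch.randn(8, 256, 32, 32)          # source-domain features (synthetic)
tgt = torch.randn(8, 256, 32, 32) + 0.5    # shifted target-domain features
print(channel_alignment_loss(src, tgt).item())
```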
1 code implementation • 2 Sep 2020 • Sirui Xie, Shoukang Hu, Xinjiang Wang, Chunxiao Liu, Jianping Shi, Xunying Liu, Dahua Lin
To this end, we pose questions that future differentiable methods for neural wiring discovery need to confront, hoping to evoke a discussion and rethinking on how much bias has been enforced implicitly in existing NAS methods.
1 code implementation • ECCV 2020 • Liming Jiang, Changxu Zhang, Mingyang Huang, Chunxiao Liu, Jianping Shi, Chen Change Loy
We introduce a simple and versatile framework for image-to-image translation.
1 code implementation • CVPR 2020 • Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, Chen Change Loy
We study the problem of distilling knowledge from a large deep teacher network to a much smaller student network for the task of road marking segmentation.
Ranked #1 on Semantic Segmentation on ApolloScape
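For context, a sketch of the generic pixel-wise teacher-to-student distillation objective that such work builds on; the class count and temperature are arbitrary, and the paper's specific distillation scheme may differ from this baseline.

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation_loss(student_logits, teacher_logits, T=4.0):
    """Standard soft-label distillation applied per pixel (generic baseline)."""
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

student = torch.randn(2, 5, 64, 64, requires_grad=True)   # 5 road-marking classes (assumed)
teacher = torch.randn(2, 5, 64, 64)                       # frozen teacher predictions
loss = pixelwise_distillation_loss(student, teacher)
loss.backward()
print(loss.item())
```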
1 code implementation • CVPR 2020 • Chunxiao Liu, Zhendong Mao, Tianzhu Zhang, Hongtao Xie, Bin Wang, Yongdong Zhang
The GSMN explicitly models objects, relations and attributes as a structured phrase, which not only allows learning the correspondences of objects, relations and attributes separately, but also helps learn fine-grained correspondence for the structured phrase.
Ranked #18 on Cross-Modal Retrieval on Flickr30k
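A heavily simplified sketch of matching a structured phrase's nodes (object, relation, attribute) against image region features; the embeddings and the max-then-average aggregation are illustrative assumptions, not the GSMN itself.

```python
import torch
import torch.nn.functional as F

def structured_phrase_score(node_emb, region_emb):
    """Match each phrase node to image regions and aggregate.

    node_emb:   (K, D) embeddings of the structured phrase's nodes
    region_emb: (R, D) embeddings of detected image regions
    Each node is matched to its best region and the scores are averaged.
    """
    node_emb = F.normalize(node_emb, dim=-1)
    region_emb = F.normalize(region_emb, dim=-1)
    sim = node_emb @ region_emb.t()          # (K, R) cosine similarities
    return sim.max(dim=1).values.mean()      # best region per node, then average

nodes = torch.randn(3, 128)     # e.g. "dog" (object), "running on" (relation), "brown" (attribute)
regions = torch.randn(36, 128)  # 36 detected region features (synthetic)
print(structured_phrase_score(nodes, regions).item())
```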
1 code implementation • CVPR 2020 • Shoukang Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, Dahua Lin
We argue that, given a computer vision task for which a NAS method is expected, this definition can reduce the vaguely defined NAS evaluation to i) accuracy on this task and ii) the total computation consumed to finally obtain a model with satisfactory accuracy.
Ranked #18 on Neural Architecture Search on NAS-Bench-201, ImageNet-16-120 (Accuracy (Val) metric)
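A tiny sketch of the two-axis evaluation described above: report the accuracy reached and the total computation consumed until a model with satisfactory accuracy is found. The search log and budget numbers below are made up for illustration.

```python
# Hypothetical search log: (cumulative GPU-hours, best validation accuracy so far).
search_log = [(2, 0.71), (5, 0.84), (9, 0.90), (15, 0.93), (24, 0.931)]

def cost_to_reach(log, target_acc):
    """Total computation consumed until a model with satisfactory accuracy appears."""
    for gpu_hours, acc in log:
        if acc >= target_acc:
            return gpu_hours
    return None   # target never reached within the logged budget

print(cost_to_reach(search_log, target_acc=0.90))   # -> 9 GPU-hours
```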
no code implementations • 30 Nov 2019 • Junning Huang, Sirui Xie, Jiankai Sun, Qiurui Ma, Chunxiao Liu, Jianping Shi, Dahua Lin, Bolei Zhou
In this work, we propose a hybrid framework to learn neural decisions in the classical modular pipeline through end-to-end imitation learning.
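A minimal sketch of learning a neural decision module inside an otherwise classical pipeline via imitation of expert decisions; the feature dimension, manoeuvre set, and training data are synthetic placeholders, not the authors' pipeline.

```python
import torch
import torch.nn as nn

# A tiny "decision" network dropped into an otherwise classical pipeline:
# hand-coded perception features in, a discrete manoeuvre out.
decision_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))  # 3 manoeuvres
opt = torch.optim.Adam(decision_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(512, 10)                 # features from the perception module (synthetic)
expert_choice = torch.randint(0, 3, (512,))     # expert's manoeuvre labels (synthetic)

for step in range(200):                         # end-to-end imitation of the expert decisions
    logits = decision_net(features)
    loss = loss_fn(logits, expert_choice)
    opt.zero_grad()
    loss.backward()
    opt.step()

# A classical controller (not shown) would then execute the predicted manoeuvre.
print(f"imitation loss after training: {loss.item():.3f}")
```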
2 code implementations • ICCV 2019 • Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy
Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations.
Ranked #5 on Lane Detection on BDD100K val
2 code implementations • ICLR 2019 • Sirui Xie, Hehui Zheng, Chunxiao Liu, Liang Lin
In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, and the discovered architecture is also transferable to ImageNet.
Ranked #27 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
no code implementations • ICLR 2019 • Sirui Xie, Junning Huang, Lanxin Lei, Chunxiao Liu, Zheng Ma, Wei zhang, Liang Lin
Reinforcement learning agents need exploratory behaviors to escape from local optima.
2 code implementations • 7 Nov 2018 • Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy
In this paper, we considerably improve the accuracy and robustness of predictions through feature mimicking from heterogeneous auxiliary networks, a new and effective training method that provides much richer contextual signals beyond the steering direction.
Ranked #1 on Steering Control on BDD100K val
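A rough sketch of the feature-mimicking idea: the steering network's intermediate features are pulled toward those of a frozen auxiliary network while it also fits the steering targets. The architectures, the auxiliary network, and the loss weight are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Small steering predictor exposing an intermediate feature for mimicking."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
        self.head = nn.Linear(256, 1)
    def forward(self, x):
        feat = self.backbone(x)
        return self.head(feat), feat

# Frozen auxiliary network standing in for, e.g., a pretrained perception model.
aux_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU()).eval()
for p in aux_net.parameters():
    p.requires_grad_(False)

net = SteeringNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

obs = torch.randn(256, 128)          # synthetic image embeddings
steer_gt = torch.randn(256, 1)       # synthetic ground-truth steering angles

for step in range(100):
    pred, feat = net(obs)
    mimic = ((feat - aux_net(obs)) ** 2).mean()      # feature-mimicking term
    loss = ((pred - steer_gt) ** 2).mean() + 0.1 * mimic
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"steering + mimicking loss after training: {loss.item():.3f}")
```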