1 code implementation • 24 Mar 2023 • Teng Wang, Yixiao Ge, Feng Zheng, Ran Cheng, Ying Shan, XiaoHu Qie, Ping Luo
FLM successfully decouples the prediction rate from the corruption rate while allowing the corruption spans to be customized for each token to be predicted.
no code implementations • 11 Mar 2023 • Teng Wang, Jinrui Zhang, Feng Zheng, Wenhao Jiang, Ran Cheng, Ping Luo
TEG learns to adaptively ground the possible event proposals given a set of sentences by estimating the cross-modal distance in a joint semantic space.
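A minimal sketch of the cross-modal grounding idea described above, assuming (hypothetically) that event-proposal features and sentence embeddings already live in a joint semantic space; the cosine-distance matrix and nearest-proposal assignment here are purely illustrative and are not the paper's actual matching procedure.

```python
import numpy as np

def cross_modal_distances(proposal_feats, sentence_feats):
    """Pairwise cosine distances between event proposals and sentences
    embedded in a shared (joint) semantic space."""
    p = proposal_feats / np.linalg.norm(proposal_feats, axis=1, keepdims=True)
    s = sentence_feats / np.linalg.norm(sentence_feats, axis=1, keepdims=True)
    return 1.0 - p @ s.T  # shape: (num_proposals, num_sentences)

# Toy example: 4 candidate event proposals, 2 query sentences, 8-d joint space.
rng = np.random.default_rng(0)
proposals = rng.normal(size=(4, 8))
sentences = rng.normal(size=(2, 8))

dist = cross_modal_distances(proposals, sentences)
grounded = dist.argmin(axis=0)  # nearest proposal for each sentence
print(grounded)
```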
no code implementations • 7 Mar 2023 • Hui Bai, Ran Cheng, Yaochu Jin
This article presents a comprehensive survey of state-of-the-art methods for integrating EC into RL, referred to as evolutionary reinforcement learning (EvoRL).
1 code implementation • 29 Jan 2023 • Beichen Huang, Ran Cheng, Yaochu Jin, Kay Chen Tan
Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices.
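To give a flavor of why EC maps well onto accelerators, the sketch below runs a simple (mu, lambda)-style evolution strategy on the sphere function with the whole population expressed as batched array operations; NumPy is used here only as a stand-in for a GPU tensor library, and this is not the framework's actual API.

```python
import numpy as np  # stand-in for a GPU tensor library (e.g., JAX or PyTorch arrays)

def sphere(x):
    # Fitness of the whole population evaluated in one batched tensor op.
    return np.sum(x ** 2, axis=1)

pop_size, dim, sigma, generations = 256, 50, 0.1, 100
rng = np.random.default_rng(42)
population = rng.normal(size=(pop_size, dim))

for _ in range(generations):
    # Mutation, evaluation, and selection are pure array ops, so they map
    # naturally onto one or more GPU devices in a tensor framework.
    offspring = population + sigma * rng.normal(size=population.shape)
    fitness = sphere(offspring)
    elite = np.argsort(fitness)[: pop_size // 2]         # truncation selection
    population = np.repeat(offspring[elite], 2, axis=0)  # refill the population

print(sphere(population).min())
```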
no code implementations • 21 Sep 2022 • Hui Bai, Ruimin Shen, Yue Lin, Botian Xu, Ran Cheng
In comparison with the state-of-the-art RLlib, we empirically demonstrate the unique advantages of Lamarckian on benchmark tests with up to 6000 CPU cores: i) both the sampling efficiency and training speed are doubled when running PPO on the Google Football game; ii) the training speed is 13 times faster when running PBT+PPO on the Pong game.
no code implementations • 14 Aug 2022 • Zhichao Lu, Ran Cheng, Shihua Huang, Haoming Zhang, Changxiao Qiu, Fan Yang
The main challenges of applying NAS to semantic segmentation arise from two aspects: (i) high-resolution images to be processed; (ii) the additional requirement of real-time inference speed (i.e., real-time semantic segmentation) for applications such as autonomous driving.
no code implementations • 8 Aug 2022 • Zhichao Lu, Ran Cheng, Yaochu Jin, Kay Chen Tan, Kalyanmoy Deb
From an optimization point of view, the NAS tasks involving multiple design criteria are intrinsically multiobjective optimization problems; hence, it is reasonable to adopt evolutionary multiobjective optimization (EMO) algorithms for tackling them.
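As a small illustration of why EMO fits NAS, the sketch below filters hypothetical candidate architectures, each scored by (top-1 error, latency), down to their Pareto-nondominated set; the architecture names and objective values are made up for the example.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o["obj"], c["obj"]) for o in candidates if o is not c)]

# Hypothetical architectures scored by (top-1 error %, latency ms), both minimized.
archs = [
    {"name": "arch-A", "obj": (24.0, 15.0)},
    {"name": "arch-B", "obj": (22.5, 30.0)},
    {"name": "arch-C", "obj": (26.0, 14.0)},
    {"name": "arch-D", "obj": (25.0, 40.0)},  # dominated by arch-A and arch-B
]
print([a["name"] for a in nondominated(archs)])
```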
no code implementations • 12 Jul 2022 • Jia Liu, Ran Cheng, Yaochu Jin
First, we formulate the NAS problem for enhancing adversarial robustness of deep neural networks into a multiobjective optimization problem.
1 code implementation • 3 Jul 2022 • Jinrui Zhang, Teng Wang, Feng Zheng, Ran Cheng, Ping Luo
Previous methods process the information of only a single boundary at a time, failing to exploit video context information.
1 code implementation • 17 Jun 2022 • Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo
Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then subjected to elaborate data cleaning.
1 code implementation • 25 May 2022 • Jiaxin Wei, Lige Liu, Ran Cheng, Wenqing Jiang, Minghao Xu, Xinyu Jiang, Tao Sun, Soren Schwertfeger, Laurent Kneip
Recent years have witnessed the surge of learned representations that directly build upon point clouds.
no code implementations • 12 May 2022 • Ran Cheng, Xinyu Jiang, Yuan Chen, Lige Liu, Tao Sun
In the GNN query module, the pose graph is transformed to form an embedding-aggregated reference graph for camera relocalization.
no code implementations • 19 Apr 2022 • Yueming Li, Ying Jiang, Lu Lan, Xiaowei Ge, Ran Cheng, Yuewei Zhan, Guo Chen, Linli Shi, Runyu Wang, Nan Zheng, Chen Yang, Ji-Xin Cheng
Here, we report optically-generated focused ultrasound (OFUS) for non-invasive brain stimulation with ultrahigh precision.
no code implementations • 13 Apr 2022 • Teng Wang, Zhu Liu, Feng Zheng, Zhichao Lu, Ran Cheng, Ping Luo
This report describes the details of our approach for the event dense-captioning task in ActivityNet Challenge 2021.
no code implementations • 20 Oct 2021 • Ran Cheng, Chao Chen, Longfei Xu, Shen Li, Lei Wang, Hengbin Cui, Kaikui Liu, Xiaolong Li
For user representation, we utilize a series of historical navigation records to extract user preferences.
no code implementations • 8 Oct 2021 • Shengran Hu, Ran Cheng, Cheng He, Zhichao Lu, Jing Wang, Miao Zhang
For the goal of automated design of high-performance deep convolutional neural networks (CNNs), Neural Architecture Search (NAS) methodology is becoming increasingly important for both academia and industry. Due to the costly stochastic gradient descent (SGD) training of CNNs required for performance evaluation, most existing NAS methods are computationally expensive for real-world deployment.
1 code implementation • EMNLP 2021 • Yiming Chen, Yan Zhang, Chen Zhang, Grandee Lee, Ran Cheng, Haizhou Li
In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM.
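A generic self-training loop in the spirit described above, shown with a scikit-learn classifier on synthetic data rather than a prompt-based language model; the confidence threshold, data, and number of rounds are illustrative only, not SFLM's actual procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a few-shot setup: a tiny labeled set plus an unlabeled pool.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:20], y[:20], X[20:]

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

for _ in range(3):  # a few self-training rounds
    proba = model.predict_proba(X_unlab)
    conf, pseudo = proba.max(axis=1), proba.argmax(axis=1)
    keep = conf > 0.9  # only trust high-confidence pseudo-labels
    if not keep.any():
        break
    # Retrain on labeled data plus confident pseudo-labeled examples.
    X_train = np.vstack([X_lab, X_unlab[keep]])
    y_train = np.concatenate([y_lab, pseudo[keep]])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(model.score(X, y))
```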
no code implementations • ICCV 2021 • Ryan Razani, Ran Cheng, Enxu Li, Ehsan Taghavi, Yuan Ren, Liu Bingbing
GP-S3Net is a proposal-free approach in which no object proposals are needed to identify the objects, in contrast to conventional two-stage panoptic systems, where a detection network is incorporated to capture instance information.
1 code implementation • ICCV 2021 • Teng Wang, Ruimao Zhang, Zhichao Lu, Feng Zheng, Ran Cheng, Ping Luo
Dense video captioning aims to generate multiple associated captions with their temporal locations from the video.
Ranked #2 on Dense Video Captioning on YouCook2
3 code implementations • ICCV 2021 • Shihua Huang, Zhichao Lu, Ran Cheng, Cheng He
Recent advancements in deep neural networks have enabled remarkable leaps forward in dense image prediction.
Ranked #20 on Semantic Segmentation on ADE20K val
no code implementations • 28 Jun 2021 • Nan Zheng, Vincent Fitzpatrick, Ran Cheng, Linli Shi, David L. Kaplan, Chen Yang
We also confirmed that photoacoustic neural stimulation promoted neurite outgrowth by impacting the brain-derived neurotrophic factor (BDNF) pathway.
no code implementations • 16 Mar 2021 • Ryan Razani, Ran Cheng, Ehsan Taghavi, Liu Bingbing
Autonomous driving vehicles and robotic systems rely on accurate perception of their surroundings.
no code implementations • 15 Mar 2021 • Ran Cheng, Ryan Razani, Yuan Ren, Liu Bingbing
In the literature, several approaches have been introduced for the LiDAR semantic segmentation task, such as projection-based (range-view or bird's-eye-view) and voxel-based approaches.
no code implementations • 11 Mar 2021 • Hantao Zhang, Ran Cheng
The magnon spin Nernst effect was recently proposed as an intrinsic effect in antiferromagnets, where spin diffusion and boundary spin transmission have so far been ignored.
Mesoscale and Nanoscale Physics • Materials Science • Other Condensed Matter
no code implementations • 8 Feb 2021 • Ran Cheng, Ryan Razani, Ehsan Taghavi, Enxu Li, Bingbing Liu
Autonomous robotic systems and self-driving cars rely on accurate perception of their surroundings, as the safety of passengers and pedestrians is the top priority.
Ranked #1 on 3D Semantic Segmentation on nuScenes
no code implementations • 16 Dec 2020 • Ran Cheng, Christopher Agia, Yuan Ren, Xinhai Li, Liu Bingbing
With the increasing reliance of self-driving and similar robotic systems on robust 3D vision, the processing of LiDAR scans with deep convolutional neural networks has become a trend in academia and industry alike.
Ranked #1 on 3D Semantic Scene Completion on SemanticKITTI
no code implementations • 27 Nov 2020 • Shengran Hu, Ran Cheng, Cheng He, Zhichao Lu
In recent years, neural architecture search (NAS) has attracted increasing attention from both academia and industry.
2 code implementations • 14 Sep 2020 • Hao Tan, Ran Cheng, Shihua Huang, Cheng He, Changxiao Qiu, Fan Yang, Ping Luo
Despite the remarkable successes of Convolutional Neural Networks (CNNs) in computer vision, it is time-consuming and error-prone to manually design a CNN.
no code implementations • 4 Aug 2020 • Shihua Huang, Cheng He, Ran Cheng
Existing image-to-image (I2I) translation methods adopt a separate domain-specific content encoder for each domain, where each domain-specific content encoder is trained only with images from that domain.
no code implementations • 10 Mar 2020 • Yan Xiao, Yaochu Jin, Ran Cheng, Kuangrong Hao
With the explosive growth of digital text, it is challenging to efficiently obtain specific knowledge from massive amounts of unstructured text.
1 code implementation • 7 Mar 2020 • Jinjin Xu, Wenli Du, Ran Cheng, Wangli He, Yaochu Jin
Learning over massive data stored in different locations is essential in many real-world applications.
no code implementations • 7 Mar 2020 • Haoyu Zhang, Yaochu Jin, Ran Cheng, Kuangrong Hao
Recently, evolutionary neural architecture search (ENAS) has received increasing attention due to the attractive global optimization capability of evolutionary algorithms.
no code implementations • 11 Oct 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables.
no code implementations • 11 Sep 2019 • Hang Yu, Aishan Liu, Xianglong Liu, Gengchao Li, Ping Luo, Ran Cheng, Jichen Yang, Chongzhi Zhang
In other words, DNNs trained with PDA achieve greater robustness against both adversarial attacks and common corruptions than recent state-of-the-art methods.
no code implementations • 10 Jul 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
Recently, more and more works have proposed to drive evolutionary algorithms using machine learning models. The performance of such model-based evolutionary algorithms is highly dependent on the training quality of the adopted models. Since model training usually requires a certain amount of data (i.e., the candidate solutions generated by the algorithms), performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality.

To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into real and fake samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of the GANs, the proposed algorithm is capable of generating promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and experimental results on these test problems demonstrate its effectiveness.
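A highly simplified skeleton of the classify-train-sample loop described above, not the paper's actual algorithm: a Gaussian model fitted to the nondominated ("real") parents stands in for the GAN generator, the discriminator/fake branch is omitted, environmental selection is reduced to a crude domination-count ranking, and the two-objective ZDT1 test problem is used only for illustration.

```python
import numpy as np

def zdt1(X):
    """Two-objective ZDT1 test problem (both objectives minimized)."""
    f1 = X[:, 0]
    g = 1.0 + 9.0 * X[:, 1:].mean(axis=1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.column_stack([f1, f2])

def domination_counts(F):
    """For each solution, count how many other solutions dominate it."""
    worse_eq = (F[:, None, :] >= F[None, :, :]).all(axis=2)
    worse = (F[:, None, :] > F[None, :, :]).any(axis=2)
    return (worse_eq & worse).sum(axis=1)

rng = np.random.default_rng(0)
n, dim, gens = 100, 30, 200
pop = rng.random((n, dim))

for _ in range(gens):
    counts = domination_counts(zdt1(pop))
    real = pop[counts == 0]                 # "real" samples: nondominated parents
    # --- GAN stand-in: fit a simple generative model to the real samples ---
    mean, std = real.mean(axis=0), real.std(axis=0) + 1e-6
    offspring = np.clip(rng.normal(mean, std, size=(n, dim)), 0.0, 1.0)
    # Environmental selection over the combined parent + offspring population.
    merged = np.vstack([pop, offspring])
    order = np.argsort(domination_counts(zdt1(merged)))
    pop = merged[order[:n]]

print(zdt1(pop).min(axis=0))
```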
no code implementations • 7 Jun 2018 • Liangli Zhen, Miqing Li, Ran Cheng, Dezhong Peng, Xin Yao
The redundancy of some objectives can lead to a multiobjective problem having a degenerate Pareto front, i.e., the dimension of the Pareto front of the $m$-objective problem is less than $(m-1)$.
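To make the degeneracy concrete, here is a tiny illustrative example (not taken from the paper): with three objectives where f3 simply duplicates f1, every point on the f1/f2 trade-off is Pareto-optimal, yet the front is a one-dimensional curve in objective space rather than the (m-1) = 2-dimensional surface one would normally expect.

```python
import numpy as np

# One decision variable x in [0, 1]; f3 duplicates f1, so it is redundant.
x = np.linspace(0.0, 1.0, 200)
f1, f2, f3 = x, 1.0 - x, x

# Every point (t, 1-t, t) is Pareto-optimal (f1 and f2 trade off exactly),
# yet the front lies on a 1-D line in objective space: dimension 1 < m - 1 = 2.
front = np.column_stack([f1, f2, f3])
print(front.shape, np.linalg.matrix_rank(front - front.mean(axis=0)))  # rank 1
```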
no code implementations • 4 Jan 2017 • Ye Tian, Ran Cheng, Xingyi Zhang, Yaochu Jin
To address these issues, in this paper we develop a MATLAB platform for evolutionary multi-objective optimization, called PlatEMO, which includes more than 50 multi-objective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators.