no code implementations • 26 Mar 2022 • Chunnan Wang, Xingyu Chen, Chengyue Wu, Hongzhi Wang
We enable the effective combination of design experience from different sources, creating a rich search space that contains a variety of time series forecasting (TSF) models and supports diverse TSF tasks.
1 code implementation • 24 Jan 2022 • Chunnan Wang, Hongzhi Wang, Xiangyu Shi
Model compression methods reduce model complexity while maintaining acceptable performance, and thus promote the application of deep neural networks in resource-constrained environments.
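A minimal sketch of magnitude-based weight pruning, one common compression technique; this is a generic illustration under assumed settings, not necessarily the method studied in the paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity` fraction is zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Example: prune 90% of a random weight matrix.
w = np.random.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(w_pruned) / w.size:.2f}")
```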
no code implementations • 9 Jan 2022 • Chunnan Wang, Chen Liang, Xiang Chen, Hongzhi Wang
They lack self-evaluation ability, that is, the ability to examine the rationality of their own prediction results, and thus fail to guide users in identifying high-quality results among the candidates.
no code implementations • CVPR 2022 • Chunnan Wang, Xiang Chen, Junzhe Wang, Hongzhi Wang
Although Trajectory Prediction (TP) models have achieved great success in computer vision and robotics, designing their architectures and training schemes still relies on heavy manual work and domain knowledge, which is unfriendly to ordinary users.
no code implementations • 21 Sep 2021 • Guosheng Feng, Chunnan Wang, Hongzhi Wang
Current GNN-oriented NAS methods focus on searching over per-layer aggregation components within shallow, simple architectures, and are limited by the over-smoothing problem.
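Over-smoothing refers to node representations becoming nearly indistinguishable as more propagation steps are stacked. The toy numpy demonstration below (generic, not taken from the paper) shows the spread of node features shrinking as a normalized adjacency matrix is applied repeatedly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                  # symmetrize: undirected graph
np.fill_diagonal(A, 1.0)                # add self-loops
P = np.diag(1.0 / A.sum(axis=1)) @ A    # row-normalized propagation matrix

X = rng.standard_normal((n, 8))         # random node features
for depth in (1, 2, 4, 8, 16, 32):
    H = np.linalg.matrix_power(P, depth) @ X
    spread = np.linalg.norm(H - H.mean(axis=0), axis=1).mean()
    print(f"depth {depth:2d}: mean distance to feature centroid = {spread:.4f}")
```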
no code implementations • 9 Apr 2021 • Chunnan Wang, Bozhou Chen, Geng Li, Hongzhi Wang
Recently, several Neural Architecture Search (NAS) techniques have been proposed for the automatic design of Graph Convolutional Network (GCN) architectures.
no code implementations • 15 Oct 2020 • Chunnan Wang, Kaixin Zhang, Hongzhi Wang, Bozhou Chen
In recent years, many spatial-temporal graph convolutional network (STGCN) models have been proposed to address the spatial-temporal network data forecasting problem.
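A minimal PyTorch sketch of the general STGCN building-block idea, a graph convolution over the spatial dimension followed by a 1-D convolution over time; the shapes and layer choices are illustrative assumptions, not the architecture of any specific model from the paper.

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """Spatial aggregation via a normalized adjacency, then temporal convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.theta = nn.Linear(in_ch, out_ch)                    # spatial transform
        self.tconv = nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, in_ch); adj: (nodes, nodes), normalized.
        h = torch.einsum("ij,btjc->btic", adj, x)                # neighbor aggregation
        h = torch.relu(self.theta(h))                            # (batch, time, nodes, out_ch)
        b, t, n, c = h.shape
        h = h.permute(0, 2, 3, 1).reshape(b * n, c, t)           # fold nodes into batch
        h = torch.relu(self.tconv(h))                            # convolve over time
        return h.reshape(b, n, c, t).permute(0, 3, 1, 2)

block = STBlock(in_ch=4, out_ch=16)
out = block(torch.randn(2, 12, 20, 4), torch.eye(20))
print(out.shape)  # torch.Size([2, 12, 20, 16])
```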
no code implementations • 7 Jul 2020 • Tianyu Mu, Hongzhi Wang, Chunnan Wang, Zheng Liang
In our work, we present Auto-CASH, a pre-trained model based on meta-learning, to solve the CASH (Combined Algorithm Selection and Hyperparameter optimization) problem more efficiently.
1 code implementation • 6 Jul 2020 • Chunnan Wang, Hongzhi Wang, Guosheng Feng, Fei Geng
To reduce search cost, most NAS algorithms fix the outer, network-level structure and search only for a repeatable cell structure.
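The sketch below illustrates this cell-based design in PyTorch: the outer (macro) structure is fixed, and only the cell's internal operation, represented here by a hypothetical `op_name` choice, would be selected by the search algorithm.

```python
import torch
import torch.nn as nn

class Cell(nn.Module):
    """A repeatable cell; in real NAS the operation is chosen by the search."""
    def __init__(self, channels: int, op_name: str):
        super().__init__()
        ops = {
            "conv3x3": nn.Conv2d(channels, channels, 3, padding=1),
            "conv5x5": nn.Conv2d(channels, channels, 5, padding=2),
            "identity": nn.Identity(),
        }
        self.op = ops[op_name]

    def forward(self, x):
        return torch.relu(self.op(x)) + x    # residual connection

def build_network(op_name: str, num_cells: int = 6, channels: int = 32) -> nn.Module:
    """Fixed outer structure: stem -> repeated cells -> pooling -> classifier."""
    layers = [nn.Conv2d(3, channels, 3, padding=1)]
    layers += [Cell(channels, op_name) for _ in range(num_cells)]
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 10)]
    return nn.Sequential(*layers)

net = build_network("conv3x3")
print(net(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```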
no code implementations • 3 Mar 2020 • Bozhou Chen, Kaixin Zhang, Longshen Ou, Chenmin Ba, Hongzhi Wang, Chunnan Wang
However, most machine learning algorithms are sensitive to their hyper-parameters.
no code implementations • 2 Dec 2019 • Chunnan Wang, Hongzhi Wang, Chang Zhou, Hanxiao Chen
Motivated by this, we propose the ExperienceThinking algorithm to quickly find the best possible hyperparameter configuration of a machine learning algorithm within a small number of configuration evaluations.
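For contrast, here is a minimal random-search baseline operating under a small evaluation budget, the setting ExperienceThinking targets; this generic sketch is not the ExperienceThinking algorithm itself, and the toy objective is purely hypothetical.

```python
import random

def random_search(evaluate, space: dict, budget: int = 10, seed: int = 0):
    """Sample `budget` configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = evaluate(cfg)               # e.g. cross-validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {"lr": [1e-4, 1e-3, 1e-2], "depth": [2, 4, 8], "dropout": [0.0, 0.3, 0.5]}
best, score = random_search(lambda c: -abs(c["lr"] - 1e-3) - 0.01 * c["depth"], space)
print(best, round(score, 4))
```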
no code implementations • 24 Oct 2019 • Chunnan Wang, Hongzhi Wang, Tianyu Mu, Jianzhong Li, Hong Gao
In many fields, numerous algorithms with completely different hyperparameters have been developed to address the same type of problem.