no code implementations • 20 May 2022 • Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan
However, these algorithms ignore the significance of tackling this issue from the perspective of decision variables, which deprives them of the ability to search across different dimensions and limits their performance.
1 code implementation • 15 Oct 2021 • Songbai Liu, Qiuzhen Lin, Kay Chen Tan, Qing Li
Evolutionary transfer multiobjective optimization (ETMO) has become a hot research topic in the field of evolutionary computation, based on the fact that knowledge learned and transferred across related optimization tasks can improve the efficiency of others.
no code implementations • 16 Jul 2021 • Haokai Hong, Kai Ye, Min Jiang, Donglin Cao, Kay Chen Tan
At the same time, due to the adoption of an individual-based evolution mechanism, the computational cost of the proposed method is independent of the number of decision variables, thus avoiding the problem of exponential growth of the search space.
no code implementations • 22 May 2021 • Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin
In the past three decades, a large number of metaheuristics have been proposed and shown high performance in solving complex optimization problems.
no code implementations • 23 Feb 2021 • Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong
This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.
no code implementations • Journal - IEEE Transactions on Neural Networks and Learning Systems 2021 • Cuie Yang, Yiu-ming Cheung, Jinliang Ding, Kay Chen Tan
Then, a domain-wise weighted ensemble is introduced to combine the source and target models to select useful knowledge of each domain.
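The excerpt above does not say how the domain weights are formed; one plausible minimal sketch, in which the accuracy-based weighting and every name are purely illustrative assumptions rather than the paper's scheme:

```python
# Hypothetical sketch: combine a source-domain model's prediction and a
# target-domain model's prediction with per-domain weights derived from
# each model's validation accuracy on the target domain.
def weighted_ensemble(source_pred, target_pred, source_acc, target_acc):
    """Blend two class scores in [0, 1], weighting each model by its
    validation accuracy (normalized so the weights sum to 1)."""
    total = source_acc + target_acc
    w_src, w_tgt = source_acc / total, target_acc / total
    return w_src * source_pred + w_tgt * target_pred

# Example: the target model is more reliable, so its vote dominates.
score = weighted_ensemble(source_pred=0.9, target_pred=0.2,
                          source_acc=0.6, target_acc=0.8)
```

In this toy case the blended score is 0.5, pulled toward the more accurate target model's lower prediction.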
no code implementations • 8 Jan 2021 • Zhenzhong Wang, Haokai Hong, Kai Ye, Min Jiang, Kay Chen Tan
However, traditional evolutionary algorithms for solving LSMOPs have some deficiencies in dealing with this structural manifold, resulting in poor diversity, local optima, and inefficient searches.
no code implementations • 24 Dec 2020 • Min Jiang, Guokun Chi, Geqiang Pan, Shihui Guo, Kay Chen Tan
Given the high dimensions of control space, this problem is particularly challenging for multi-legged robots walking in complex and unknown environments.
no code implementations • 25 Aug 2020 • Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G. Yen, Kay Chen Tan
Deep Neural Networks (DNNs) have achieved great success in many applications.
no code implementations • 2 Jul 2020 • Jibin Wu, Cheng-Lin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan
In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.
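ANN-to-SNN conversion generally rests on rate coding: the firing rate of an integrate-and-fire neuron approximates a ReLU activation, so a trained ANN's units can be mapped onto spiking ones. A minimal sketch of that correspondence (a generic illustration of rate coding, not the paper's progressive tandem learning procedure):

```python
def if_neuron_rate(input_current, threshold=1.0, steps=1000):
    """Simulate a non-leaky integrate-and-fire neuron for `steps`
    timesteps and return its average firing rate. For a constant
    input in [0, threshold], the rate approximates ReLU(x)/threshold,
    which is what makes ANN-to-SNN conversion possible."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current          # integrate the input
        if v >= threshold:          # fire and reset by subtraction
            v -= threshold
            spikes += 1
    return spikes / steps
```

With a constant input of 0.3 the neuron fires on roughly 30% of timesteps, matching ReLU(0.3); a negative input never crosses threshold, matching ReLU's zero branch.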
no code implementations • 11 May 2020 • Qiang Yu, Shiming Song, Chenxiang Ma, Linqiang Pan, Kay Chen Tan
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in the spiking ones.
no code implementations • 5 May 2020 • Qiang Yu, Chenxiang Ma, Shiming Song, Gaoyan Zhang, Jianwu Dang, Kay Chen Tan
We examine the performance of our methods based on MNIST, Fashion-MNIST and CIFAR10 datasets.
no code implementations • 2 May 2020 • Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay Chen Tan
They are also believed to play an essential role in the low-power consumption of biological systems, whose efficiency has attracted increasing attention to the field of neuromorphic computing.
1 code implementation • 8 Feb 2020 • Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan
Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) The research on CPDP is far from mature, where it is "not difficult" to find a better alternative by combining existing transfer learning and classification techniques.
1 code implementation • 19 Nov 2019 • Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan
Brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.
no code implementations • 20 Oct 2019 • Guokun Chi, Min Jiang, Xing Gao, Weizhen Hu, Shihui Guo, Kay Chen Tan
In practical applications, it is often necessary to face online learning problems in which the data samples arrive sequentially.
no code implementations • 19 Oct 2019 • Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen Tan
In recent years, transfer learning has been proven to be an effective approach for solving DMOPs.
no code implementations • 19 Oct 2019 • Weizhen Hu, Min Jiang, Xing Gao, Kay Chen Tan, Yiu-ming Cheung
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment.
no code implementations • 19 Oct 2019 • Min Jiang, Weizhen Hu, Liming Qiu, Minghui Shi, Kay Chen Tan
The algorithm uses the previously obtained POS to train an SVM and then applies the trained SVM to classify candidate solutions of the dynamic optimization problem at the next time step, thereby generating an initial population composed of the individuals recognized by the trained SVM.
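The idea of training a classifier on past solutions and using it to filter the next initial population can be sketched as follows. To keep the example self-contained, a simple perceptron stands in for the SVM, and the toy data, labels, and threshold are all illustrative assumptions:

```python
# Train a binary classifier on solutions labeled good (+1) / bad (-1)
# at time t, then keep only candidates it labels as promising when
# seeding the population at time t+1. Perceptron used as an SVM stand-in.
def train_perceptron(X, y, epochs=50, lr=0.1):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # yi in {-1, +1}
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * pred <= 0:                   # misclassified: update
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def filter_population(candidates, w, b):
    """Keep only candidates the classifier labels as promising (+1)."""
    return [x for x in candidates
            if sum(wj * xj for wj, xj in zip(w, x)) + b > 0]

# Toy 1-D problem: 'good' past solutions happen to have x0 > 0.5.
X = [[0.9], [0.8], [0.7], [0.1], [0.2], [0.3]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(X, y)
init_pop = filter_population([[0.95], [0.05], [0.85]], w, b)
```

Here the candidate near 0.05 is rejected and the two candidates above 0.5 survive into the initial population.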
no code implementations • 11 Oct 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables.
no code implementations • 10 Jul 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
Recently, a growing number of works have proposed driving evolutionary algorithms with machine learning models. Usually, the performance of such model-based evolutionary algorithms is highly dependent on the training quality of the adopted models. Since model training usually requires a certain amount of data (i.e., the candidate solutions generated by the algorithm), the performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into \emph{real} and \emph{fake} samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of the GANs, the proposed algorithm is capable of generating promising offspring solutions in high-dimensional decision spaces with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and experimental results on these test problems demonstrate its effectiveness.
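The generative offspring step described above (split parents into better "real" and worse "fake" samples, fit a generative model to the real ones, sample offspring from it) can be sketched in a much-simplified form. A per-dimension Gaussian stands in for the GAN here, and the sphere fitness function is a toy choice; nothing below is the paper's actual code:

```python
import random

random.seed(0)

def mean_std(values):
    """Mean and (floored) standard deviation of a sequence."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, max(var ** 0.5, 1e-6)

def generate_offspring(parents, fitness, n_offspring):
    ranked = sorted(parents, key=fitness)        # minimization
    real = ranked[: len(ranked) // 2]            # better half = "real"
    dims = list(zip(*real))                      # per-dimension values
    stats = [mean_std(d) for d in dims]          # fit generative model
    return [[random.gauss(m, s) for m, s in stats]
            for _ in range(n_offspring)]         # sample offspring

# Toy run on a 2-D sphere function.
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
kids = generate_offspring(pop, fitness=lambda x: sum(v * v for v in x),
                          n_offspring=10)
```

Because the model is fitted only to the better-ranked half, sampled offspring concentrate around the currently promising region of the decision space, regardless of its dimensionality.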
1 code implementation • 2 Jul 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.
no code implementations • 4 Feb 2019 • Qiang Yu, Yanli Yao, Longbiao Wang, Huajin Tang, Jianwu Dang, Kay Chen Tan
Our framework is a unifying system that consistently integrates three major functional parts: sparse encoding, efficient learning, and robust readout.
no code implementations • 30 Jan 2019 • Ke Li, Zilin Xiang, Kay Chen Tan
Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.
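The surrogate idea above can be illustrated with the cheapest possible model: instead of running the expensive algorithm at every parameter value, fit a quadratic through a few measured (parameter, performance) points and query the surrogate's minimum. The quadratic form and the sample values are assumptions for clarity, not the paper's surrogate:

```python
def fit_parabola(points):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def surrogate_argmin(samples):
    """Predict the best parameter value from the surrogate's vertex."""
    a, b, _ = fit_parabola(samples)
    return -b / (2 * a)          # assumes a > 0 (upward parabola)

# Three expensive runs suggest performance is best near parameter 4.
best = surrogate_argmin([(1, 10.0), (4, 2.0), (7, 9.5)])
```

Three algorithm runs buy a cheap closed-form prediction of the whole performance curve; real surrogates (e.g. Gaussian processes) generalize this to many parameters and noisy measurements.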
no code implementations • 30 Apr 2018 • Chong Zhang, Geok Soon Hong, Jun-Hong Zhou, Kay Chen Tan, Haizhou Li, Huan Xu, Jihoon Hong, Hian-Leng Chan
For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation.
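The cost-sensitive idea can be illustrated with a small sketch: give the rare (faulty) class a larger misclassification cost so training is not dominated by the majority class. The inverse-class-frequency weighting shown here is a common heuristic chosen for illustration, not ECS-DBN's evolved cost matrix:

```python
import math

def class_weights(labels):
    """Inverse-frequency weights: rare classes get larger weights."""
    counts = {c: labels.count(c) for c in set(labels)}
    n = len(labels)
    return {c: n / (len(counts) * k) for c, k in counts.items()}

def weighted_log_loss(labels, probs, weights):
    """Binary log loss where each example is scaled by its class weight.
    `probs` are predicted probabilities of class 1."""
    return sum(-weights[y] * math.log(p if y == 1 else 1 - p)
               for y, p in zip(labels, probs)) / len(labels)

# 9:1 imbalanced toy labels: the minority class gets ~9x the weight.
w = class_weights([0] * 9 + [1] * 1)
loss = weighted_log_loss([1, 0], [0.9, 0.2], w)
```

With nine class-0 examples to one class-1 example, the weights come out to 5.0 for the minority class and about 0.56 for the majority, so a missed fault costs roughly nine times as much as a false alarm.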
no code implementations • 28 Apr 2018 • Chong Zhang, Kay Chen Tan, Haizhou Li, Geok Soon Hong
Adaptive differential evolution is used as the optimization algorithm; it automatically updates its parameters without the need for prior domain knowledge.
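One well-known way differential evolution can self-adapt its parameters is the jDE scheme, where each individual carries its own F and CR that are occasionally resampled and survive only when they produce an improvement. The sketch below is a generic jDE-style illustration, not the specific adaptive DE used in the paper:

```python
import random

random.seed(1)

def jde(f, dim=5, pop_size=20, gens=200, lo=-5.0, hi=5.0):
    """Minimize f with self-adaptive DE (per-individual F and CR)."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    F = [0.5] * pop_size             # per-individual mutation factor
    CR = [0.9] * pop_size            # per-individual crossover rate
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Self-adaptation: occasionally resample this member's F/CR.
            Fi = 0.1 + 0.9 * random.random() if random.random() < 0.1 else F[i]
            CRi = random.random() if random.random() < 0.1 else CR[i]
            a, b, c = random.sample(
                [j for j in range(pop_size) if j != i], 3)
            jr = random.randrange(dim)          # forced crossover index
            trial = [pop[a][d] + Fi * (pop[b][d] - pop[c][d])
                     if (random.random() < CRi or d == jr) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                    # greedy selection:
                pop[i], fit[i] = trial, ft      # keep trial and inherit
                F[i], CR[i] = Fi, CRi           # the parameters it used
    return min(fit)

best = jde(lambda x: sum(v * v for v in x))     # 5-D sphere function
```

Successful parameter values propagate with their individuals, so F and CR tune themselves online without any hand-set schedule.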
no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi
In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.