Search Results for author: Kay Chen Tan

Found 27 papers, 4 papers with code

Balancing Exploration and Exploitation for Solving Large-scale Multiobjective Optimization via Attention Mechanism

no code implementations20 May 2022 Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan

However, these algorithms ignore the decision-variable perspective of the problem, which leaves them unable to search along different dimensions and limits their performance.

Multiobjective Optimization

Benchmark Problems for CEC2021 Competition on Evolutionary Transfer Multiobjective Optimization

1 code implementation15 Oct 2021 Songbai Liu, Qiuzhen Lin, Kay Chen Tan, Qing Li

Evolutionary transfer multiobjective optimization (ETMO) has become a hot research topic in evolutionary computation, based on the observation that knowledge learned and transferred across related optimization tasks can improve the efficiency of solving the others.

Multiobjective Optimization Transfer Learning

Solving Large-Scale Multi-Objective Optimization via Probabilistic Prediction Model

no code implementations16 Jul 2021 Haokai Hong, Kai Ye, Min Jiang, Donglin Cao, Kay Chen Tan

Moreover, because the proposed method adopts an individual-based evolution mechanism, its computational cost is independent of the number of decision variables, avoiding the exponential growth of the search space.

Principled Design of Translation, Scale, and Rotation Invariant Variation Operators for Metaheuristics

no code implementations22 May 2021 Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin

In the past three decades, a large number of metaheuristics have been proposed and shown high performance in solving complex optimization problems.

Translation

Multi-Space Evolutionary Search for Large-Scale Optimization

no code implementations23 Feb 2021 Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong

This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.

Dimensionality Reduction

Manifold Interpolation for Large-Scale Multi-Objective Optimization via Generative Adversarial Networks

no code implementations8 Jan 2021 Zhenzhong Wang, Haokai Hong, Kai Ye, Min Jiang, Kay Chen Tan

However, traditional evolutionary algorithms for solving LSMOPs have some deficiencies in dealing with this structural manifold, resulting in poor diversity, local optima, and inefficient searches.

Multiobjective Optimization

Evolutionary Gait Transfer of Multi-Legged Robots in Complex Terrains

no code implementations24 Dec 2020 Min Jiang, Guokun Chi, Geqiang Pan, Shihui Guo, Kay Chen Tan

Given the high dimensions of control space, this problem is particularly challenging for multi-legged robots walking in complex and unknown environments.

Legged Robots Transfer Learning

Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks

no code implementations2 Jul 2020 Jibin Wu, Cheng-Lin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan

In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.

Image Reconstruction Object Recognition +1

Synaptic Learning with Augmented Spikes

no code implementations11 May 2020 Qiang Yu, Shiming Song, Chenxiang Ma, Linqiang Pan, Kay Chen Tan

Traditional neuron models use analog values for information representation and computation, whereas spiking neuron models employ all-or-nothing spikes.

Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning

no code implementations2 May 2020 Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay Chen Tan

They are also believed to play an essential role in the low-power consumption of biological systems, whose efficiency is attracting increasing attention in the field of neuromorphic computing.

Understanding the Automated Parameter Optimization on Transfer Learning for CPDP: An Empirical Study

1 code implementation8 Feb 2020 Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan

Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) Research on CPDP is far from mature, and it is "not difficult" to find a better alternative by combining existing transfer learning and classification techniques.

Transfer Learning

Deep Spiking Neural Networks for Large Vocabulary Automatic Speech Recognition

1 code implementation19 Nov 2019 Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan

Brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.

Automatic Speech Recognition

Online Bagging for Anytime Transfer Learning

no code implementations20 Oct 2019 Guokun Chi, Min Jiang, Xing Gao, Weizhen Hu, Shihui Guo, Kay Chen Tan

In practical applications, one often faces online learning problems in which the data samples arrive sequentially.

online learning Transfer Learning

Evolutionary Dynamic Multi-objective Optimization Via Regression Transfer Learning

no code implementations19 Oct 2019 Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen Tan

In recent years, transfer learning has proven to be an effective approach for solving DMOPs.

Transfer Learning

Solving Dynamic Multi-objective Optimization Problems Using Incremental Support Vector Machine

no code implementations19 Oct 2019 Weizhen Hu, Min Jiang, Xing Gao, Kay Chen Tan, Yiu-ming Cheung

The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment.

POS

Solving dynamic multi-objective optimization problems via support vector machine

no code implementations19 Oct 2019 Min Jiang, Weizhen Hu, Liming Qiu, Minghui Shi, Kay Chen Tan

The algorithm uses the previously obtained Pareto-optimal solutions (POS) to train an SVM, then applies the trained SVM to classify candidate solutions of the dynamic optimization problem at the next time step, generating an initial population of individuals recognized by the trained SVM.

POS
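The SVM-based initialization idea described above can be sketched in a few lines. This is a minimal, hedged illustration: a perceptron-style linear classifier stands in for the paper's SVM, and the function names, bounds, and padding strategy are assumptions, not the authors' code.

```python
import random

def train_linear_classifier(pos, neg, epochs=50, lr=0.1):
    """Perceptron-style linear classifier used here as a stand-in for
    the paper's SVM: previous Pareto-optimal solutions (POS) form the
    positive class, random solutions the negative class."""
    dim = len(pos[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1) for x in pos] + [(x, -1) for x in neg]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: move the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y

    def predict(x):
        return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
    return predict

def initial_population(prev_pos, n_pop, dim, lower=0.0, upper=1.0,
                       max_tries=2000):
    """Sample random candidates for the next environment and keep the
    ones the trained classifier recognizes as 'POS-like'."""
    def rand_solution():
        return [random.uniform(lower, upper) for _ in range(dim)]

    negatives = [rand_solution() for _ in prev_pos]
    predict = train_linear_classifier(prev_pos, negatives)
    population = []
    for _ in range(max_tries):
        if len(population) == n_pop:
            break
        x = rand_solution()
        if predict(x):
            population.append(x)
    while len(population) < n_pop:  # pad if the classifier is too strict
        population.append(rand_solution())
    return population
```

In the actual method, the resulting population seeds the evolutionary search after an environment change instead of restarting from scratch.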

Evolutionary Multi-Objective Optimization Driven by Generative Adversarial Networks (GANs)

no code implementations11 Oct 2019 Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin

The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables.

Evolutionary Multi-Objective Optimization Driven by Generative Adversarial Networks

no code implementations10 Jul 2019 Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin

Recently, more and more works have proposed to drive evolutionary algorithms with machine learning models. The performance of such model-based evolutionary algorithms depends heavily on the training quality of the adopted models. Since a certain amount of data (i.e., the candidate solutions generated by the algorithm) is required for model training, performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into "real" and "fake" samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of GANs, the proposed algorithm can generate promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and the experimental results demonstrate its effectiveness.
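The "real"/"fake" labeling step in the abstract can be sketched concretely. One plausible criterion (an assumption here, not necessarily the paper's exact rule) is Pareto dominance: nondominated parents are labeled "real" training samples for the GAN, dominated ones "fake".

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def split_real_fake(objs):
    """Split parent solutions (given as objective vectors) into
    'real' (nondominated) and 'fake' (dominated) index lists,
    one plausible way to build GAN training data."""
    real, fake = [], []
    for i, fi in enumerate(objs):
        if any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i):
            fake.append(i)
        else:
            real.append(i)
    return real, fake
```

For example, among the objective vectors (1, 2), (2, 1), (3, 3), (2, 2), the first two are nondominated ("real") and the last two are dominated ("fake").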

A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks

1 code implementation2 Jul 2019 Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan

Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.

Event-based vision Frame

Robust Environmental Sound Recognition with Sparse Key-point Encoding and Efficient Multi-spike Learning

no code implementations4 Feb 2019 Qiang Yu, Yanli Yao, Longbiao Wang, Huajin Tang, Jianwu Dang, Kay Chen Tan

Our framework is a unified system that consistently integrates three major functional parts: sparse encoding, efficient learning, and robust readout.

Decision Making

Which Surrogate Works for Empirical Performance Modelling? A Case Study with Differential Evolution

no code implementations30 Jan 2019 Ke Li, Zilin Xiang, Kay Chen Tan

Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.
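A cheap-to-evaluate surrogate of this kind can be sketched with a simple k-nearest-neighbour regressor over evaluated parameter configurations; this is an illustrative stand-in, not one of the surrogate models actually compared in the paper, and all names here are hypothetical.

```python
def build_surrogate(configs, scores, k=3):
    """k-nearest-neighbour regression surrogate: predict the
    algorithm's empirical performance for a new parameter
    configuration by averaging the scores of the k most similar
    configurations already evaluated."""
    def predict(x):
        by_distance = sorted(
            (sum((a - b) ** 2 for a, b in zip(c, x)), s)
            for c, s in zip(configs, scores))
        nearest = by_distance[:k]
        return sum(s for _, s in nearest) / len(nearest)
    return predict
```

With such a surrogate, promising parameter settings (e.g., F and CR for differential evolution) can be screened without running the expensive algorithm for every candidate configuration.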

A Multi-State Diagnosis and Prognosis Framework with Feature Learning for Tool Condition Monitoring

no code implementations30 Apr 2018 Chong Zhang, Geok Soon Hong, Jun-Hong Zhou, Kay Chen Tan, Haizhou Li, Huan Xu, Jihoon Hong, Hian-Leng Chan

For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation.

Representation Learning

A Cost-Sensitive Deep Belief Network for Imbalanced Classification

no code implementations28 Apr 2018 Chong Zhang, Kay Chen Tan, Haizhou Li, Geok Soon Hong

Adaptive differential evolution is implemented as the optimization algorithm; it automatically updates its parameters without requiring prior domain knowledge.

Classification General Classification +1
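Self-adaptive parameter updating in differential evolution can be sketched with the well-known jDE rule of Brest et al., shown here purely as an illustration of "adaptive DE without prior domain knowledge"; it is not necessarily the exact scheme used in this paper.

```python
import random

def jde_update(F, CR, tau1=0.1, tau2=0.1, F_low=0.1, F_up=0.9):
    """jDE-style self-adaptation: before an individual produces its
    trial vector, its own scale factor F and crossover rate CR are
    regenerated with small probabilities tau1 and tau2, so the control
    parameters evolve alongside the population rather than being
    hand-tuned."""
    if random.random() < tau1:  # occasionally resample F in [F_low, F_low + F_up]
        F = F_low + random.random() * F_up
    if random.random() < tau2:  # occasionally resample CR in [0, 1]
        CR = random.random()
    return F, CR
```

Each individual carries its own (F, CR) pair; successful parameter values survive because the individuals they produced survive selection.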

Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results

no code implementations8 Jun 2017 Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi

In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.

Multiobjective Optimization
