no code implementations • 22 Nov 2023 • Xingyu Wu, Yan Zhong, Jibin Wu, Kay Chen Tan
After extracting embedding vectors for both algorithms and problems, the most suitable algorithm is determined by computing the matching degree between them.
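As a rough sketch of this matching step, assuming cosine similarity as the matching degree (the paper's exact metric and embedding models are not specified here):

```python
import numpy as np

def select_algorithm(problem_embedding: np.ndarray,
                     algo_embeddings: dict[str, np.ndarray]) -> str:
    """Pick the algorithm whose embedding best matches the problem.

    Matching degree is illustrated with cosine similarity; the metric
    actually used by the paper may differ.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(problem_embedding, emb)
              for name, emb in algo_embeddings.items()}
    return max(scores, key=scores.get)  # highest matching degree wins
```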
no code implementations • 3 Nov 2023 • Xinmeng Xu, Jibin Wu, XiaoYong Wei, Yan Liu, Richard So, Yuhong Yang, Weiping Tu, Kay Chen Tan
To overcome this limitation, we introduce a strategy to map monaural speech into a fixed simulation space for better differentiation between target speech and noise.
no code implementations • 23 Oct 2023 • Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li
With TTFS coding, we can achieve computational savings of up to orders of magnitude over ANNs and other rate-based SNNs.
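For intuition on why TTFS (time-to-first-spike) coding is economical: each neuron fires at most once, with stronger inputs firing earlier, so computation scales with spike count rather than firing rate. A minimal encoding sketch, where the window length and linear mapping are assumptions rather than the paper's exact scheme:

```python
import numpy as np

def ttfs_encode(x: np.ndarray, t_max: int = 100) -> np.ndarray:
    """Time-to-first-spike encoding: map intensities in [0, 1] to spike
    times in [0, t_max], with larger values spiking earlier. Each input
    contributes at most one spike, which is the source of the
    computational savings over rate coding."""
    x = np.clip(x, 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

# A bright pixel (1.0) spikes at t=0; a dim one (0.1) at t=90.
print(ttfs_encode(np.array([1.0, 0.5, 0.1])))
```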
no code implementations • 19 Oct 2023 • Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li
Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches in dynamic environments remain largely unexplored.
1 code implementation • 11 Oct 2023 • Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan
However, the effectiveness of these models is hindered in real-world scenarios by the unreliability, or even absence, of pre-registered cues.
no code implementations • 29 Aug 2023 • Xinyi Chen, Jibin Wu, Huajin Tang, Qinyuan Ren, Kay Chen Tan
The human brain exhibits remarkable abilities in integrating temporally distant sensory inputs for decision-making.
no code implementations • 25 Aug 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
1 code implementation • 22 Jun 2023 • Junjia Liu, Zhihao Li, Wanyu Lin, Sylvain Calinon, Kay Chen Tan, Fei Chen
Soft object manipulation tasks in domestic scenes pose a significant challenge for existing robotic skill learning techniques due to their complex dynamics and variable shape characteristics.
no code implementations • 26 May 2023 • Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan
Biological neural systems have evolved to adapt to their ecological environment for efficiency and effectiveness, wherein neurons with heterogeneous structures and rich dynamics are optimized to accomplish complex cognitive tasks.
2 code implementations • 17 Apr 2023 • Xiaoming Xue, Cuie Yang, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Lastly, a benchmark suite with 12 STO problems featuring a variety of customized similarity relationships is developed using the proposed generator.
no code implementations • 12 Apr 2023 • Wei-Neng Chen, Feng-Feng Wei, Tian-Fang Zhao, Kay Chen Tan, Jun Zhang
Based on this taxonomy, existing studies on DEC are reviewed in terms of purpose, parallel structure of the algorithm, parallel model for implementation, and the implementation environment.
no code implementations • 8 Apr 2023 • Haokai Hong, Min Jiang, Jonathan M. Garibaldi, Qiuzhen Lin, Kay Chen Tan
The idea of the proposed method is to transform such very large-scale problems into a problem that can be tackled by a recommender system.
1 code implementation • 29 Jan 2023 • Beichen Huang, Ran Cheng, Zhuozhao Li, Yaochu Jin, Kay Chen Tan
At the core of EvoX lies a functional programming model that simplifies the development of parallelized EC algorithms, seamlessly integrated with a high-performance computation model designed specifically for distributed GPU-accelerated execution.
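The flavor of such a functional programming model can be sketched as a state-to-state transition, which is what makes the loop easy to parallelize and ship to accelerators. This is a generic single-machine illustration in NumPy, not EvoX's actual API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass(frozen=True)
class State:
    population: np.ndarray  # shape (n, d)
    fitness: np.ndarray     # shape (n,)
    rng: np.random.Generator

def step(state: State, objective=lambda x: (x ** 2).sum(axis=1)) -> State:
    """One generation expressed as a state transition: mutate, evaluate,
    select. All algorithm state lives in one value, so steps compose and
    can be mapped across devices."""
    noise = state.rng.normal(scale=0.1, size=state.population.shape)
    children = state.population + noise
    child_fit = objective(children)
    keep = child_fit < state.fitness           # minimization
    pop = np.where(keep[:, None], children, state.population)
    fit = np.where(keep, child_fit, state.fitness)
    return State(pop, fit, state.rng)
```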
no code implementations • 28 Dec 2022 • Yuwei Ou, Xiangning Xie, Shangce Gao, Yanan Sun, Kay Chen Tan, Jiancheng Lv
Deep neural networks (DNNs) have been found to be vulnerable to adversarial attacks, and various methods have been proposed to defend against them.
1 code implementation • 17 Dec 2022 • Lingjie Li, Manlin Xuan, Qiuzhen Lin, Min Jiang, Zhong Ming, Kay Chen Tan
Thus, this paper devises a new EMT algorithm for FS in high-dimensional classification, which first adopts different filtering methods to produce multiple tasks and then modifies a competitive swarm optimizer to efficiently solve these related tasks via knowledge transfer.
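The core update of a competitive swarm optimizer (CSO) is compact enough to sketch: particles compete in random pairs, winners pass through unchanged, and losers learn from their winners and the swarm mean. A minimal single-task version (the paper's task-generation and knowledge-transfer machinery is omitted):

```python
import numpy as np

def cso_step(X, V, fitness_fn, phi=0.1, rng=None):
    """One CSO generation: random pairwise competitions; each loser is
    pulled toward its winner and the swarm mean, each winner survives
    unchanged (minimization assumed)."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    idx = rng.permutation(n)
    mean = X.mean(axis=0)
    f = fitness_fn(X)
    for a, b in zip(idx[: n // 2], idx[n // 2 :]):
        w, l = (a, b) if f[a] < f[b] else (b, a)
        r1, r2, r3 = rng.random((3, d))
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean - X[l])
        X[l] = X[l] + V[l]
    return X, V
```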
1 code implementation • 8 Aug 2022 • Zhichao Lu, Ran Cheng, Yaochu Jin, Kay Chen Tan, Kalyanmoy Deb
From an optimization point of view, the NAS tasks involving multiple design criteria are intrinsically multiobjective optimization problems; hence, it is reasonable to adopt evolutionary multiobjective optimization (EMO) algorithms for tackling them.
no code implementations • 3 Jul 2022 • Xiangning Xie, Yuqiao Liu, Yanan Sun, Mengjie Zhang, Kay Chen Tan
Performance predictors can greatly alleviate the prohibitive cost of NAS by directly predicting the performance of DNNs.
no code implementations • 23 Jun 2022 • Songbai Liu, Qiuzhen Lin, Jianqiang Li, Kay Chen Tan
This paper begins with a general taxonomy of scaling-up MOPs and learnable MOEAs, followed by an analysis of the challenges that these MOPs pose to traditional MOEAs.
no code implementations • 20 May 2022 • Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan
However, these algorithms ignore the significance of tackling this issue from the perspective of decision variables, which deprives them of the ability to search along different dimensions and limits their performance.
1 code implementation • CVPR 2022 • Weibo Shu, Jia Wan, Kay Chen Tan, Sam Kwong, Antoni B. Chan
By transforming the density map into the frequency domain and using the nice properties of the characteristic function, we propose a novel method that is simple, effective, and efficient.
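For a normalized density map, the characteristic function is its Fourier transform, so the frequency-domain comparison can be sketched with an FFT. The L1 loss below is a generic stand-in, not necessarily the paper's formulation:

```python
import numpy as np

def char_fn(density: np.ndarray) -> np.ndarray:
    """Treat a non-negative density map as an unnormalized distribution
    and return samples of its characteristic function via the 2-D FFT."""
    p = density / max(density.sum(), 1e-12)  # normalize to a distribution
    return np.fft.fft2(p)

def freq_loss(pred: np.ndarray, gt: np.ndarray) -> float:
    """Generic L1 distance between the two characteristic functions."""
    return float(np.abs(char_fn(pred) - char_fn(gt)).mean())
```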
1 code implementation • 15 Oct 2021 • Songbai Liu, Qiuzhen Lin, Kay Chen Tan, Qing Li
Evolutionary transfer multiobjective optimization (ETMO) has become a hot research topic in the field of evolutionary computation, based on the fact that knowledge learned from related optimization tasks can be transferred to improve the efficiency of solving others.
no code implementations • 16 Jul 2021 • Haokai Hong, Kai Ye, Min Jiang, Donglin Cao, Kay Chen Tan
At the same time, due to the adoption of an individual-based evolution mechanism, the computational cost of the proposed method is independent of the number of decision variables, thus avoiding the problem of exponential growth of the search space.
no code implementations • 22 May 2021 • Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin
In the past three decades, a large number of metaheuristics have been proposed and shown high performance in solving complex optimization problems.
no code implementations • 23 Feb 2021 • Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong
This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.
no code implementations • Journal - IEEE Transactions on Neural Networks and Learning Systems 2021 • Cuie Yang, Yiu-ming Cheung, Jinliang Ding, Kay Chen Tan
Then, a domain-wise weighted ensemble is introduced to combine the source and target models to select useful knowledge of each domain.
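A domain-wise weighted ensemble can be sketched as a convex combination of per-domain model outputs; the weighting rule below is a placeholder, since the paper derives its own domain weights:

```python
import numpy as np

def ensemble_predict(models, weights, X):
    """Combine source- and target-domain models with domain-wise weights.

    models  : fitted objects exposing predict_proba(X)
    weights : non-negative per-domain weights (normalized internally)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    probs = sum(wi * m.predict_proba(X) for wi, m in zip(w, models))
    return probs.argmax(axis=1)
```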
no code implementations • 8 Jan 2021 • Zhenzhong Wang, Haokai Hong, Kai Ye, Min Jiang, Kay Chen Tan
However, traditional evolutionary algorithms for solving LSMOPs have some deficiencies in dealing with this structural manifold, resulting in poor diversity, local optima, and inefficient searches.
no code implementations • 24 Dec 2020 • Min Jiang, Guokun Chi, Geqiang Pan, Shihui Guo, Kay Chen Tan
Given the high dimensions of control space, this problem is particularly challenging for multi-legged robots walking in complex and unknown environments.
no code implementations • 25 Aug 2020 • Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G. Yen, Kay Chen Tan
Deep Neural Networks (DNNs) have achieved great success in many applications.
no code implementations • 2 Jul 2020 • Jibin Wu, Chenglin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan
In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.
no code implementations • 11 May 2020 • Qiang Yu, Shiming Song, Chenxiang Ma, Linqiang Pan, Kay Chen Tan
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in the spiking ones.
no code implementations • 5 May 2020 • Qiang Yu, Chenxiang Ma, Shiming Song, Gaoyan Zhang, Jianwu Dang, Kay Chen Tan
We examine the performance of our methods on the MNIST, Fashion-MNIST, and CIFAR10 datasets.
no code implementations • 2 May 2020 • Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay Chen Tan
They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency has attracted increasing attention in the field of neuromorphic computing.
1 code implementation • 8 Feb 2020 • Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan
Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms; (3) the research on CPDP is far from mature, where it is "not difficult" to find a better alternative by combining existing transfer learning and classification techniques.
1 code implementation • 19 Nov 2019 • Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan
Brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.
no code implementations • 20 Oct 2019 • Guokun Chi, Min Jiang, Xing Gao, Weizhen Hu, Shihui Guo, Kay Chen Tan
In practical applications, it is often necessary to face online learning problems in which the data samples arrive sequentially.
no code implementations • 19 Oct 2019 • Weizhen Hu, Min Jiang, Xing Gao, Kay Chen Tan, Yiu-ming Cheung
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that their objective functions change over time or across environments.
no code implementations • 19 Oct 2019 • Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen Tan
In recent years, transfer learning has proven to be an effective approach for solving DMOPs.
no code implementations • 19 Oct 2019 • Min Jiang, Weizhen Hu, Liming Qiu, Minghui Shi, Kay Chen Tan
The algorithm uses the previously obtained POS to train an SVM and then applies the trained SVM to classify solutions of the dynamic optimization problem at the next moment, thereby generating an initial population consisting of individuals recognized as promising by the trained SVM.
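That mechanism is straightforward to sketch with scikit-learn: label previously obtained Pareto-optimal solutions as positive, other visited solutions as negative, fit an SVM, and keep the candidate solutions it classifies as promising. The candidate-sampling and sizing details here are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def initial_population(pos, non_pos, candidates, size):
    """Train an SVM on past Pareto-optimal solutions (label 1) versus
    other visited solutions (label 0), then seed the next environment
    with candidates the SVM predicts as label 1."""
    X = np.vstack([pos, non_pos])
    y = np.r_[np.ones(len(pos)), np.zeros(len(non_pos))]
    svm = SVC(kernel="rbf").fit(X, y)
    picked = candidates[svm.predict(candidates) == 1]
    return picked[:size]
```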
no code implementations • 11 Oct 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables.
no code implementations • 10 Jul 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
Recently, a growing number of works have proposed driving evolutionary algorithms with machine learning models. Usually, the performance of such model-based evolutionary algorithms depends heavily on the training quality of the adopted models. Since model training typically requires a certain amount of data (i.e., the candidate solutions generated by the algorithms), performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into "real" and "fake" samples to train the GANs; the offspring solutions are then sampled from the trained GANs. Thanks to the powerful generative ability of GANs, the proposed algorithm is capable of generating promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables, and experimental results on these test problems demonstrate its effectiveness.
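The generation loop described in the abstract reduces to a few lines. All helpers are passed in as parameters, since their details (dominance sorting, GAN training, environmental selection) depend on the concrete implementation:

```python
def gan_moea_generation(pop, split_real_fake, train_gan, sample_gan,
                        select, n_offspring):
    """One generation of a GAN-driven MOEA as described above: split
    parents into 'real' and 'fake' samples, train the GAN on them, then
    sample offspring from its generator and apply selection."""
    real, fake = split_real_fake(pop)      # e.g. nondominated vs. dominated
    gan = train_gan(real, fake)            # discriminator sees both labels
    offspring = sample_gan(gan, n_offspring)
    return select(pop + offspring)         # environmental selection
```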
1 code implementation • 2 Jul 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.
no code implementations • 4 Feb 2019 • Qiang Yu, Yanli Yao, Longbiao Wang, Huajin Tang, Jianwu Dang, Kay Chen Tan
Our framework is a unifying system with a consistent integration of three major functional parts which are sparse encoding, efficient learning and robust readout.
no code implementations • 30 Jan 2019 • Ke Li, Zilin Xiang, Kay Chen Tan
Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.
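Such a surrogate can be sketched with an off-the-shelf regressor: run the algorithm on a modest sample of configurations, then fit a model from parameters to observed performance. A random forest is used here purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_performance_surrogate(run_algorithm, param_samples):
    """Fit a cheap-to-evaluate surrogate of empirical performance.

    run_algorithm(theta) is the expensive call returning a performance
    score for parameter vector theta; param_samples is an (n, d) array.
    """
    y = np.array([run_algorithm(theta) for theta in param_samples])
    surrogate = RandomForestRegressor(n_estimators=200).fit(param_samples, y)
    return surrogate  # surrogate.predict(...) is now cheap to query
```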
no code implementations • 30 Apr 2018 • Chong Zhang, Geok Soon Hong, Jun-Hong Zhou, Kay Chen Tan, Haizhou Li, Huan Xu, Jihoon Hong, Hian-Leng Chan
For fault diagnosis, a cost-sensitive deep belief network (ECS-DBN) is applied to deal with the imbalanced-data problem in tool state estimation.
no code implementations • 28 Apr 2018 • Chong Zhang, Kay Chen Tan, Haizhou Li, Geok Soon Hong
Adaptive differential evolution is implemented as the optimization algorithm, automatically updating its parameters without the need for prior domain knowledge.
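Parameter self-adaptation in DE can be sketched in the style of jDE, where each individual carries its own F and CR that are resampled with small probability each generation. This generic scheme stands in for the paper's specific adaptation rule:

```python
import numpy as np

def adapt_parameters(F, CR, rng, tau1=0.1, tau2=0.1):
    """jDE-style self-adaptation: with probability tau1/tau2, resample an
    individual's mutation factor F in [0.1, 1.0] or crossover rate CR in
    [0, 1]; otherwise inherit the previous values. No hand tuning or
    prior domain knowledge is required."""
    F_new = np.where(rng.random(F.shape) < tau1,
                     0.1 + 0.9 * rng.random(F.shape), F)
    CR_new = np.where(rng.random(CR.shape) < tau2,
                      rng.random(CR.shape), CR)
    return F_new, CR_new
```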
no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi
In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.