no code implementations • 7 May 2025 • Liu-Yue Luo, Zhi-Hui Zhan, Kay Chen Tan, Jun Zhang
We apply SDMC to two algorithm types that are unsuitable for traditional methods, confirming its effectiveness in global convergence analysis.
no code implementations • 13 Apr 2025 • Zongxian Yang, Jiayu Qian, Zhi-An Huang, Kay Chen Tan
Large language models (LLMs) face significant challenges in specialized biomedical tasks due to the inherent complexity of medical reasoning and the sensitive nature of clinical data.
no code implementations • 27 Mar 2025 • Xiaoming Xue, Liang Feng, Yinglan Feng, Rui Liu, Kai Zhang, Kay Chen Tan
Evolutionary transfer optimization (ETO) has been gaining popularity in research over the years due to its outstanding knowledge transfer ability to address various challenges in optimization.
1 code implementation • 13 Feb 2025 • Chenxiang Ma, Xinyi Chen, Yanchen Li, Qu Yang, Yujie Wu, Guoqi Li, Gang Pan, Huajin Tang, Kay Chen Tan, Jibin Wu
Temporal processing is fundamental for both biological and artificial intelligence systems, as it enables the comprehension of dynamic environments and facilitates timely responses.
1 code implementation • 4 Feb 2025 • Yu-An Huang, Yao Hu, Yue-Chao Li, Xiyue Cao, Xinyuan Li, Kay Chen Tan, Zhu-Hong You, Zhi-An Huang
Functional MRI (fMRI) and single-cell transcriptomics are pivotal in Alzheimer's disease (AD) research, each providing unique insights into neural function and molecular mechanisms.
2 code implementations • 25 Jan 2025 • Bowen Zheng, Ran Cheng, Kay Chen Tan
In addition to its performance-oriented design, EvoRL offers a comprehensive platform for EvoRL research, encompassing implementations of traditional RL algorithms (e.g., A2C, PPO, DDPG, TD3, SAC), Evolutionary Algorithms (e.g., CMA-ES, OpenES, ARS), and hybrid EvoRL paradigms such as Evolutionary-guided RL (e.g., ERL, CEM-RL) and Population-Based AutoRL (e.g., PBT).
no code implementations • 6 Jan 2025 • Yuxin Ma, Zherui Zhang, Ran Cheng, Yaochu Jin, Kay Chen Tan
In the domain of multi-objective optimization, evolutionary algorithms are distinguished by their capability to generate a diverse population of solutions that navigate the trade-offs inherent among competing objectives.
no code implementations • 10 Nov 2024 • Jiaxin Chen, Jinliang Ding, Kay Chen Tan, Jiancheng Qian, Ke Li
Finally, an MBLO algorithm is presented to solve this problem effectively while achieving high adaptability.
2 code implementations • 1 Nov 2024 • Zeyuan Ma, Hongshu Guo, Yue-Jiao Gong, Jun Zhang, Kay Chen Tan
Based on the evaluation results, we meticulously identify a set of core designs that enhance the generalization and learning effectiveness of MetaBBO.
1 code implementation • 7 Oct 2024 • Xiang Hao, Chenxiang Ma, Qu Yang, Jibin Wu, Kay Chen Tan
In recent years, deep learning-based methods have significantly improved speech enhancement performance, but they often come with a high computational cost, which is prohibitive for a large number of edge devices, such as headsets and hearing aids.
no code implementations • 27 Sep 2024 • Yu Zhou, Xingyu Wu, Jibin Wu, Liang Feng, Kay Chen Tan
Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability.
no code implementations • 6 Sep 2024 • Yuxiao Huang, Xuebin Lv, Shenghao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
To facilitate EMTO's performance, various knowledge transfer models have been developed for specific optimization tasks.
no code implementations • 27 Aug 2024 • Xinyi Chen, Jibin Wu, Chenxiang Ma, Yinsong Yan, Yujie Wu, Kay Chen Tan
Our experimental results on a wide range of pattern recognition tasks demonstrate the superiority of PMSN.
1 code implementation • 21 Aug 2024 • Xun Zhou, Xingyu Wu, Liang Feng, Zhichao Lu, Kay Chen Tan
In LAPT, the LLM is applied to automatically reason about the design principles from a set of given architectures, after which a principle adaptation method refines these principles progressively based on the new search results.
1 code implementation • 13 Aug 2024 • Xiaoming Xue, Yao Hu, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Expensive optimization problems (EOPs) have attracted increasing research attention over the decades due to their ubiquity in a variety of practical applications.
no code implementations • 20 Jun 2024 • Sheng-hao Wu, Yuxiao Huang, Xingyu Wu, Liang Feng, Zhi-Hui Zhan, Kay Chen Tan
However, current approaches in implicit EMT face challenges in adaptability, due to the use of a limited number of evolution operators and insufficient utilization of evolutionary states for performing KT.
no code implementations • 13 Jun 2024 • Yuxiao Huang, Shenghao Wu, Wenjie Zhang, Jibin Wu, Liang Feng, Kay Chen Tan
Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications, presenting a complex challenge of balancing multiple conflicting objectives.
no code implementations • 25 May 2024 • Zhenzhong Wang, Zehui Lin, WanYu Lin, Ming Yang, Minggang Zeng, Kay Chen Tan
Providing explainable molecular property predictions is critical for many scientific domains, such as drug discovery and material science.
no code implementations • 24 May 2024 • Haokai Hong, WanYu Lin, Kay Chen Tan
This paper proposes a new 3D molecule generation framework, called GOAT, for fast and effective 3D molecule generation based on the flow-matching optimal transport objective.
no code implementations • 18 May 2024 • Xingyu Wu, Yan Zhong, Jibin Wu, Yuxiao Huang, Sheng-hao Wu, Kay Chen Tan
In the algorithm selection research, the discussion surrounding algorithm features has been significantly overshadowed by the emphasis on problem features.
no code implementations • 9 May 2024 • Zeyi Wang, Songbai Liu, Jianyong Chen, Kay Chen Tan
Solutions' quality is assessed based on their constraint violations and objective-based performance.
no code implementations • 19 Apr 2024 • Zhenzhong Wang, Qingyuan Zeng, WanYu Lin, Min Jiang, Kay Chen Tan
While graph neural networks (GNNs) have become the de-facto standard for graph-based node classification, they impose a strong assumption on the availability of sufficient labeled samples.
no code implementations • 9 Apr 2024 • Yu Zhou, Xingyu Wu, Beicheng Huang, Jibin Wu, Liang Feng, Kay Chen Tan
The ability to understand causality significantly impacts the competence of large language models (LLMs) in output explanation and counterfactual reasoning, as causality reveals the underlying data distribution.
no code implementations • 9 Apr 2024 • Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, Kay Chen Tan
Large language models (LLMs) have demonstrated exceptional performance not only in natural language processing tasks but also in a great variety of non-linguistic domains.
no code implementations • 1 Apr 2024 • Haokai Hong, WanYu Lin, Kay Chen Tan
These structure variations are encoded with an equivariant encoder and treated as domain supervisors to control denoising.
no code implementations • 4 Mar 2024 • Yuxiao Huang, Wenjie Zhang, Liang Feng, Xingyu Wu, Kay Chen Tan
Recently, large language models (LLMs) have notably positioned themselves as capable tools for addressing complex optimization challenges.
1 code implementation • 27 Feb 2024 • Chenxiang Ma, Jibin Wu, Chenyang Si, Kay Chen Tan
AugLocal constructs each hidden layer's auxiliary network by uniformly selecting a small subset of layers from its subsequent network layers to enhance their synergy.
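The uniform-selection idea described in this snippet can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; `select_auxiliary_layers` is a hypothetical helper that picks evenly spaced layer indices from the layers that follow a given hidden layer.

```python
def select_auxiliary_layers(hidden_idx, num_layers, depth):
    """Uniformly pick up to `depth` layer indices from the layers after
    `hidden_idx` (a hypothetical sketch of the uniform selection the
    AugLocal abstract describes)."""
    subsequent = list(range(hidden_idx + 1, num_layers))
    if depth >= len(subsequent):
        return subsequent
    # Evenly spaced positions across the subsequent layers.
    step = (len(subsequent) - 1) / (depth - 1) if depth > 1 else 0
    return [subsequent[round(i * step)] for i in range(depth)]
```

For example, for hidden layer 0 of a 10-layer network with an auxiliary depth of 3, this picks layers 1, 5, and 9, spanning the remaining depth uniformly.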
no code implementations • 25 Feb 2024 • Yujia Yin, Xinyi Chen, Chenxiang Ma, Jibin Wu, Kay Chen Tan
The brain-inspired Spiking Neural Networks (SNNs) have garnered considerable research interest due to their superior performance and energy efficiency in processing temporal signals.
1 code implementation • 18 Jan 2024 • Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
As the first comprehensive review focused on the EA research in the era of LLMs, this paper provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs.
no code implementations • 3 Jan 2024 • Yinglan Feng, Liang Feng, Songbai Liu, Sam Kwong, Kay Chen Tan
A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions during the search process.
1 code implementation • 22 Nov 2023 • Xingyu Wu, Yan Zhong, Jibin Wu, Bingbing Jiang, Kay Chen Tan
The high-dimensional algorithm representation extracted by LLM, after undergoing a feature selection module, is combined with the problem representation and passed to the similarity calculation module.
no code implementations • 23 Oct 2023 • Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li
With TTFS coding, we can achieve computational savings of up to orders of magnitude over ANNs and other rate-based SNNs.
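Time-to-first-spike (TTFS) coding represents each value by the latency of a single spike, which is where the computational savings over rate coding come from. The following is a minimal sketch of one common linear-latency variant, with stronger inputs firing earlier; `ttfs_encode` is an illustrative helper, and actual TTFS schemes in the paper may differ.

```python
def ttfs_encode(intensity, t_max=1.0):
    """Map a normalized intensity in [0, 1] to a single spike time:
    stronger inputs spike earlier, zero input emits no spike
    (a minimal linear-latency sketch of TTFS coding)."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be normalized to [0, 1]")
    if intensity == 0.0:
        return None  # silent neuron: no spike at all
    return t_max * (1.0 - intensity)
```

A maximal input spikes at time 0, a half-strength input at `t_max / 2`, and a zero input stays silent, so each neuron fires at most once per stimulus.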
1 code implementation • 19 Oct 2023 • Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li
Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches in dynamic environments remain largely unexplored.
1 code implementation • 11 Oct 2023 • Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan
We demonstrate that textual descriptions alone can effectively serve as cues for extraction, thus addressing privacy concerns and reducing dependency on voiceprints.
no code implementations • 29 Aug 2023 • Xinyi Chen, Jibin Wu, Huajin Tang, Qinyuan Ren, Kay Chen Tan
The human brain exhibits remarkable abilities in integrating temporally distant sensory inputs for decision-making.
1 code implementation • 25 Aug 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
no code implementations • 14 Jul 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
1 code implementation • 22 Jun 2023 • Junjia Liu, Zhihao LI, WanYu Lin, Sylvain Calinon, Kay Chen Tan, Fei Chen
Soft object manipulation tasks in domestic scenes pose a significant challenge for existing robotic skill learning techniques due to their complex dynamics and variable shape characteristics.
1 code implementation • 26 May 2023 • Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan
As an initial exploration in this direction, we propose a hybrid neural coding and learning framework, which encompasses a neural coding zoo with diverse neural coding schemes discovered in neuroscience.
2 code implementations • 17 Apr 2023 • Xiaoming Xue, Cuie Yang, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Lastly, a benchmark suite with 12 STO problems featured by a variety of customized similarity relationships is developed using the proposed generator.
no code implementations • 12 Apr 2023 • Wei-neng Chen, Feng-Feng Wei, Tian-Fang Zhao, Kay Chen Tan, Jun Zhang
Based on this taxonomy, existing studies on DEC are reviewed in terms of purpose, parallel structure of the algorithm, parallel model for implementation, and the implementation environment.
no code implementations • 8 Apr 2023 • Haokai Hong, Min Jiang, Qiuzhen Lin, Kay Chen Tan
To sample the most suitable evolutionary directions for different solutions, Thompson sampling is adopted for its effectiveness in recommending from a very large number of items within limited historical evaluations.
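Thompson sampling, as used in this snippet, maintains a posterior over each option's success rate and picks the option whose posterior sample is largest. Below is a standard Beta-Bernoulli sketch of the idea, not the paper's actual recommender; `thompson_pick` and the success/failure counters are illustrative names.

```python
import random

def thompson_pick(successes, failures, rng=random):
    """Choose an arm (e.g., an evolutionary direction) by Thompson
    sampling: draw one sample from each arm's Beta posterior and
    return the index of the largest draw."""
    samples = [rng.betavariate(s + 1, f + 1)  # Beta(1, 1) uniform prior
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)
```

An arm with a strong success history is sampled most of the time, while poorly explored arms still get occasional draws, which is exactly the exploration-exploitation balance that makes the method effective with limited historical evaluations.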
1 code implementation • 29 Jan 2023 • Beichen Huang, Ran Cheng, Zhuozhao Li, Yaochu Jin, Kay Chen Tan
Inspired by natural evolutionary processes, Evolutionary Computation (EC) has established itself as a cornerstone of Artificial Intelligence.
no code implementations • 28 Dec 2022 • Yuwei Ou, Xiangning Xie, Shangce Gao, Yanan Sun, Kay Chen Tan, Jiancheng Lv
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense.
1 code implementation • 17 Dec 2022 • Lingjie Li, Manlin Xuan, Qiuzhen Lin, Min Jiang, Zhong Ming, Kay Chen Tan
Thus, this paper devises a new EMT algorithm for FS in high-dimensional classification, which first adopts different filtering methods to produce multiple tasks and then modifies a competitive swarm optimizer to efficiently solve these related tasks via knowledge transfer.
2 code implementations • 8 Aug 2022 • Zhichao Lu, Ran Cheng, Yaochu Jin, Kay Chen Tan, Kalyanmoy Deb
From an optimization point of view, the NAS tasks involving multiple design criteria are intrinsically multiobjective optimization problems; hence, it is reasonable to adopt evolutionary multiobjective optimization (EMO) algorithms for tackling them.
no code implementations • 3 Jul 2022 • Xiangning Xie, Yuqiao Liu, Yanan Sun, Mengjie Zhang, Kay Chen Tan
Performance predictors can greatly alleviate the prohibitive cost of NAS by directly predicting the performance of DNNs.
no code implementations • 23 Jun 2022 • Songbai Liu, Qiuzhen Lin, Jianqiang Li, Kay Chen Tan
This paper begins with a general taxonomy of scaling-up MOPs and learnable MOEAs, followed by an analysis of the challenges that these MOPs pose to traditional MOEAs.
no code implementations • 20 May 2022 • Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan
However, these algorithms overlook the significance of tackling this issue from the perspective of decision variables, leaving them unable to search across different dimensions and limiting their performance.
1 code implementation • CVPR 2022 • Weibo Shu, Jia Wan, Kay Chen Tan, Sam Kwong, Antoni B. Chan
By transforming the density map into the frequency domain and using the nice properties of the characteristic function, we propose a novel method that is simple, effective, and efficient.
1 code implementation • 15 Oct 2021 • Songbai Liu, Qiuzhen Lin, Kay Chen Tan, Qing Li
Evolutionary transfer multiobjective optimization (ETMO) has become a hot research topic in the field of evolutionary computation, based on the fact that knowledge learned and transferred across related optimization exercises can improve the efficiency of others.
no code implementations • 16 Jul 2021 • Haokai Hong, Kai Ye, Min Jiang, Donglin Cao, Kay Chen Tan
At the same time, due to the adoption of an individual-based evolution mechanism, the computational cost of the proposed method is independent of the number of decision variables, thus avoiding the problem of exponential growth of the search space.
no code implementations • 22 May 2021 • Ye Tian, Xingyi Zhang, Cheng He, Kay Chen Tan, Yaochu Jin
In the past three decades, a large number of metaheuristics have been proposed and shown high performance in solving complex optimization problems.
no code implementations • 23 Feb 2021 • Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong
This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.
no code implementations • Journal - IEEE Transactions on Neural Networks and Learning Systems 2021 • Cuie Yang, Yiu-ming Cheung, Jinliang Ding, Kay Chen Tan
Then, a domain-wise weighted ensemble is introduced to combine the source and target models to select useful knowledge of each domain.
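The combination step described here amounts to a weighted average of per-domain model outputs. The sketch below is a generic illustration under that assumption; `weighted_ensemble` is a hypothetical helper, and the paper's domain weights are learned rather than fixed.

```python
def weighted_ensemble(predictions, weights):
    """Combine per-domain model outputs with domain-wise weights via a
    normalized weighted average (a generic sketch of the ensemble
    step; the actual weighting scheme is learned per domain)."""
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(p * w for p, w in zip(predictions, weights)) / total
```

For instance, combining a source-model output of 1.0 (weight 3) with a target-model output of 0.0 (weight 1) yields 0.75, so the more trusted domain dominates the final prediction.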
no code implementations • 8 Jan 2021 • Zhenzhong Wang, Haokai Hong, Kai Ye, Min Jiang, Kay Chen Tan
However, traditional evolutionary algorithms for solving LSMOPs have some deficiencies in dealing with this structural manifold, resulting in poor diversity, local optima, and inefficient searches.
no code implementations • 24 Dec 2020 • Min Jiang, Guokun Chi, Geqiang Pan, Shihui Guo, Kay Chen Tan
Given the high dimensions of control space, this problem is particularly challenging for multi-legged robots walking in complex and unknown environments.
no code implementations • 25 Aug 2020 • Yuqiao Liu, Yanan sun, Bing Xue, Mengjie Zhang, Gary G. Yen, Kay Chen Tan
Deep Neural Networks (DNNs) have achieved great success in many applications.
no code implementations • 2 Jul 2020 • Jibin Wu, Cheng-Lin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan
In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.
no code implementations • 11 May 2020 • Qiang Yu, Shiming Song, Chenxiang Ma, Linqiang Pan, Kay Chen Tan
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in the spiking ones.
no code implementations • 5 May 2020 • Qiang Yu, Chenxiang Ma, Shiming Song, Gaoyan Zhang, Jianwu Dang, Kay Chen Tan
We examine the performance of our methods based on MNIST, Fashion-MNIST and CIFAR10 datasets.
no code implementations • 2 May 2020 • Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay Chen Tan
They are also believed to play an essential role in the low-power consumption of biological systems, whose efficiency attracts increasing attention in the field of neuromorphic computing.
1 code implementation • 8 Feb 2020 • Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan
Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms; (3) the research on CPDP is far from mature, where it is "not difficult" to find a better alternative by combining existing transfer learning and classification techniques.
1 code implementation • 19 Nov 2019 • Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan
The brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.
no code implementations • 20 Oct 2019 • Guokun Chi, Min Jiang, Xing Gao, Weizhen Hu, Shihui Guo, Kay Chen Tan
In practical applications, it is often necessary to face online learning problems in which the data samples arrive sequentially.
no code implementations • 19 Oct 2019 • Weizhen Hu, Min Jiang, Xing Gao, Kay Chen Tan, Yiu-ming Cheung
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment.
no code implementations • 19 Oct 2019 • Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen Tan
In recent years, transfer learning has been proven to be a kind of effective approach in solving DMOPs.
no code implementations • 19 Oct 2019 • Min Jiang, Weizhen Hu, Liming Qiu, Minghui Shi, Kay Chen Tan
The algorithm uses the previously obtained POS to train an SVM, then applies the trained SVM to classify the solutions of the dynamic optimization problem at the next moment, thereby generating an initial population composed of the different individuals recognized by the trained SVM.
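The seeding procedure this snippet describes, classify candidates against the previous Pareto-optimal set (POS) and keep the positively labeled ones as the next initial population, can be sketched without the full SVM machinery. In the sketch below a nearest-centroid rule stands in for the trained SVM, and `seed_population` with its arguments are hypothetical names for illustration only.

```python
import math
import random

def seed_population(candidates, pos_points, non_pos_points, size):
    """Seed the next environment's population with candidates that a
    simple classifier labels as POS-like. A nearest-centroid rule
    stands in for the paper's trained SVM."""
    def centroid(points):
        dim = len(points[0])
        return [sum(p[d] for p in points) / len(points) for d in range(dim)]

    c_pos = centroid(pos_points)      # center of previous POS solutions
    c_neg = centroid(non_pos_points)  # center of non-POS solutions
    # Keep candidates closer to the POS centroid than the non-POS one.
    labeled = [x for x in candidates
               if math.dist(x, c_pos) <= math.dist(x, c_neg)]
    random.shuffle(labeled)           # avoid ordering bias in the seed
    return labeled[:size]
```

Candidates resembling the old POS survive the filter and form the initial population after the environment changes, while clearly dominated regions of the search space are skipped from the start.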
no code implementations • 11 Oct 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables.
no code implementations • 10 Jul 2019 • Cheng He, Shihua Huang, Ran Cheng, Kay Chen Tan, Yaochu Jin
Recently, more and more works have proposed to drive evolutionary algorithms using machine learning models. Usually, the performance of such model-based evolutionary algorithms is highly dependent on the training quality of the adopted models. Since model training usually requires a certain amount of data (i.e., the candidate solutions generated by the algorithms), the performance deteriorates rapidly as the problem scale increases, due to the curse of dimensionality. To address this issue, we propose a multi-objective evolutionary algorithm driven by generative adversarial networks (GANs). At each generation of the proposed algorithm, the parent solutions are first classified into "real" and "fake" samples to train the GANs; then the offspring solutions are sampled by the trained GANs. Thanks to the powerful generative ability of the GANs, our proposed algorithm is capable of generating promising offspring solutions in a high-dimensional decision space with limited training data. The proposed algorithm is tested on 10 benchmark problems with up to 200 decision variables. Experimental results on these test problems demonstrate the effectiveness of the proposed algorithm.
1 code implementation • 2 Jul 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.
no code implementations • 4 Feb 2019 • Qiang Yu, Yanli Yao, Longbiao Wang, Huajin Tang, Jianwu Dang, Kay Chen Tan
Our framework is a unifying system with a consistent integration of three major functional parts which are sparse encoding, efficient learning and robust readout.
no code implementations • 30 Jan 2019 • Ke Li, Zilin Xiang, Kay Chen Tan
Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.
no code implementations • 30 Apr 2018 • Chong Zhang, Geok Soon Hong, Jun-Hong Zhou, Kay Chen Tan, Haizhou Li, Huan Xu, Jihoon Hong, Hian-Leng Chan
For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation.
no code implementations • 28 Apr 2018 • Chong Zhang, Kay Chen Tan, Haizhou Li, Geok Soon Hong
Adaptive differential evolution optimization is implemented as the optimization algorithm that automatically updates its corresponding parameters without the need of prior domain knowledge.
no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi
In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.