no code implementations • 27 Mar 2025 • Xiaoming Xue, Liang Feng, Yinglan Feng, Rui Liu, Kai Zhang, Kay Chen Tan
Evolutionary transfer optimization (ETO) has gained popularity in research over the years due to its ability to transfer knowledge across problems and thereby address various challenges in optimization.
1 code implementation • 30 Nov 2024 • Yuchen Shi, Huaxin Pei, Liang Feng, Yi Zhang, Danya Yao
Agent faults pose a significant threat to the performance of multi-agent reinforcement learning (MARL) algorithms, introducing two key challenges.
no code implementations • 27 Sep 2024 • Yu Zhou, Xingyu Wu, Jibin Wu, Liang Feng, Kay Chen Tan
Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability.
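As a minimal illustration of the idea (not the survey's own method), the simplest form of model merging uniformly averages the parameters of models that share an architecture; the sketch below assumes PyTorch-style state dicts.

```python
# Minimal sketch: uniform parameter averaging over models that share
# one architecture. The survey covers far more sophisticated methods.
import torch

def average_merge(state_dicts):
    """Merge models with identical architectures by averaging weights."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged
```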
no code implementations • 12 Sep 2024 • Liang Feng, Ming Xu, Lihua Wen, Zhixuan Shen
Pose estimation is a crucial task in computer vision, with wide applications in autonomous driving, human motion capture, and virtual reality.
no code implementations • 12 Sep 2024 • Liang Feng, Zhixuan Shen, Lihua Wen, Shiyao Li, Ming Xu
This paper introduces GateAttentionPose, an innovative approach that enhances the UniRepLKNet architecture for pose estimation tasks.
no code implementations • 6 Sep 2024 • Yuxiao Huang, Xuebin Lv, Shenghao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
To improve the performance of evolutionary multitask optimization (EMTO), various knowledge transfer models have been developed for specific optimization tasks.
1 code implementation • 21 Aug 2024 • Xun Zhou, Xingyu Wu, Liang Feng, Zhichao Lu, Kay Chen Tan
In LAPT, an LLM is applied to automatically reason about design principles from a set of given architectures, and a principle adaptation method then refines these principles progressively based on new search results.
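A hedged sketch of such an LLM-guided, principle-based search loop is given below; `query_llm`, `sample_architectures`, and `evaluate` are hypothetical placeholders rather than the authors' actual API.

```python
# Hedged sketch of a principle-based search loop in the spirit of LAPT.
# All callables are hypothetical placeholders, not the paper's API.
def principle_guided_search(seed_archs, n_rounds,
                            query_llm, sample_architectures, evaluate):
    # Step 1: LLM summarizes design principles from seed architectures.
    principles = query_llm(
        f"Summarize design principles shared by these architectures:\n{seed_archs}"
    )
    best = None
    for _ in range(n_rounds):
        # Step 2: search for new candidates guided by current principles.
        candidates = sample_architectures(principles)
        results = [(arch, evaluate(arch)) for arch in candidates]
        best = max(results + ([best] if best else []), key=lambda r: r[1])
        # Step 3: principle adaptation based on the new search results.
        principles = query_llm(
            f"Refine these principles given new results:\n{principles}\n{results}"
        )
    return best
```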
1 code implementation • 13 Aug 2024 • Xiaoming Xue, Yao Hu, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Expensive optimization problems (EOPs) have attracted increasing research attention over the decades due to their ubiquity in a variety of practical applications.
1 code implementation • 13 Jul 2024 • Zhicheng Yang, Yiwei Wang, Yinya Huang, Zhijiang Guo, Wei Shi, Xiongwei Han, Liang Feng, Linqi Song, Xiaodan Liang, Jing Tang
Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method, namely ReSocratic.
no code implementations • 20 Jun 2024 • Sheng-hao Wu, Yuxiao Huang, Xingyu Wu, Liang Feng, Zhi-Hui Zhan, Kay Chen Tan
However, current approaches in implicit evolutionary multitasking (EMT) face challenges in adaptability, due to the use of a limited number of evolution operators and insufficient utilization of evolutionary states for performing knowledge transfer (KT).
no code implementations • 13 Jun 2024 • Yuxiao Huang, Shenghao Wu, Wenjie Zhang, Jibin Wu, Liang Feng, Kay Chen Tan
Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications, presenting a complex challenge of balancing multiple conflicting objectives.
no code implementations • 9 Apr 2024 • Yu Zhou, Xingyu Wu, Beicheng Huang, Jibin Wu, Liang Feng, Kay Chen Tan
The ability to understand causality significantly impacts the competence of large language models (LLMs) in output explanation and counterfactual reasoning, as causality reveals the underlying data distribution.
no code implementations • 9 Apr 2024 • Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, Kay Chen Tan
Large language models (LLMs) have demonstrated exceptional performance not only in natural language processing tasks but also in a great variety of non-linguistic domains.
no code implementations • 4 Mar 2024 • Yuxiao Huang, Wenjie Zhang, Liang Feng, Xingyu Wu, Kay Chen Tan
Recently, large language models (LLMs) have emerged as capable tools for addressing complex optimization challenges.
1 code implementation • 18 Jan 2024 • Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
As the first comprehensive review focused on the EA research in the era of LLMs, this paper provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs.
no code implementations • 3 Jan 2024 • Yinglan Feng, Liang Feng, Songbai Liu, Sam Kwong, Kay Chen Tan
A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions during the search process.
1 code implementation • 22 Nov 2023 • Zhicheng Yang, Yinya Huang, Jing Xiong, Liang Feng, Xiaodan Liang, Yiwei Wang, Jing Tang
Large language model (LLM) prompting, such as the use of in-context demonstrations, is a mainstream technique for eliciting strong performance on complex reasoning tasks (e.g., mathematical reasoning, commonsense reasoning), and has the potential to support human-machine collaborative scientific discovery.
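A toy illustration of in-context demonstrations is shown below; the prompt content is invented for illustration and is not taken from the paper.

```python
# Toy illustration of few-shot prompting with in-context demonstrations.
# The demonstrations and question are invented for illustration.
demonstrations = [
    ("What is 17 + 25?", "17 + 25 = 42. The answer is 42."),
    ("What is 9 * 8?", "9 * 8 = 72. The answer is 72."),
]
question = "What is 13 * 7?"

# Concatenate worked examples, then append the new question.
prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in demonstrations)
prompt += f"\n\nQ: {question}\nA:"
print(prompt)  # sent to an LLM, which continues the demonstrated pattern
```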
1 code implementation • 19 Oct 2023 • Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li
Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches for dynamic environments remain largely unexplored.
2 code implementations • 17 Apr 2023 • Xiaoming Xue, Cuie Yang, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Lastly, a benchmark suite with 12 STO problems featuring a variety of customized similarity relationships is developed using the proposed generator.
no code implementations • 20 May 2022 • Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan
However, these algorithms ignore the significance of tackling this issue from the perspective of decision variables, which leaves the algorithm unable to search across different dimensions and limits its performance.
1 code implementation • IEEE Transactions on Evolutionary Computation 2021 • Yinglan Feng, Liang Feng, Sam Kwong, Kay Chen Tan
In this way, the number of subpopulations is adaptively adjusted, and better-performing subpopulations obtain more individuals.
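One plausible reading of this reallocation idea is performance-proportional allocation; the sketch below is an assumption about the mechanism, not the paper's exact formula.

```python
# Hedged sketch: assign each subpopulation a share of the total budget
# proportional to its performance score (an assumed credit measure,
# not the paper's exact formula).
def allocate_individuals(performance, total):
    s = sum(performance)
    # Rounding may make the sizes deviate slightly from `total`;
    # each subpopulation keeps at least one individual.
    return [max(1, round(total * p / s)) for p in performance]

print(allocate_individuals([0.5, 0.3, 0.2], 100))  # -> [50, 30, 20]
```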
no code implementations • 23 Feb 2021 • Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong
This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.
no code implementations • 12 Oct 2020 • Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso
MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle latency by enabling parallel lookups.
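A hedged sketch of the lookup-reduction idea: if two small embedding tables are combined via a Cartesian product, two lookups collapse into one at the cost of extra memory. Table sizes and dimensions below are illustrative, not from the paper.

```python
# Hedged sketch: merge two small embedding tables via a Cartesian
# product so that two lookups become one. Sizes are illustrative.
import numpy as np

dim, n_a, n_b = 4, 10, 20
table_a = np.random.randn(n_a, dim)
table_b = np.random.randn(n_b, dim)

# Precompute the concatenated embedding for every (a, b) index pair.
combined = np.concatenate(
    [np.repeat(table_a, n_b, axis=0), np.tile(table_b, (n_a, 1))], axis=1
)  # shape: (n_a * n_b, 2 * dim)

a, b = 3, 7
one_lookup = combined[a * n_b + b]                       # single lookup
two_lookups = np.concatenate([table_a[a], table_b[b]])   # original path
assert np.allclose(one_lookup, two_lookups)
```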
no code implementations • 5 Jun 2020 • Jieru Zhao, Tingyuan Liang, Liang Feng, Wenchao Ding, Sharad Sinha, Wei Zhang, Shaojie Shen
To reduce the design effort and achieve the right balance, we propose FP-Stereo for building high-performance stereo matching pipelines on FPGAs automatically.
no code implementations • 18 Jan 2020 • Zhengping Liang, Jian Zhang, Liang Feng, Zexuan Zhu
However, with the growing demand for cloud services, existing EAs fail to scale to the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability.
no code implementations • 19 Oct 2019 • Zhenzhong Wang, Min Jiang, Xing Gao, Liang Feng, Weizhen Hu, Kay Chen Tan
In recent years, transfer learning has proven to be an effective approach for solving dynamic multi-objective optimization problems (DMOPs).
no code implementations • 12 Jun 2017 • Bingshui Da, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Zexuan Zhu, Chuan-Kang Ting, Ke Tang, Xin Yao
In this report, we suggest nine test problems for multi-task single-objective optimization (MTSOO), each of which consists of two single-objective optimization tasks that need to be solved simultaneously.
no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi
In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multi-objective optimization tasks that need to be solved simultaneously.