Search Results for author: Liang Feng

Found 28 papers, 9 papers with code

A Theoretical Analysis of Analogy-Based Evolutionary Transfer Optimization

no code implementations • 27 Mar 2025 • Xiaoming Xue, Liang Feng, Yinglan Feng, Rui Liu, Kai Zhang, Kay Chen Tan

Evolutionary transfer optimization (ETO) has been gaining popularity in research over the years due to its outstanding knowledge transfer ability to address various challenges in optimization.

Transfer Learning

Towards Fault Tolerance in Multi-Agent Reinforcement Learning

1 code implementation • 30 Nov 2024 • Yuchen Shi, Huaxin Pei, Liang Feng, Yi Zhang, Danya Yao

Agent faults pose a significant threat to the performance of multi-agent reinforcement learning (MARL) algorithms, introducing two key challenges.

Multi-agent Reinforcement Learning reinforcement-learning +1

HM3: Hierarchical Multi-Objective Model Merging for Pretrained Models

no code implementations • 27 Sep 2024 • Yu Zhou, Xingyu Wu, Jibin Wu, Liang Feng, Kay Chen Tan

Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability.

Code Generation Mathematical Reasoning

GatedUniPose: A Novel Approach for Pose Estimation Combining UniRepLKNet and Gated Convolution

no code implementations • 12 Sep 2024 • Liang Feng, Ming Xu, Lihua Wen, Zhixuan Shen

Pose estimation is a crucial task in computer vision, with wide applications in autonomous driving, human motion capture, and virtual reality.

Autonomous Driving Pose Estimation

GateAttentionPose: Enhancing Pose Estimation with Agent Attention and Improved Gated Convolutions

no code implementations • 12 Sep 2024 • Liang Feng, Zhixuan Shen, Lihua Wen, Shiyao Li, Ming Xu

This paper introduces GateAttentionPose, an innovative approach that enhances the UniRepLKNet architecture for pose estimation tasks.

Autonomous Driving Computational Efficiency +1

Advancing Automated Knowledge Transfer in Evolutionary Multitasking via Large Language Models

no code implementations • 6 Sep 2024 • Yuxiao Huang, Xuebin Lv, Shenghao Wu, Jibin Wu, Liang Feng, Kay Chen Tan

To improve EMTO's performance, various knowledge transfer models have been developed for specific optimization tasks.

Transfer Learning

Design Principle Transfer in Neural Architecture Search via Large Language Models

1 code implementation • 21 Aug 2024 • Xun Zhou, Xingyu Wu, Liang Feng, Zhichao Lu, Kay Chen Tan

In LAPT, an LLM is applied to automatically infer design principles from a set of given architectures, and a principle adaptation method then refines these principles progressively based on new search results.

Language Modelling Large Language Model +1

Surrogate-Assisted Search with Competitive Knowledge Transfer for Expensive Optimization

1 code implementation • 13 Aug 2024 • Xiaoming Xue, Yao Hu, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan

Expensive optimization problems (EOPs) have attracted increasing research attention over the decades due to their ubiquity in a variety of practical applications.

Evolutionary Algorithms Transfer Learning

OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling

1 code implementation • 13 Jul 2024 • Zhicheng Yang, Yiwei Wang, Yinya Huang, Zhijiang Guo, Wei Shi, Xiongwei Han, Liang Feng, Linqi Song, Xiaodan Liang, Jing Tang

Furthermore, to alleviate data scarcity for optimization problems and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we propose a data synthesis method, namely ReSocratic.

Benchmarking Math +1

Learning to Transfer for Evolutionary Multitasking

no code implementations • 20 Jun 2024 • Sheng-hao Wu, Yuxiao Huang, Xingyu Wu, Liang Feng, Zhi-Hui Zhan, Kay Chen Tan

However, current approaches in implicit EMT face challenges in adaptability, due to the use of a limited number of evolution operators and insufficient utilization of evolutionary states for performing KT.

Evolutionary Algorithms Transfer Learning

Autonomous Multi-Objective Optimization Using Large Language Model

no code implementations • 13 Jun 2024 • Yuxiao Huang, Shenghao Wu, Wenjie Zhang, Jibin Wu, Liang Feng, Kay Chen Tan

Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications, presenting a complex challenge of balancing multiple conflicting objectives.

Evolutionary Algorithms Language Modeling +2

CausalBench: A Comprehensive Benchmark for Causal Learning Capability of LLMs

no code implementations • 9 Apr 2024 • Yu Zhou, Xingyu Wu, Beicheng Huang, Jibin Wu, Liang Feng, Kay Chen Tan

The ability to understand causality significantly impacts the competence of large language models (LLMs) in output explanation and counterfactual reasoning, as causality reveals the underlying data distribution.

counterfactual Counterfactual Reasoning +1

Exploring the True Potential: Evaluating the Black-box Optimization Capability of Large Language Models

no code implementations • 9 Apr 2024 • Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, Kay Chen Tan

Large language models (LLMs) have demonstrated exceptional performance not only in natural language processing tasks but also in a great variety of non-linguistic domains.

Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap

1 code implementation • 18 Jan 2024 • Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, Kay Chen Tan

As the first comprehensive review focused on the EA research in the era of LLMs, this paper provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs.

Code Generation Evolutionary Algorithms +5

Towards Multi-Objective High-Dimensional Feature Selection via Evolutionary Multitasking

no code implementations • 3 Jan 2024 • Yinglan Feng, Liang Feng, Songbai Liu, Sam Kwong, Kay Chen Tan

A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions during the search process.

feature selection Transfer Learning

AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations

1 code implementation • 22 Nov 2023 • Zhicheng Yang, Yinya Huang, Jing Xiong, Liang Feng, Xiaodan Liang, Yiwei Wang, Jing Tang

Prompting large language models, such as with in-context demonstrations, is a mainstream technique for invoking LLMs to perform reliable, high-performance complex reasoning (e.g., mathematical reasoning, commonsense reasoning), and has the potential to enable further human-machine collaborative scientific discovery.

Common Sense Reasoning GSM8K +4

Solving Expensive Optimization Problems in Dynamic Environments with Meta-learning

1 code implementation • 19 Oct 2023 • Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li

Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches in dynamic environments remain largely unexplored.

Bayesian Optimization Meta-Learning

A Scalable Test Problem Generator for Sequential Transfer Optimization

2 code implementations • 17 Apr 2023 • Xiaoming Xue, Cuie Yang, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan

Lastly, a benchmark suite with 12 STO problems featured by a variety of customized similarity relationships is developed using the proposed generator.

Balancing Exploration and Exploitation for Solving Large-scale Multiobjective Optimization via Attention Mechanism

no code implementations • 20 May 2022 • Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin, Kay Chen Tan

However, these algorithms overlook the importance of tackling this issue from the perspective of decision variables, which leaves them unable to search across different dimensions and limits their performance.

Evolutionary Algorithms Multiobjective Optimization

Multi-Space Evolutionary Search for Large-Scale Optimization

no code implementations • 23 Feb 2021 • Liang Feng, Qingxia Shang, Yaqing Hou, Kay Chen Tan, Yew-Soon Ong

This paper thus proposes a new search paradigm, namely the multi-space evolutionary search, to enhance the existing evolutionary search methods for solving large-scale optimization problems.

Dimensionality Reduction Evolutionary Algorithms
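The paper's algorithm is not reproduced here, but the general idea of searching a reduced space while evaluating in the original space can be sketched as follows; the random-projection mapping, the sphere objective, and the (1+1) evolution strategy are all illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 100, 5                          # original and reduced dimensionality

def sphere(x):
    return float(np.sum(x ** 2))       # toy objective, minimum at the origin

# Fixed linear map from the small search space back to the original space.
P = rng.standard_normal((D, d)) / np.sqrt(d)

z = rng.standard_normal(d)             # search variable lives in the small space
f0 = best = sphere(P @ z)              # fitness is still evaluated in R^D
for _ in range(2000):                  # simple (1+1) evolution strategy
    cand = z + 0.1 * rng.standard_normal(d)
    f = sphere(P @ cand)
    if f < best:
        z, best = cand, f

assert best < f0                       # search in the reduced space made progress
```

Mutating only the 5 latent variables keeps the search tractable, while every candidate is scored in the full 100-dimensional problem space, which is the two-space division of labor the paragraph above describes.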

MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions

no code implementations • 12 Oct 2020 • Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso

MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups.

Recommendation Systems
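The first of these two ideas, cutting the number of embedding lookups by merging tables, can be illustrated with a toy sketch; the table sizes, the cartesian-product merge, and all names below are illustrative assumptions, not MicroRec's actual data structures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small categorical features with 4 and 8 possible ids, embedding dim 2.
emb_a = rng.standard_normal((4, 2))
emb_b = rng.standard_normal((8, 2))

def lookup_separate(ida, idb):
    # Baseline: two memory accesses per query.
    return np.concatenate([emb_a[ida], emb_b[idb]])

# Merged table: precompute the cartesian product of the two tables,
# so one lookup retrieves both feature vectors at once.
merged = np.concatenate(
    [np.repeat(emb_a, 8, axis=0), np.tile(emb_b, (4, 1))], axis=1
)  # shape (32, 4)

def lookup_merged(ida, idb):
    return merged[ida * 8 + idb]       # single memory access

assert np.allclose(lookup_separate(2, 5), lookup_merged(2, 5))
```

The merged table trades memory (its row count is the product of the two vocabularies, so this only pays off for small tables) for one memory access per feature pair, which is the kind of lookup reduction described above.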

FP-Stereo: Hardware-Efficient Stereo Vision for Embedded Applications

no code implementations • 5 Jun 2020 • Jieru Zhao, Tingyuan Liang, Liang Feng, Wenchao Ding, Sharad Sinha, Wei Zhang, Shaojie Shen

To reduce the design effort and achieve the right balance, we propose FP-Stereo for building high-performance stereo matching pipelines on FPGAs automatically.

C++ code Depth Estimation +1

Multi-factorial Optimization for Large-scale Virtual Machine Placement in Cloud Computing

no code implementations • 18 Jan 2020 • Zhengping Liang, Jian Zhang, Liang Feng, Zexuan Zhu

However, with the growing demand for cloud services, existing EAs fail to handle the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability.

Cloud Computing Evolutionary Algorithms

Evolutionary Multitasking for Single-objective Continuous Optimization: Benchmark Problems, Performance Metric, and Baseline Results

no code implementations • 12 Jun 2017 • Bingshui Da, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Zexuan Zhu, Chuan-Kang Ting, Ke Tang, Xin Yao

In this report, we suggest nine test problems for multi-task single-objective optimization (MTSOO), each of which consists of two single-objective optimization tasks that need to be solved simultaneously.

Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results

no code implementations • 8 Jun 2017 • Yuan Yuan, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Bingshui Da, Qingfu Zhang, Kay Chen Tan, Yaochu Jin, Hisao Ishibuchi

In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously.

Multiobjective Optimization
