Search Results for author: Zexuan Zhu

Found 5 papers, 1 paper with code

AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models

1 code implementation • 12 Dec 2023 • Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu

Recently, Parameter Efficient Transfer Learning (PETL) has offered an efficient alternative to full fine-tuning, yet it still faces great challenges for pre-trained image restoration models due to the diversity of degradations.

Image Denoising • Image Restoration • +1
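The abstract above refers to the general PETL recipe: keep the pre-trained restoration backbone frozen and train only a small number of new parameters. Below is a minimal PyTorch sketch of that recipe; the bottleneck-adapter design, the stage count, and the channel shapes are illustrative assumptions, not the actual AdaptIR architecture.

```python
# Minimal sketch of the general PETL idea: freeze a pretrained restoration
# backbone and train only small adapters. The adapter design, stage count,
# and shapes are illustrative assumptions, NOT the AdaptIR architecture.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted after a frozen backbone stage."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.GELU()
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def add_adapters(backbone: nn.Module, channels: int) -> nn.ModuleList:
    """Freeze the whole backbone and return the only trainable parameters."""
    for p in backbone.parameters():
        p.requires_grad_(False)  # the pretrained weights stay untouched
    # Assume four stages for illustration; one adapter per stage.
    return nn.ModuleList(BottleneckAdapter(channels) for _ in range(4))
```

Only the adapters are passed to the optimizer, which is what makes the adaptation parameter-efficient compared to full fine-tuning.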

Multi-factorial Optimization for Large-scale Virtual Machine Placement in Cloud Computing

no code implementations • 18 Jan 2020 • Zhengping Liang, Jian Zhang, Liang Feng, Zexuan Zhu

However, with the growing demand for cloud services, existing EAs cannot handle the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability.

Cloud Computing • Evolutionary Algorithms
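For context, the underlying placement task is a capacity-constrained assignment of VMs to hosts. The sketch below uses a simple first-fit-decreasing baseline with a CPU-only resource model; both are illustrative assumptions, and this is not the multi-factorial EA proposed in the paper.

```python
# Minimal sketch of virtual machine placement (VMP): assign VMs to hosts
# without exceeding capacity while opening as few hosts as possible.
# First-fit-decreasing baseline with a CPU-only resource model (assumptions);
# NOT the multi-factorial evolutionary approach proposed in the paper.
from typing import List


def first_fit_decreasing(vm_demands: List[int], host_capacity: int) -> List[List[int]]:
    """Place VMs (by CPU demand) onto identical hosts, opening hosts as needed."""
    hosts: List[List[int]] = []   # each host is a list of placed VM demands
    loads: List[int] = []         # current load of each host
    for demand in sorted(vm_demands, reverse=True):
        for i, load in enumerate(loads):
            if load + demand <= host_capacity:
                hosts[i].append(demand)
                loads[i] += demand
                break
        else:                     # no open host fits: open a new one
            hosts.append([demand])
            loads.append(demand)
    return hosts


if __name__ == "__main__":
    placement = first_fit_decreasing([8, 5, 4, 3, 3, 2], host_capacity=10)
    print(len(placement), "hosts used:", placement)
```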

A Two-stage Adaptive Knowledge Transfer Evolutionary Multi-tasking Based on Population Distribution for Multi/Many-Objective Optimization

no code implementations • 3 Jan 2020 • Zhengping Liang, Weiqi Liang, Xiuju Xu, Ling Liu, Zexuan Zhu

Experimental results on multi-tasking multi-objective optimization test suites show that EMT-PD is superior to six other state-of-the-art evolutionary multi/single-tasking algorithms.

Transfer Learning

Evolutionary Multitasking for Single-objective Continuous Optimization: Benchmark Problems, Performance Metric, and Baseline Results

no code implementations • 12 Jun 2017 • Bingshui Da, Yew-Soon Ong, Liang Feng, A. K. Qin, Abhishek Gupta, Zexuan Zhu, Chuan-Kang Ting, Ke Tang, Xin Yao

In this report, we suggest nine test problems for multi-task single-objective optimization (MTSOO), each of which consists of two single-objective optimization tasks that need to be solved simultaneously.
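To make the problem format concrete, a two-task MTSOO instance pairs two single-objective functions that are evaluated through a shared, normalized search space, so one population can work on both tasks at once. The sketch below illustrates that setup; the Sphere/Rastrigin pair, box bounds, and dimensions are assumptions for illustration, not the report's nine benchmark problems.

```python
# Illustrative sketch of a two-task MTSOO instance: two single-objective
# functions evaluated through a shared, normalized search space. The
# Sphere/Rastrigin pair, bounds, and dimensions are assumptions, NOT the
# report's actual benchmark problems.
import math
import random
from typing import List, Tuple


def sphere(x: List[float]) -> float:
    return sum(v * v for v in x)


def rastrigin(x: List[float]) -> float:
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)


def decode(unified: List[float], low: float, high: float, dim: int) -> List[float]:
    """Map a [0, 1]^D unified-space vector into a task's own box and dimension."""
    return [low + u * (high - low) for u in unified[:dim]]


def evaluate_both(unified: List[float]) -> Tuple[float, float]:
    """Evaluate one unified-space individual on both tasks simultaneously."""
    f1 = sphere(decode(unified, -100.0, 100.0, dim=30))
    f2 = rastrigin(decode(unified, -5.12, 5.12, dim=20))
    return f1, f2


if __name__ == "__main__":
    individual = [random.random() for _ in range(30)]  # D = max task dimension
    print(evaluate_both(individual))
```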

Concept Drift Adaptation by Exploiting Historical Knowledge

no code implementations • 12 Feb 2017 • Yu Sun, Ke Tang, Zexuan Zhu, Xin Yao

Incremental learning with concept drift has often been tackled by ensemble methods, where models built in the past can be re-trained to obtain new models for the current data.

Ensemble Learning • Incremental Learning • +1
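The chunk-based ensemble idea described in the abstract can be sketched as follows: keep classifiers built on past data chunks and weight them by how well they fit the current chunk. The base learner, the accuracy-based weighting, and the binary-label voting are illustrative assumptions, not the paper's specific method.

```python
# Minimal sketch of a chunk-based ensemble for concept drift: keep classifiers
# trained on past chunks and weight them by accuracy on the current chunk.
# Base learner, weighting, and binary voting are assumptions, NOT the paper's
# specific method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class ChunkEnsemble:
    def __init__(self, max_models: int = 5):
        self.models = []              # classifiers from past data chunks
        self.weights = []             # one weight per retained model
        self.max_models = max_models

    def update(self, X_chunk: np.ndarray, y_chunk: np.ndarray) -> None:
        """Train a model on the current chunk and re-weight the old ones."""
        new_model = DecisionTreeClassifier(max_depth=5).fit(X_chunk, y_chunk)
        self.models.append(new_model)
        if len(self.models) > self.max_models:
            self.models.pop(0)        # drop the oldest model
        # Weight every model by its accuracy on the *current* chunk, so models
        # built under outdated concepts contribute less to predictions.
        self.weights = [m.score(X_chunk, y_chunk) for m in self.models]

    def predict(self, X: np.ndarray) -> np.ndarray:
        """Weighted majority vote over {0, 1} labels; call update() first."""
        votes = np.zeros((len(self.models), len(X)))
        for i, (model, w) in enumerate(zip(self.models, self.weights)):
            votes[i] = w * model.predict(X)
        return (votes.sum(axis=0) >= 0.5 * sum(self.weights)).astype(int)
```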
