Search Results for author: Changhe Li

Found 6 papers, 2 papers with code

Evolutionary Dynamic Optimization Laboratory: A MATLAB Optimization Platform for Education and Experimentation in Dynamic Environments

1 code implementation • 24 Aug 2023 • Mai Peng, Zeneng She, Delaram Yazdani, Danial Yazdani, Wenjian Luo, Changhe Li, Juergen Branke, Trung Thanh Nguyen, Amir H. Gandomi, Yaochu Jin, Xin Yao

In this paper, to assist researchers in performing experiments and comparing their algorithms against several evolutionary dynamic optimization algorithms (EDOAs), we develop an open-source MATLAB platform for EDOAs, called Evolutionary Dynamic Optimization LABoratory (EDOLAB).
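Platforms like EDOLAB report standard performance indicators for dynamic optimization. As a rough illustration of the kind of metric involved, here is a minimal Python sketch of the offline error (the mean, over all fitness evaluations, of the best error found since the last environment change); this is illustrative only, not EDOLAB's actual MATLAB code, and the function and argument names are hypothetical.

```python
import numpy as np

def offline_error(eval_errors, change_points):
    """Offline error: the mean, over all fitness evaluations, of the best
    (smallest) error found since the most recent environment change.

    eval_errors[t]  -- |f(best-so-far at evaluation t) - f(global optimum)|
    change_points   -- evaluation indices at which the environment changed
    """
    changes = set(change_points)
    best = np.inf
    running_sum = 0.0
    for t, err in enumerate(eval_errors):
        if t in changes:
            best = np.inf          # the old best is stale after a change
        best = min(best, err)
        running_sum += best
    return running_sum / len(eval_errors)

print(offline_error([5.0, 3.0, 4.0, 2.0], change_points=[2]))  # -> 3.5
```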

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

no code implementations • 25 May 2022 • Chong Ma, Lin Zhao, Yuzhong Chen, Lu Zhang, Zhenxiang Xiao, Haixing Dai, David Liu, Zihao Wu, Zhengliang Liu, Sheng Wang, Jiaxing Gao, Changhe Li, Xi Jiang, Tuo Zhang, Qian Wang, Dinggang Shen, Dajiang Zhu, Tianming Liu

To address this problem, we propose to infuse human experts' intelligence and domain knowledge into the training of deep neural networks.
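The excerpt above does not spell out the mechanism, but one plausible way to infuse expert eye-gaze into ViT training is to regularize the model's attention toward a human gaze heatmap. The PyTorch sketch below is a hedged illustration under that assumption, not necessarily the paper's actual method; `cls_attn` and `gaze_map` are hypothetical inputs.

```python
import torch
import torch.nn.functional as F

def gaze_attention_loss(cls_attn, gaze_map, eps=1e-8):
    """Auxiliary loss pulling [CLS]->patch attention toward a gaze heatmap.

    cls_attn -- (B, N) attention weights from the [CLS] token to N patches
    gaze_map -- (B, H, W) expert eye-gaze heatmap for the input image
    """
    B, N = cls_attn.shape
    side = int(N ** 0.5)                 # assumes a square patch grid
    # Downsample the gaze heatmap to the patch grid and normalize both
    # attention and gaze into probability distributions over patches
    gaze = F.adaptive_avg_pool2d(gaze_map.unsqueeze(1), (side, side)).flatten(1)
    gaze = gaze / (gaze.sum(dim=1, keepdim=True) + eps)
    attn = cls_attn / (cls_attn.sum(dim=1, keepdim=True) + eps)
    # KL divergence nudges model attention toward where experts actually look
    return F.kl_div((attn + eps).log(), gaze, reduction="batchmean")
```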

Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution

no code implementations • 21 May 2022 • Li Yang, Zhibin He, Changhe Li, Junwei Han, Dajiang Zhu, Tianming Liu, Tuo Zhang

The convolution on the mesh considers the spatial organization of functional gradients and folding patterns on the cortical sheet, and the newly designed channel attention block enhances the interpretability of the contribution of different functional gradients to cortical folding prediction.

Tasks: Anatomy
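The "channel attention block" named in the abstract above is not specified in this listing; a common choice for per-channel reweighting is the squeeze-and-excitation pattern. The sketch below applies that pattern to per-vertex mesh features (shape: batch × channels × vertices) as an assumption-laden stand-in, not the paper's exact block.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style reweighting of feature channels,
    e.g. one channel per functional gradient on the cortical mesh."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, channels, vertices)
        w = self.fc(x.mean(dim=2))     # squeeze across vertices -> (batch, channels)
        return x * w.unsqueeze(2)      # channel weights expose each gradient's contribution
```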

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning

no code implementations • 20 May 2022 • Yuzhong Chen, Zhenxiang Xiao, Lin Zhao, Lu Zhang, Haixing Dai, David Weizhong Liu, Zihao Wu, Changhe Li, Tuo Zhang, Changying Li, Dajiang Zhu, Tianming Liu, Xi Jiang

However, for data-intensive models such as the vision transformer (ViT), current fine-tuning-based FSL approaches are inefficient at knowledge generalization and thus degrade downstream task performance.

Tasks: Active Learning, Few-Shot Learning
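This listing does not detail how the mask guides the ViT; one simple reading is to keep only the patch tokens most covered by a salience mask before running the transformer. The PyTorch sketch below implements that reading; the fixed top-k selection and `keep_ratio` parameter are simplifying assumptions, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

def select_masked_patches(patch_tokens, mask, keep_ratio=0.5):
    """Keep the patch tokens most covered by a binary salience mask.

    patch_tokens -- (B, N, D) ViT patch embeddings (N = side * side patches)
    mask         -- (B, H, W) binary mask marking task-relevant pixels
    """
    B, N, D = patch_tokens.shape
    side = int(N ** 0.5)
    # Fraction of salient pixels falling inside each patch
    frac = F.adaptive_avg_pool2d(mask.float().unsqueeze(1), (side, side)).flatten(1)
    k = max(1, int(N * keep_ratio))
    idx = frac.topk(k, dim=1).indices                      # most relevant patches
    return torch.gather(patch_tokens, 1, idx.unsqueeze(-1).expand(B, k, D))
```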

Competition on Dynamic Optimization Problems Generated by Generalized Moving Peaks Benchmark (GMPB)

1 code implementation • 11 Jun 2021 • Danial Yazdani, Michalis Mavrovouniotis, Changhe Li, Wenjian Luo, Mohammad Nabi Omidvar, Amir H. Gandomi, Trung Thanh Nguyen, Juergen Branke, XiaoDong Li, Shengxiang Yang, Xin Yao

This document introduces the Generalized Moving Peaks Benchmark (GMPB), a tool for generating continuous dynamic optimization problem instances; it is used for the CEC 2024 Competition on Dynamic Optimization.
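GMPB layers rotations and irregularity transformations on top of the classic moving-peaks landscape; the Python sketch below shows only that baseline cone-peaks form plus a toy environment change, as a hedged illustration of what a "dynamic problem instance" looks like, not a reimplementation of GMPB.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_peaks = 5, 10
centers = rng.uniform(-50.0, 50.0, size=(n_peaks, dim))  # peak locations
heights = rng.uniform(30.0, 70.0, size=n_peaks)          # peak heights
widths  = rng.uniform(1.0, 12.0, size=n_peaks)           # peak widths

def fitness(x):
    """Baseline moving-peaks landscape: value of the tallest cone covering x."""
    dist = np.linalg.norm(centers - x, axis=1)
    return np.max(heights - widths * dist)

def change_environment(shift_severity=1.0):
    """Shift every peak by a random unit vector scaled by the shift severity."""
    global centers
    step = rng.normal(size=centers.shape)
    centers += shift_severity * step / np.linalg.norm(step, axis=1, keepdims=True)

print(fitness(np.zeros(dim)))  # evaluate before and after a change
change_environment()
print(fitness(np.zeros(dim)))
```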
