1 code implementation • 3 Oct 2024 • Guodong Du, Junlin Lee, Jing Li, Runhua Jiang, Yifei Guo, Shuyang Yu, Hanting Liu, Sim Kuan Goh, Ho-Kin Tang, Daojing He, Min Zhang
Recently developed model merging techniques enable the direct integration of multiple models, each fine-tuned for distinct tasks, into a single model.
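As a minimal illustration of the general idea (not this paper's specific method), the sketch below merges several fine-tuned checkpoints that share one architecture by weighted parameter averaging; the file names and weighting scheme are hypothetical.

```python
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Average several fine-tuned checkpoints (same architecture) into one.

    A naive uniform/weighted parameter average; practical merging methods
    typically add task-vector arithmetic or interference resolution on top.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical usage: three task-specific checkpoints of the same base model
# merged = merge_state_dicts([torch.load(p) for p in ["task_a.pt", "task_b.pt", "task_c.pt"]])
```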
no code implementations • 10 Aug 2024 • Yisheng Yang, Sim Kuan Goh, Qing Cai, Shen Yuong Wong, Ho-Kin Tang
Specifically, we enhance the Yin-Yang pair in YYPO with a proposed Yi-point, at which the solution is updated via Cauchy flight, thereby implementing both the harmony and the reversal concepts of Yi Jing.
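A minimal sketch of a Cauchy-flight style update on a box-constrained search space is shown below; the step scale and clipping rule are illustrative assumptions, not the paper's exact Yi-point update.

```python
import numpy as np

def cauchy_flight_update(x, lower, upper, scale=0.1, rng=None):
    """Perturb a candidate solution with heavy-tailed Cauchy steps.

    Heavy tails occasionally produce large jumps (exploration) while most
    steps stay small (exploitation); bounds are enforced by clipping.
    """
    rng = rng or np.random.default_rng()
    step = scale * (upper - lower) * rng.standard_cauchy(size=x.shape)
    return np.clip(x + step, lower, upper)

# Illustrative use on a 10-dimensional problem bounded in [-100, 100]
x = np.zeros(10)
x_new = cauchy_flight_update(x, lower=-100.0, upper=100.0)
```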
no code implementations • 10 Aug 2024 • Guodong Du, Runhua Jiang, Senqiao Yang, Haoyang Li, Wei Chen, Keren Li, Sim Kuan Goh, Ho-Kin Tang
The empirical results show that the proposed framework benefits the network, reducing over-fitting and achieving an order-of-magnitude lower time complexity than backpropagation (BP).
no code implementations • 10 Aug 2024 • Yisheng Yang, Guodong Du, Chean Khim Toa, Ho-Kin Tang, Sim Kuan Goh
This paper presents Success-History-based Self-adaptive Differential Evolution with a Joint Point Interaction Dimension Search (SHSADE-PIDS), an evolutionary NAS framework that encodes discrete deep neural network architectures into continuous spaces and searches those spaces for efficient point cloud neural architectures.
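The sketch below illustrates the general continuous-to-discrete encoding idea, not SHSADE-PIDS itself: a real-valued vector searched by the evolutionary algorithm is decoded into a discrete architecture specification. The choice lists and decoding rule are hypothetical.

```python
import numpy as np

# Hypothetical discrete choices for a small point-cloud network
DEPTH_CHOICES = [2, 3, 4, 5]
WIDTH_CHOICES = [32, 64, 128, 256]
NEIGHBOR_CHOICES = [8, 16, 32]

def decode(vector):
    """Map a continuous vector in [0, 1)^3 to a discrete architecture.

    Each coordinate indexes one choice list; the evolutionary search operates
    on the continuous vector, while evaluation uses the decoded architecture.
    """
    def pick(value, choices):
        return choices[min(int(value * len(choices)), len(choices) - 1)]
    return {
        "depth": pick(vector[0], DEPTH_CHOICES),
        "width": pick(vector[1], WIDTH_CHOICES),
        "k_neighbors": pick(vector[2], NEIGHBOR_CHOICES),
    }

print(decode(np.array([0.1, 0.7, 0.95])))  # {'depth': 2, 'width': 128, 'k_neighbors': 32}
```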
1 code implementation • 18 Jun 2024 • Guodong Du, Jing Li, Hanting Liu, Runhua Jiang, Shuyang Yu, Yifei Guo, Sim Kuan Goh, Ho-Kin Tang
Fine-tuning pre-trained language models, particularly large language models, demands extensive computing resources and can result in varying performance outcomes across different domains and datasets.
1 code implementation • 4 Jun 2024 • Runhua Jiang, Guodong Du, Shuyang Yu, Yifei Guo, Sim Kuan Goh, Ho-Kin Tang
This paper attempts to tackle the challenges by introducing Cosine Annealing Differential Evolution (CADE), designed to modulate the mutation factor (F) and crossover rate (CR) of differential evolution (DE) for the SNN model, i.e., Spiking Element Wise (SEW) ResNet.
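A minimal sketch of cosine-annealed DE control parameters is given below, assuming F and CR decay from an initial to a final value over the run; the specific ranges are illustrative assumptions, not the values used in CADE.

```python
import math

def cosine_annealed(t, t_max, start, end):
    """Cosine-anneal a value from `start` to `end` over t_max generations."""
    return end + 0.5 * (start - end) * (1.0 + math.cos(math.pi * t / t_max))

def de_params(t, t_max, f_range=(0.9, 0.4), cr_range=(0.9, 0.5)):
    """Return generation-dependent mutation factor F and crossover rate CR."""
    return (cosine_annealed(t, t_max, *f_range),
            cosine_annealed(t, t_max, *cr_range))

# Early generations get large F and CR (exploration); late ones shrink them.
for t in (0, 50, 100):
    print(t, de_params(t, t_max=100))
```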
1 code implementation • 17 Apr 2021 • Ho-Kin Tang, Sim Kuan Goh
As a conceptual prototype, we examine YI on the IEEE CEC 2017 benchmark and compare its performance with a Lévy flight-based optimizer CV1.0, the state-of-the-art dynamical Yin-Yang pair optimization in the YYPO family, and a few classical optimizers.