1 code implementation • 3 Oct 2024 • Guodong Du, Junlin Lee, Jing Li, Runhua Jiang, Yifei Guo, Shuyang Yu, Hanting Liu, Sim Kuan Goh, Ho-Kin Tang, Daojing He, Min Zhang
Recently developed model merging techniques enable the direct integration of multiple models, each fine-tuned for distinct tasks, into a single model.
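To make the idea concrete, below is a minimal, hypothetical sketch of one baseline form of model merging: weighted parameter averaging over checkpoints fine-tuned from the same base model. The function name and uniform weighting are illustrative assumptions, not the paper's actual method.

```python
import torch

def merge_state_dicts(state_dicts, weights=None):
    # Hypothetical baseline: weighted average of parameters across several
    # checkpoints that share one architecture (an assumption; the paper's
    # merging technique may differ).
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged
```

A checkpoint produced this way can be loaded back with `model.load_state_dict(merged)`; more sophisticated merging schemes instead weight or sparsify per-parameter task vectors rather than averaging uniformly.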
no code implementations • 10 Aug 2024 • Guodong Du, Runhua Jiang, Senqiao Yang, Haoyang Li, Wei Chen, Keren Li, Sim Kuan Goh, Ho-Kin Tang
Empirical results show that the proposed framework benefits the network, reducing over-fitting and achieving an order-of-magnitude lower time complexity than backpropagation (BP).
1 code implementation • 18 Jun 2024 • Guodong Du, Jing Li, Hanting Liu, Runhua Jiang, Shuyang Yu, Yifei Guo, Sim Kuan Goh, Ho-Kin Tang
Fine-tuning pre-trained language models, particularly large language models, demands extensive computing resources and can yield inconsistent performance across domains and datasets.
1 code implementation • 4 Jun 2024 • Runhua Jiang, Guodong Du, Shuyang Yu, Yifei Guo, Sim Kuan Goh, Ho-Kin Tang
This paper tackles these challenges by introducing Cosine Annealing Differential Evolution (CADE), which modulates the mutation factor (F) and crossover rate (CR) of differential evolution (DE) for the SNN model, i.e., Spiking Element Wise (SEW) ResNet.
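As a rough illustration of the scheduling idea, the sketch below cosine-anneals F and CR across generations; the bounds (0.9 down to 0.1) and generation budget are assumptions for illustration, not values from the paper.

```python
import math

def cosine_anneal(v_max, v_min, gen, max_gen):
    # Standard cosine-annealing schedule: starts at v_max, decays to v_min.
    return v_min + 0.5 * (v_max - v_min) * (1 + math.cos(math.pi * gen / max_gen))

MAX_GEN = 100  # assumed evolution budget
for gen in range(MAX_GEN):
    F = cosine_anneal(0.9, 0.1, gen, MAX_GEN)   # mutation factor
    CR = cosine_anneal(0.9, 0.1, gen, MAX_GEN)  # crossover rate
    # ... run one DE generation over the SEW-ResNet population with these F, CR
```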
1 code implementation • 9 Mar 2024 • Runhua Jiang, Yahong Han
To address this issue, we propose a model reprogramming framework that translates out-of-sample degradations using quantum mechanics and wave functions.