no code implementations • 27 Sep 2024 • Yu Zhou, Xingyu Wu, Jibin Wu, Liang Feng, Kay Chen Tan
Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability.
no code implementations • 6 Sep 2024 • Yuxiao Huang, Xuebin Lv, Shenghao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
To enhance the performance of evolutionary multi-task optimization (EMTO), various knowledge transfer models have been developed for specific optimization tasks.
no code implementations • 27 Aug 2024 • Xinyi Chen, Jibin Wu, Chenxiang Ma, Yinsong Yan, Yujie Wu, Kay Chen Tan
Our experimental results on a wide range of pattern recognition tasks demonstrate the superiority of PMSN.
no code implementations • 27 Aug 2024 • Yujie Wu, Siyuan Xu, Jibin Wu, Lei Deng, Mingkun Xu, Qinghao Wen, Guoqi Li
The Forward-Forward (FF) algorithm was recently proposed as a local learning method to address the limitations of backpropagation (BP), offering biological plausibility along with memory-efficient and highly parallelized computational benefits.
no code implementations • 19 Jun 2024 • Shuai Wang, Dehao Zhang, Kexin Shi, Yuchen Wang, Wenjie Wei, Jibin Wu, Malu Zhang
Here, we take advantage of spiking neural networks' energy efficiency and propose an end-to-end lightweight KWS model.
no code implementations • 13 Jun 2024 • Yuxiao Huang, Shenghao Wu, Wenjie Zhang, Jibin Wu, Liang Feng, Kay Chen Tan
Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications, presenting a complex challenge of balancing multiple conflicting objectives.
no code implementations • 18 May 2024 • Xingyu Wu, Yan Zhong, Jibin Wu, Yuxiao Huang, Sheng-hao Wu, Kay Chen Tan
In algorithm selection research, the discussion of algorithm features has been significantly overshadowed by the emphasis on problem features.
no code implementations • 9 Apr 2024 • Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, Kay Chen Tan
Large language models (LLMs) have demonstrated exceptional performance not only in natural language processing tasks but also in a great variety of non-linguistic domains.
no code implementations • 9 Apr 2024 • Yu Zhou, Xingyu Wu, Beichen Huang, Jibin Wu, Liang Feng, Kay Chen Tan
The ability to understand causality significantly impacts the competence of large language models (LLMs) in output explanation and counterfactual reasoning, as causality reveals the underlying data distribution.
no code implementations • 1 Mar 2024 • Wenjie Wei, Malu Zhang, Jilin Zhang, Ammar Belatreche, Jibin Wu, Zijing Xu, Xuerui Qiu, Hong Chen, Yang Yang, Haizhou Li
Specifically, we introduce two novel event-driven learning methods: the spike-timing-dependent event-driven (STD-ED) and membrane-potential-dependent event-driven (MPD-ED) algorithms.
1 code implementation • 27 Feb 2024 • Chenxiang Ma, Jibin Wu, Chenyang Si, Kay Chen Tan
AugLocal constructs each hidden layer's auxiliary network by uniformly selecting a small subset of layers from its subsequent network layers to enhance their synergy.
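As a rough illustration of the layer-selection idea described above, here is a minimal PyTorch-style sketch; the helper name, the pooling/classifier head, and the assumption of a convolutional backbone are illustrative choices, not taken from the released AugLocal code.

```python
import copy
import torch
import torch.nn as nn

def build_auxiliary_net(subsequent_layers, depth, num_classes):
    # Uniformly pick `depth` layers from those that follow the hidden layer,
    # copy their architecture (not their weights), and stack them into a small
    # auxiliary classifier that supplies the layer's local training signal.
    idx = torch.linspace(0, len(subsequent_layers) - 1, steps=depth).round().long().tolist()
    picked = [copy.deepcopy(subsequent_layers[i]) for i in idx]
    return nn.Sequential(*picked, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.LazyLinear(num_classes))
```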
no code implementations • 25 Feb 2024 • Yujia Yin, Xinyi Chen, Chenxiang Ma, Jibin Wu, Kay Chen Tan
Brain-inspired spiking neural networks (SNNs) have garnered considerable research interest due to their superior performance and energy efficiency in processing temporal signals.
1 code implementation • 18 Jan 2024 • Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, Kay Chen Tan
As the first comprehensive review focused on the EA research in the era of LLMs, this paper provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs.
1 code implementation • 22 Nov 2023 • Xingyu Wu, Yan Zhong, Jibin Wu, Bingbing Jiang, Kay Chen Tan
The high-dimensional algorithm representation extracted by the LLM, after passing through a feature selection module, is combined with the problem representation and fed to the similarity calculation module.
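A schematic sketch of the pipeline shape described above (algorithm representation → feature selection → combination with the problem representation → similarity scoring); the variance-based selection and plain cosine similarity are stand-ins for the paper's actual modules, and embedding dimensions are assumed to match.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_algorithms(algo_embs, problem_emb, keep=64):
    # 1) Feature selection stand-in: keep the highest-variance dimensions of
    #    the high-dimensional LLM-derived algorithm representations.
    idx = np.argsort(algo_embs.var(axis=0))[-keep:]
    reduced = algo_embs[:, idx]
    # 2) Similarity stand-in: score each algorithm against the problem
    #    representation (assumed to share the retained dimensionality) and
    #    return algorithms ranked from most to least similar.
    return sorted(range(len(reduced)), key=lambda i: -cosine(reduced[i], problem_emb))
```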
no code implementations • 23 Oct 2023 • Pengfei Sun, Jibin Wu, Malu Zhang, Paul Devos, Dick Botteldooren
Recurrent Neural Networks (RNNs) are renowned for their adeptness in modeling temporal dependencies, a trait that has driven their widespread adoption for sequential data processing.
no code implementations • 23 Oct 2023 • Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li
With TTFS coding, we can achieve computational savings of up to orders of magnitude over ANNs and other rate-based SNNs.
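For readers unfamiliar with TTFS coding, the snippet below sketches the basic idea that each neuron fires at most once and stronger inputs fire earlier; the window length `t_max` and the linear mapping are illustrative assumptions, not the paper's encoder.

```python
import numpy as np

def ttfs_encode(x, t_max=100):
    # Time-to-first-spike (TTFS) sketch: each input neuron emits at most one
    # spike, and larger input values fire earlier within a window of `t_max`
    # time steps. Neurons with zero input never fire (marked np.inf).
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return np.where(x > 0, np.round((1.0 - x) * (t_max - 1)), np.inf)

print(ttfs_encode([0.9, 0.5, 0.0, 1.0]))  # stronger inputs -> earlier spike times
```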
1 code implementation • 11 Oct 2023 • Xiang Hao, Jibin Wu, Jianwei Yu, Chenglin Xu, Kay Chen Tan
However, the effectiveness of these models is hindered in real-world scenarios due to the unreliability or even absence of pre-registered cues.
no code implementations • 18 Sep 2023 • Zeyang Song, Jibin Wu, Malu Zhang, Mike Zheng Shou, Haizhou Li
Brain-inspired spiking neural networks (SNNs) have demonstrated great potential for temporal signal processing.
no code implementations • 29 Aug 2023 • Xinyi Chen, Jibin Wu, Huajin Tang, Qinyuan Ren, Kay Chen Tan
The human brain exhibits remarkable abilities in integrating temporally distant sensory inputs for decision-making.
1 code implementation • 25 Aug 2023 • Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
1 code implementation • 26 May 2023 • Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan
As an initial exploration in this direction, we propose a hybrid neural coding and learning framework, which encompasses a neural coding zoo with diverse neural coding schemes discovered in neuroscience.
1 code implementation • 10 Oct 2022 • Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li
The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN.
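A minimal sketch of the teacher-student feature-mimicking objective described above, assuming the student exposes spike-rate features aligned with the pre-trained ANN teacher's intermediate activations; the function name and the MSE choice are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def feature_mimic_loss(student_rates, teacher_features):
    # Layer-wise teacher-student objective in the spirit of the LTL rule:
    # the student's spike-rate features are regressed onto the frozen ANN
    # teacher's intermediate activations, layer by layer.
    return sum(F.mse_loss(s, t.detach()) for s, t in zip(student_rates, teacher_features))
```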
1 code implementation • 30 Mar 2021 • Chenglin Xu, Wei Rao, Jibin Wu, Haizhou Li
Inspired by studies on target speaker extraction, e.g., SpEx, we propose a unified speaker verification framework for both single- and multi-talker speech that is able to pay selective auditory attention to the target speaker.
no code implementations • Interspeech 2020 • Emre Yilmaz, Özgür Bora Gevrek, Jibin Wu, Yuxiang Chen, Xuanbo Meng, Haizhou Li
To explore the effectiveness and computational complexity of SNNs for KWS and wake-word detection, we compare the performance and computational costs of spiking fully-connected and convolutional neural networks with their ANN counterparts under clean and noisy testing conditions.
no code implementations • 7 Jul 2020 • Zihan Pan, Malu Zhang, Jibin Wu, Haizhou Li
Inspired by the mammalian auditory localization pathway, in this paper we propose a purely spiking neural network (SNN) based computational model for precise sound localization in noisy real-world environments, and implement the algorithm in a real-time robotic system with a microphone array.
no code implementations • 2 Jul 2020 • Jibin Wu, Chenglin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan
In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.
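The snippet below illustrates only the generic rate-based premise behind ANN-to-SNN conversion (an integrate-and-fire neuron whose firing rate approximates a clipped ReLU activation), not the paper's progressive tandem learning procedure; the threshold and step count are arbitrary assumptions.

```python
def if_neuron_rate(x, threshold=1.0, steps=100):
    # Integrate-and-fire neuron driven by a constant input current: its firing
    # rate over `steps` time steps approximates max(0, x) / threshold, which is
    # the basic correspondence exploited by rate-based ANN-to-SNN conversion.
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x                   # integrate the input current
        if v >= threshold:       # fire and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

for x in (-0.2, 0.1, 0.5, 0.9):
    print(x, if_neuron_rate(x))  # firing rate tracks the clipped ReLU of x
```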
no code implementations • 3 Jun 2020 • Srivatsa P, Kyle Timothy Ng Chu, Burin Amornpaisannon, Yaswanth Tavva, Venkata Pavan Kumar Miriyala, Jibin Wu, Malu Zhang, Haizhou Li, Trevor E. Carlson
Rate-encoded SNNs can be seen as inefficient because rate encoding involves the transmission of a large number of spikes.
no code implementations • 26 Mar 2020 • Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E. Carlson, Haizhou Li
In the STDBP algorithm, the timing of individual spikes is used to convey information (temporal coding), and learning (back-propagation) is performed based on spike timing in an event-driven manner.
1 code implementation • 19 Nov 2019 • Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan
Brain-inspired spiking neural networks (SNNs) closely mimic biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.
Automatic Speech Recognition (ASR) +1
no code implementations • 12 Sep 2019 • Zihan Pan, Jibin Wu, Yansong Chua, Malu Zhang, Haizhou Li
We show that, with population neural coding, the encoded patterns are linearly separable using a Support Vector Machine (SVM).
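A small self-contained sketch of the separability check described above, assuming Gaussian population tuning curves and an off-the-shelf linear SVM (scikit-learn); the encoder and synthetic data here are illustrative rather than the paper's auditory front-end.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

def population_encode(x, n_neurons=12):
    # Gaussian population coding sketch: each scalar feature is represented by
    # the graded responses of `n_neurons` neurons with evenly spaced centres.
    centres = np.linspace(x.min(), x.max(), n_neurons)
    width = (centres[1] - centres[0]) or 1.0
    resp = np.exp(-((x[..., None] - centres) ** 2) / (2 * width ** 2))
    return resp.reshape(len(x), -1)

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
acc = LinearSVC(dual=False).fit(population_encode(X), y).score(population_encode(X), y)
print(f"linear SVM accuracy on population-coded features: {acc:.2f}")
```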
no code implementations • 3 Sep 2019 • Zihan Pan, Yansong Chua, Jibin Wu, Malu Zhang, Haizhou Li, Eliathamby Ambikairajah
The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, including the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike-based neural encoding by the auditory nerve.
1 code implementation • 2 Jul 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.
no code implementations • 15 Feb 2019 • Jibin Wu, Yansong Chua, Malu Zhang, Qu Yang, Guoqi Li, Haizhou Li
Deep spiking neural networks (SNNs) support asynchronous event-driven computation and massive parallelism, and demonstrate great potential to improve the energy efficiency of their synchronous analog counterparts.