no code implementations • 11 Apr 2024 • Jiing-Ping Wang, Ming-Guang Lin, An-Yeu Wu
With the rise of Transformer models in the NLP and CV domains, Multi-Head Attention has proven to be a game-changer.
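For context, below is a minimal NumPy sketch of the Multi-Head Attention mechanism this entry refers to. The shapes, weight names, and per-head split are generic assumptions, not this paper's design:

```python
# Minimal Multi-Head Attention sketch in NumPy (generic formulation,
# not tied to any specific paper in this list).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """x: (seq_len, d_model); w_q/w_k/w_v/w_o: (d_model, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project, then split into heads: (num_heads, seq_len, d_head)
    def split(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)

    # Scaled dot-product attention, computed per head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    out = softmax(scores) @ v                      # (heads, seq, d_head)

    # Concatenate heads and apply the output projection
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o
```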
no code implementations • 26 Jan 2024 • Yu-Shan Tai, An-Yeu Wu
However, because they ignore the asymmetry of activation distributions and rely on hand-crafted settings, these methods often struggle to maintain accuracy under low-bit quantization.
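To illustrate the asymmetry issue, here is a hedged sketch contrasting asymmetric (zero-point) and symmetric uniform quantization; the function names and 4-bit setting are illustrative assumptions, not this paper's method:

```python
# Asymmetric vs. symmetric uniform quantization (illustrative only).
import numpy as np

def quantize_asymmetric(x, num_bits=4):
    # Zero-point quantization: the grid covers [min, max] exactly.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return scale * (q - zero_point)          # dequantized values

def quantize_symmetric(x, num_bits=4):
    # Symmetric grid centered at zero: wastes levels on skewed data.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return scale * np.clip(np.round(x / scale), -qmax - 1, qmax)

# Skewed, shifted activations (e.g., post-GeLU-like) favor the
# asymmetric grid, which typically yields lower reconstruction error:
x = np.random.randn(10000) * 0.5 + 1.5
err_asym = np.mean((x - quantize_asymmetric(x)) ** 2)
err_sym = np.mean((x - quantize_symmetric(x)) ** 2)
print(err_asym, err_sym)
```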
no code implementations • 22 May 2023 • Yu-Shan Tai, Ming-Guang Lin, An-Yeu Wu
Because the values after Softmax and GeLU are not normally distributed, post-training quantization of ViTs suffers severe accuracy degradation.
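One common remedy in the ViT post-training-quantization literature is log2 quantization of the heavily skewed post-Softmax values; the minimal sketch below shows the idea, though the bit-width and clipping here are assumptions and not necessarily this paper's settings:

```python
# Log2 quantization of post-Softmax probabilities (a common remedy
# for skewed distributions; illustrative, not this paper's method).
import numpy as np

def log2_quantize(p, num_bits=4):
    """p: post-Softmax probabilities in (0, 1]."""
    qmax = 2 ** num_bits - 1
    # Quantize the exponent rather than the value, so tiny
    # probabilities keep relative precision.
    q = np.clip(np.round(-np.log2(np.maximum(p, 2.0 ** -qmax))), 0, qmax)
    return 2.0 ** -q

probs = np.array([0.9, 0.05, 0.03, 0.01, 0.001])
print(log2_quantize(probs))  # each value snaps to the nearest power of two
```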
1 code implementation • 25 Jul 2022 • Cheng-Yen Hsieh, Yu-Chuan Chuang, An-Yeu Wu
In simulations on CIFAR-10 and CIFAR-100, our method achieves a 16x compression ratio with a negligible accuracy drop compared with vanilla split learning (SL).
1 code implementation • 3 Nov 2021 • Win-Ken Beh, Yi-Hsuan Wu, An-Yeu Wu
In addition, we present a reproducible baseline system as a preliminary benchmark (the code of the baseline system on the MAUS dataset is available on GitHub: https://github.com/rickwu11/MAUS_dataset_baseline_system), whose testing accuracies are 71.6%, 66.7%, and 59.9% on ECG, fingertip PPG, and wristband PPG, respectively.
no code implementations • 16 Sep 2019 • Yi-Ta Chen, Yu-Chuan Chuang, An-Yeu Wu
In this paper, we propose an AdaBoost-assisted extreme learning machine for efficient online sequential classification (AOS-ELM).
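AOS-ELM builds on the online sequential extreme learning machine (OS-ELM). The hedged sketch below shows the standard OS-ELM recursive least-squares update that such a method extends; the AdaBoost weighting itself is omitted, and all hyperparameters are illustrative assumptions:

```python
# Standard OS-ELM chunk-wise update (the base learner AOS-ELM builds
# on); the AdaBoost-assisted part is intentionally omitted here.
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # fixed random input weights
        self.b = rng.standard_normal(n_hidden)           # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))          # learned output weights
        self.P = None                                    # inverse covariance

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)              # random-feature map

    def fit_initial(self, X, T):
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def fit_sequential(self, X, T):
        # Recursive least-squares update on a new chunk; no retraining
        # on previously seen data is required.
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```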