no code implementations • 4 Jan 2024 • Tzu-Han Lin, How-Shing Wang, Hao-Yung Weng, Kuang-Chen Peng, Zih-Ching Chen, Hung-Yi Lee
Our study conducts extensive experiments to compare different PEFT methods and their layer-wise placement by adapting Differentiable Architecture Search (DARTS).
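Below is a minimal sketch, not the paper's implementation, of how a DARTS-style relaxation can choose a PEFT module per layer: each layer mixes candidate modules with learnable architecture weights, and after the search the highest-weighted candidate per layer would be kept. The `Adapter` and `MixedPEFTLayer` names and the candidate set are illustrative assumptions.

```python
# DARTS-style layer-wise PEFT placement (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, dim, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))

class MixedPEFTLayer(nn.Module):
    """Softmax-relaxed choice among PEFT candidates at one transformer layer."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleList([Adapter(dim), nn.Identity()])
        # Architecture weights (alpha) learned jointly with the module weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * cand(x) for w, cand in zip(weights, self.candidates))

# After the bilevel search, the candidate with the largest alpha at each layer
# is retained and the others are discarded.
```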
1 code implementation • 1 Jun 2023 • Zih-Ching Chen, Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Shuo-Yiin Chang, Rohit Prabhavalkar, Hung-Yi Lee, Tara N. Sainath
In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) for fine-tuning target tasks.
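As an illustration only, and not the paper's actual scoring criterion, the sketch below ranks frozen PSMs by a simple class-separability score computed on their embeddings of a small labelled target-task sample; the `separability_score` helper is a hypothetical stand-in for a score-based assessment.

```python
# Score-based transferability estimation (illustrative stand-in score).
import numpy as np

def separability_score(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Higher score -> target classes are better separated in the PSM's feature space."""
    overall_mean = embeddings.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = embeddings[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return between / (within + 1e-8)

# Usage: embed a small labelled sample with each candidate PSM (kept frozen),
# then rank the models by this score instead of fine-tuning every candidate.
```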
no code implementations • 1 Dec 2022 • Zih-Ching Chen, Yu-Shun Sung, Hung-Yi Lee
However, such efficient tuning techniques only provide adaptation at the transformer layers and fail to adapt the feature extractor.
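A minimal sketch of the idea, assuming a wav2vec 2.0-style pipeline in which a convolutional feature extractor feeds the transformer: a small trainable bottleneck is attached to the frame-level CNN features so that low-level representations can also adapt. `FeatureExtractorAdapter` is an illustrative name, not the paper's module.

```python
# Lightweight adapter on the (otherwise frozen) convolutional feature extractor.
import torch.nn as nn

class FeatureExtractorAdapter(nn.Module):
    """1-D convolutional bottleneck applied to frame-level CNN features (batch, time, dim)."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Conv1d(dim, bottleneck, kernel_size=1)
        self.up = nn.Conv1d(bottleneck, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, feats):                 # feats: (batch, time, dim)
        x = feats.transpose(1, 2)             # -> (batch, dim, time)
        x = self.up(self.act(self.down(x)))
        return feats + x.transpose(1, 2)      # residual connection
```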
no code implementations • 10 Oct 2022 • Zih-Ching Chen, Chin-Lun Fu, Chih-Ying Liu, Shang-Wen Li, Hung-Yi Lee
In downstream tasks, the parameters of SSL models are frozen, and only the adapters are trained.
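The sketch below illustrates this frozen-backbone setup in generic PyTorch, not tied to any particular SSL toolkit: the pre-trained encoder's parameters are frozen, and only adapter and task-head parameters are passed to the optimizer.

```python
# Frozen SSL backbone; only adapter/head parameters are updated (illustrative).
import torch
import torch.nn as nn

class AdapterHead(nn.Module):
    def __init__(self, dim, num_classes, bottleneck=32):
        super().__init__()
        self.adapter = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                                     nn.Linear(bottleneck, dim))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, hidden):                    # hidden: (batch, time, dim)
        hidden = hidden + self.adapter(hidden)    # residual adapter
        return self.classifier(hidden.mean(dim=1))

# Stand-in for a pre-trained SSL encoder.
ssl_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2)
for p in ssl_encoder.parameters():
    p.requires_grad = False                       # SSL parameters stay frozen

head = AdapterHead(dim=768, num_classes=10)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # train adapters/head only
```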
no code implementations • 16 Aug 2022 • Zih-Ching Chen, Lin-Hsi Tsao, Chin-Lun Fu, Shang-Fu Chen, Yu-Chiang Frank Wang
Face anti-spoofing (FAS) aims to distinguish face spoof attacks from authentic faces, and is typically approached by learning suitable models for the associated classification task.
1 code implementation • Findings (NAACL) 2022 • Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-Yi Lee
Transformer-based pre-trained models with millions of parameters require large storage.