Search Results for author: Yu-Chen Lin

Found 12 papers, 4 papers with code

Domain-Generalized Face Anti-Spoofing with Unknown Attacks

2 code implementations · 18 Oct 2023 · Zong-Wei Hong, Yu-Chen Lin, Hsuan-Tung Liu, Yi-Ren Yeh, Chu-Song Chen

Although face anti-spoofing (FAS) methods have achieved remarkable performance on specific domains or attack types, few studies have focused on the simultaneous presence of domain changes and unknown attacks, which is closer to real application scenarios.

Domain Generalization Face Anti-Spoofing

SERIL: Noise Adaptive Speech Enhancement using Regularization-based Incremental Learning

1 code implementation · 24 May 2020 · Chi-Chang Lee, Yu-Chen Lin, Hsuan-Tien Lin, Hsin-Min Wang, Yu Tsao

The results verify that the SERIL model can effectively adjust itself to new noise environments while overcoming the catastrophic forgetting issue.

Incremental Learning Speech Enhancement
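The abstract above describes adapting a speech-enhancement model to new noise environments while avoiding catastrophic forgetting via regularization. A minimal sketch of the general idea on a single scalar parameter, using a generic EWC-style quadratic penalty; SERIL's actual regularizer and importance weighting may differ:

```python
# Regularization-based incremental learning, toy version: when adapting to
# a new environment, add a quadratic penalty that discourages drift from
# the previously learned parameter, limiting catastrophic forgetting.
# (Generic EWC-style sketch; not SERIL's exact formulation.)

def adapt(theta_old, target, lam=1.0, lr=0.1, steps=200):
    """Minimize (theta - target)^2 + lam * (theta - theta_old)^2 by gradient descent."""
    theta = theta_old
    for _ in range(steps):
        grad = 2.0 * (theta - target) + 2.0 * lam * (theta - theta_old)
        theta -= lr * grad
    return theta

# With lam = 1, the optimum is (target + lam * theta_old) / (1 + lam),
# i.e. a compromise between the old parameter and the new-task optimum.
theta_new = adapt(theta_old=1.0, target=3.0, lam=1.0)
```

The penalty strength `lam` trades plasticity (fitting the new noise condition) against stability (retaining performance on old conditions).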

Linear Classifier: An Often-Forgotten Baseline for Text Classification

1 code implementation · 12 Jun 2023 · Yu-Chen Lin, Si-An Chen, Jie-Jyun Liu, Chih-Jen Lin

Large-scale pre-trained language models such as BERT are popular solutions for text classification.

Text Classification
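To make the "linear baseline" idea concrete, here is a toy linear text classifier: bag-of-words features with a perceptron, in pure Python. The paper's actual baseline (e.g. TF-IDF features with a tuned linear model) may differ; this only illustrates the class of model:

```python
# Minimal linear text classifier: bag-of-words features + perceptron.
# Illustrative sketch only; not the paper's exact baseline setup.
from collections import defaultdict

def train_perceptron(docs, labels, epochs=10):
    w = defaultdict(float)  # one weight per token, plus a "<bias>" weight
    for _ in range(epochs):
        for doc, y in zip(docs, labels):  # y in {-1, +1}
            score = w["<bias>"] + sum(w[t] for t in doc.split())
            if y * score <= 0:  # misclassified: perceptron update
                w["<bias>"] += y
                for t in doc.split():
                    w[t] += y
    return w

def predict(w, doc):
    score = w["<bias>"] + sum(w[t] for t in doc.split())
    return 1 if score > 0 else -1

docs = ["good great film", "bad awful film", "great acting", "awful plot"]
labels = [1, -1, 1, -1]
w = train_perceptron(docs, labels)
```

Despite its simplicity, this kind of linear model is fast to train and, with proper features, remains a strong baseline for large-scale text classification.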

Adapting pretrained speech model for Mandarin lyrics transcription and alignment

1 code implementation · 21 Nov 2023 · Jun-You Wang, Chon-In Leong, Yu-Chen Lin, Li Su, Jyh-Shing Roger Jang

With the use of data augmentation and a source separation model, results show that the proposed method achieves a character error rate below 18% on a Mandarin polyphonic dataset for lyrics transcription, and a mean absolute error of 0.071 seconds for lyrics alignment.

Automatic Lyrics Transcription Data Augmentation
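The character error rate (CER) reported above is the standard transcription metric: the Levenshtein edit distance between hypothesis and reference characters, normalized by the reference length. A compact implementation:

```python
# Character error rate (CER): edit distance between hypothesis and
# reference character sequences, divided by the reference length.

def edit_distance(ref, hyp):
    # Single-row dynamic-programming Levenshtein distance.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free on match)
            )
    return dp[-1]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)
```

For example, `cer("abcd", "abxd")` is 0.25: one substitution over four reference characters.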

A study on speech enhancement using exponent-only floating point quantized neural network (EOFP-QNN)

no code implementations · 17 Aug 2018 · Yi-Te Hsu, Yu-Chen Lin, Szu-Wei Fu, Yu Tsao, Tei-Wei Kuo

We evaluated the proposed EOFP quantization technique on two types of neural networks, namely, bidirectional long short-term memory (BLSTM) and fully convolutional neural network (FCN), on a speech enhancement task.

Quantization Regression +1
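The core of exponent-only floating-point quantization can be sketched directly on the IEEE-754 binary32 bit layout: keep only the sign and exponent bits and zero the mantissa, so every value becomes a signed power of two. (The paper's actual scheme, e.g. any mantissa rounding, may differ from this truncating sketch.)

```python
# Exponent-only float quantization sketch: mask away the 23 mantissa bits
# of a float32, leaving sign (1 bit) + exponent (8 bits). Every value is
# truncated to the nearest-below signed power of two.
import struct

def eofp_quantize(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    bits &= 0xFF800000  # top 9 bits = sign + exponent; mantissa -> 0
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For example, 3.7 maps to 2.0 and 0.3 maps to 0.25, since dropping the mantissa leaves only the power-of-two scale of each value.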

Stock Prices Prediction using Deep Learning Models

no code implementations · 25 Sep 2019 · Jialin Liu, Fei Chao, Yu-Chen Lin, Chih-Min Lin

The results show that predicting stock price through price rate of change is better than predicting absolute prices directly.
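The price rate of change (ROC) target mentioned above is a simple transformation: each step's relative change rather than the absolute price. Predicted ROCs can be mapped back to prices given the last observed price:

```python
# Price rate of change: roc[t] = (p[t] - p[t-1]) / p[t-1].
# Predicting ROC instead of absolute prices normalizes across price levels.

def to_roc(prices):
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def from_roc(start_price, rocs):
    prices = [start_price]
    for r in rocs:
        prices.append(prices[-1] * (1 + r))
    return prices

prices = [100.0, 105.0, 102.9]
rocs = to_roc(prices)  # approximately [0.05, -0.02]
```

The round trip `from_roc(prices[0], to_roc(prices))` recovers the original series, so a model trained on ROC still yields price forecasts.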

Speech Recovery for Real-World Self-powered Intermittent Devices

no code implementations · 9 Jun 2021 · Yu-Chen Lin, Tsun-An Hsieh, Kuo-Hsuan Hung, Cheng Yu, Harinath Garudadri, Yu Tsao, Tei-Wei Kuo

Incomplete speech inputs severely degrade the performance of downstream speech signal processing applications.

SEOFP-NET: Compression and Acceleration of Deep Neural Networks for Speech Enhancement Using Sign-Exponent-Only Floating-Points

no code implementations · 8 Nov 2021 · Yu-Chen Lin, Cheng Yu, Yi-Te Hsu, Szu-Wei Fu, Yu Tsao, Tei-Wei Kuo

In this paper, a novel sign-exponent-only floating-point network (SEOFP-NET) technique is proposed to compress the model size and accelerate the inference time for speech enhancement, a regression task of speech signal processing.

Model Compression Regression +1
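One way a sign-exponent-only format can accelerate inference: every weight is a signed power of two, so multiplying an activation by a weight reduces to adjusting the activation's exponent, which is far cheaper than a full floating-point multiply. A sketch of the idea (not necessarily the paper's exact implementation):

```python
# Multiplying by a signed power-of-two weight via exponent adjustment.
# Sketch of the acceleration idea behind sign-exponent-only formats.
import math

def mul_by_pow2_weight(x, w):
    """Multiply x by w, where w is assumed to be +/- 2**k for integer k."""
    m, e = math.frexp(w)  # w = m * 2**e with |m| in [0.5, 1)
    assert abs(m) == 0.5, "weight must be a signed power of two"
    y = math.ldexp(x, e - 1)  # shift x's exponent by k = e - 1
    return -y if m < 0 else y
```

In hardware or integer arithmetic, the exponent shift corresponds to a small integer addition on the bit pattern rather than a mantissa multiplication.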

Novel Preprocessing Technique for Data Embedding in Engineering Code Generation Using Large Language Model

no code implementations · 27 Nov 2023 · Yu-Chen Lin, Akhilesh Kumar, Norman Chang, Wenliang Zhang, Muhammad Zakir, Rucha Apte, Haiyang He, Chao Wang, Jyh-Shing Roger Jang

We present four main contributions to enhance the performance of Large Language Models (LLMs) in generating domain-specific code: (i) utilizing LLM-based data splitting and data renovation techniques to improve the semantic representation of embeddings' space; (ii) introducing the Chain of Density for Renovation Credibility (CoDRC), driven by LLMs, and the Adaptive Text Renovation (ATR) algorithm for assessing data renovation reliability; (iii) developing the Implicit Knowledge Expansion and Contemplation (IKEC) Prompt technique; and (iv) effectively refactoring existing scripts to generate new and high-quality scripts with LLMs.

Code Generation Language Modelling +2

Improving Facial Landmark Detection Accuracy and Efficiency with Knowledge Distillation

no code implementations · 9 Apr 2024 · Zong-Wei Hong, Yu-Chen Lin

Facial-landmark detection has advanced significantly in computer vision and is increasingly essential across applications such as augmented reality, facial recognition, and emotion analysis.

Emotion Recognition Facial Landmark Detection +4
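Knowledge distillation for a regression task like landmark detection is typically a weighted combination of a supervised loss against ground-truth landmarks and a mimicry loss against the larger teacher's predictions. A generic sketch (the paper's exact loss may differ):

```python
# Generic distillation objective for landmark regression: the student
# matches both the ground-truth coordinates and the teacher's outputs.
# Illustrative sketch, not the paper's exact formulation.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distill_loss(student, teacher, ground_truth, alpha=0.5):
    # alpha balances label supervision against teacher mimicry.
    return alpha * mse(student, ground_truth) + (1 - alpha) * mse(student, teacher)

loss = distill_loss(student=[0.0, 1.0], teacher=[0.5, 1.0], ground_truth=[1.0, 1.0])
```

The teacher term provides a smoother training signal than hard labels alone, which is what lets a small, efficient student approach the teacher's accuracy.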
