Search Results for author: Kuan-Yu Chen

Found 52 papers, 4 papers with code

A context-aware knowledge transferring strategy for CTC-based ASR

1 code implementation · 12 Oct 2022 · Ke-Han Lu, Kuan-Yu Chen

Non-autoregressive automatic speech recognition (ASR) modeling has received increasing attention recently because of its fast decoding speed and superior performance.

Automatic Speech Recognition (ASR) +2

HypR: A comprehensive study for ASR hypothesis revising with a reference corpus

1 code implementation · 18 Sep 2023 · Yi-Wei Wang, Ke-Han Lu, Kuan-Yu Chen

In addition, we implement and compare several classic and representative methods, showing the recent research progress in revising speech recognition results.

Automatic Speech Recognition (ASR) +1

Trompt: Towards a Better Deep Neural Network for Tabular Data

1 code implementation · 29 May 2023 · Kuan-Yu Chen, Ping-Han Chiang, Hsin-Rung Chou, Ting-Wei Chen, Tien-Hao Chang

However, a recently published tabular benchmark shows that deep neural networks still fall behind tree-based models on tabular datasets.

Scalable Sentiment for Sequence-to-sequence Chatbot Response with Performance Analysis

no code implementations · 7 Apr 2018 · Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee

Conventional seq2seq chatbot models only try to find the sentences with the highest probabilities conditioned on the input sequences, without considering the sentiment of the output sentences.

Chatbot reinforcement-learning +1

Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings

no code implementations · 1 Apr 2018 · Da-Rong Liu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee

Unsupervised discovery of acoustic tokens from audio corpora without annotation and learning vector representations for these tokens have been widely studied.

Generative Adversarial Network

Learning to Distill: The Essence Vector Modeling Framework

no code implementations · COLING 2016 · Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang

The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition.

Denoising Document Embedding +6

Novel Word Embedding and Translation-based Language Modeling for Extractive Speech Summarization

no code implementations · 22 Jul 2016 · Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang, Hsin-Hsi Chen

Word embedding methods revolve around learning continuous distributed vector representations of words with neural networks, which can capture semantic and/or syntactic cues, and in turn be used to induce similarity measures among words, sentences and documents in context.

Language Modelling Representation Learning +1

Improved Spoken Document Summarization with Coverage Modeling Techniques

no code implementations · 20 Jan 2016 · Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang

Apart from MMR, as far as we are aware, there is a dearth of research concentrating on reducing redundancy or increasing diversity for the spoken document summarization task.

Document Summarization Extractive Summarization +1

Leveraging Word Embeddings for Spoken Document Summarization

no code implementations · 14 Jun 2015 · Kuan-Yu Chen, Shih-Hung Liu, Hsin-Min Wang, Berlin Chen, Hsin-Hsi Chen

Owing to the rapidly growing multimedia content available on the Internet, extractive spoken document summarization, with the purpose of automatically selecting a set of representative sentences from a spoken document to concisely express the most important theme of the document, has been an active area of research and experimentation.

Document Summarization Sentence +1

Completely Unsupervised Speech Recognition By A Generative Adversarial Network Harmonized With Iteratively Refined Hidden Markov Models

no code implementations · 8 Apr 2019 · Kuan-Yu Chen, Che-Ping Tsai, Da-Rong Liu, Hung-Yi Lee, Lin-shan Lee

Producing a large annotated speech corpus for training ASR systems remains difficult for the more than 95% of the world's languages that are low-resourced, but collecting a relatively large unlabeled data set for such languages is more achievable.

Generative Adversarial Network speech-recognition +2

A neural document language modeling framework for spoken document retrieval

no code implementations · 31 Oct 2019 · Li-Phen Yen, Zhen-Yu Wu, Kuan-Yu Chen

Recent developments in deep learning have led to significant innovation in various classic and practical subjects, including speech recognition, computer vision, question answering, and information retrieval.

Information Retrieval Language Modelling +4

An Audio-enriched BERT-based Framework for Spoken Multiple-choice Question Answering

no code implementations · 25 May 2020 · Chia-Chih Kuo, Shang-Bao Luo, Kuan-Yu Chen

In a spoken multiple-choice question answering (SMCQA) task, given a passage, a question, and multiple choices all in the form of speech, the machine needs to pick the correct choice to answer the question.

Automatic Speech Recognition (ASR) +3

Investigation of Sentiment Controllable Chatbot

no code implementations · 11 Jul 2020 · Hung-Yi Lee, Cheng-Hao Ho, Chien-Fu Lin, Chiung-Chih Chang, Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, Kuan-Yu Chen

Conventional seq2seq chatbot models attempt only to find sentences with the highest probabilities conditioned on the input sequences, without considering the sentiment of the output sentences.

Chatbot reinforcement-learning +1

Speech Recognition by Simply Fine-tuning BERT

no code implementations · 30 Jan 2021 · Wen-Chin Huang, Chia-Hua Wu, Shang-Bao Luo, Kuan-Yu Chen, Hsin-Min Wang, Tomoki Toda

We propose a simple method for automatic speech recognition (ASR) by fine-tuning BERT, which is a language model (LM) trained on large-scale unlabeled text data and can generate rich contextual representations.

Automatic Speech Recognition (ASR) +2

Non-autoregressive Transformer-based End-to-end ASR using BERT

no code implementations · 10 Apr 2021 · Fu-Hao Yu, Kuan-Yu Chen

Transformer-based models have led to significant innovation in classical and practical subjects as varied as speech processing, natural language processing, and computer vision.

Automatic Speech Recognition (ASR) +2

A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021

no code implementations · 24 Jun 2021 · Ke-Han Lu, Bo-Han Fang, Kuan-Yu Chen

In this paper, inspired by the successes of vision-language pre-trained models and the benefits of training with adversarial attacks, we present a novel transformer-based cross-modal fusion model that incorporates both notions for the VQA Challenge 2021.

Visual Question Answering (VQA)

A Flexible and Extensible Framework for Multiple Answer Modes Question Answering

no code implementations · ROCLING 2021 · Cheng-Chung Fan, Chia-Chih Kuo, Shang-Bao Luo, Pei-Jun Liao, Kuang-Yu Chang, Chiao-Wei Hsu, Meng-Tse Wu, Shih-Hong Tsai, Tzu-Man Wu, Aleksandra Smolka, Chao-Chun Liang, Hsin-Min Wang, Kuan-Yu Chen, Yu Tsao, Keh-Yih Su

Only a few of them adopt several answer generation modules to provide different mechanisms; however, these either lack an aggregation mechanism to merge the answers from the various modules, or are too complicated to be implemented with neural networks.

Answer Generation Question Answering

A BERT-based Siamese-structured Retrieval Model

no code implementations · ROCLING 2021 · Hung-Yun Chiang, Kuan-Yu Chen

Due to the development of deep learning, natural language processing tasks have made great progress by leveraging Bidirectional Encoder Representations from Transformers (BERT).

Information Retrieval Retrieval

ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-based semantic analyzer with word-level information

no code implementations · ROCLING 2021 · Ke-Han Lu, Kuan-Yu Chen

In this paper, we propose a BERT-based dimensional semantic analyzer designed to incorporate word-level information.

Sentiment Analysis

A Lexical-aware Non-autoregressive Transformer-based ASR Model

no code implementations · 18 May 2023 · Chong-En Lin, Kuan-Yu Chen

Non-autoregressive automatic speech recognition (ASR) has become a mainstream approach to ASR modeling because of its fast decoding speed and satisfactory results.

Automatic Speech Recognition (ASR) +1
