Search Results for author: Kuan-Yu Chen

Found 46 papers, 1 paper with code

ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-based semantic analyzer with word-level information

no code implementations ROCLING 2021 Ke-Han Lu, Kuan-Yu Chen

In this paper, we propose a BERT-based dimensional semantic analyzer that is designed by incorporating word-level information.

Sentiment Analysis

A BERT-based Siamese-structured Retrieval Model

no code implementations ROCLING 2021 Hung-Yun Chiang, Kuan-Yu Chen

Due to the development of deep learning, natural language processing tasks have made great progress by leveraging the bidirectional encoder representations from Transformers (BERT).

Information Retrieval
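The Siamese (bi-encoder) idea behind this retrieval model can be sketched with a toy stand-in for the BERT encoder: the same encoder maps queries and documents into one vector space, and retrieval reduces to a similarity ranking. The bag-of-words encoder below is a hypothetical placeholder, not the paper's model.

```python
import numpy as np

def encode(text, vocab):
    # Stand-in for the shared BERT encoder: an L2-normalised
    # bag-of-words vector. In a Siamese setup the SAME encoder
    # embeds both queries and documents.
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def rank(query, docs):
    # Retrieval becomes a similarity ranking in the shared space.
    words = {w for d in docs + [query] for w in d.lower().split()}
    vocab = {t: i for i, t in enumerate(sorted(words))}
    q = encode(query, vocab)
    scores = [float(q @ encode(d, vocab)) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

docs = ["bert for information retrieval",
        "speech recognition with transformers",
        "a chatbot with controllable sentiment"]
print(rank("bert retrieval model", docs)[0])  # 0 (the BERT/retrieval document)
```

Because documents can be encoded offline, only the query needs encoding at search time, which is the main efficiency argument for bi-encoders over cross-encoders.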

A Flexible and Extensible Framework for Multiple Answer Modes Question Answering

no code implementations ROCLING 2021 Cheng-Chung Fan, Chia-Chih Kuo, Shang-Bao Luo, Pei-Jun Liao, Kuang-Yu Chang, Chiao-Wei Hsu, Meng-Tse Wu, Shih-Hong Tsai, Tzu-Man Wu, Aleksandra Smolka, Chao-Chun Liang, Hsin-Min Wang, Kuan-Yu Chen, Yu Tsao, Keh-Yih Su

Only a few of them adopt several answer generation modules for providing different mechanisms; however, they either lack an aggregation mechanism to merge the answers from various modules, or are too complicated to be implemented with neural networks.

Question Answering

A Transformer-based Cross-modal Fusion Model with Adversarial Training for VQA Challenge 2021

no code implementations24 Jun 2021 Ke-Han Lu, Bo-Han Fang, Kuan-Yu Chen

In this paper, inspired by the successes of vision-language pre-trained models and the benefits of training with adversarial attacks, we present a novel transformer-based cross-modal fusion model that incorporates both notions for the VQA Challenge 2021.

Visual Question Answering
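The "training with adversarial attacks" notion can be illustrated with a minimal FGSM-style input perturbation on a toy logistic model; this is a generic stand-in for perturbing a fusion model's inputs, not the paper's actual setup, and all numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)                    # toy model weights
x, y, eps = np.array([1.0, -0.5]), 1.0, 0.1

def logistic_loss(w, x, y):
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def loss_grad_x(w, x, y):
    # Gradient of the logistic loss with respect to the INPUT x.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

# FGSM: step along the sign of the input gradient, bounded by eps,
# to build a worst-case perturbed input.
x_adv = x + eps * np.sign(loss_grad_x(w, x, y))

# Adversarial training would then minimise loss on both x and x_adv.
print(logistic_loss(w, x_adv, y) >= logistic_loss(w, x, y))  # True
```

The perturbation raises the loss at the current parameters, so training on both clean and perturbed inputs encourages robustness to small input shifts.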

Non-autoregressive Transformer-based End-to-end ASR using BERT

no code implementations10 Apr 2021 Fu-Hao Yu, Kuan-Yu Chen

Transformer-based models have led to significant innovations in various classic and practical subjects, including speech processing, natural language processing, and computer vision.

Speech Recognition

Speech Recognition by Simply Fine-tuning BERT

no code implementations30 Jan 2021 Wen-Chin Huang, Chia-Hua Wu, Shang-Bao Luo, Kuan-Yu Chen, Hsin-Min Wang, Tomoki Toda

We propose a simple method for automatic speech recognition (ASR) by fine-tuning BERT, which is a language model (LM) trained on large-scale unlabeled text data and can generate rich contextual representations.

Speech Recognition

Investigation of Sentiment Controllable Chatbot

no code implementations11 Jul 2020 Hung-Yi Lee, Cheng-Hao Ho, Chien-Fu Lin, Chiung-Chih Chang, Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, Kuan-Yu Chen

Conventional seq2seq chatbot models attempt only to find sentences with the highest probabilities conditioned on the input sequences, without considering the sentiment of the output sentences.


An Audio-enriched BERT-based Framework for Spoken Multiple-choice Question Answering

no code implementations25 May 2020 Chia-Chih Kuo, Shang-Bao Luo, Kuan-Yu Chen

In a spoken multiple-choice question answering (SMCQA) task, given a passage, a question, and multiple choices all in the form of speech, the machine needs to pick the correct choice to answer the question.

Question Answering Speech Recognition

A neural document language modeling framework for spoken document retrieval

no code implementations31 Oct 2019 Li-Phen Yen, Zhen-Yu Wu, Kuan-Yu Chen

Recent developments in deep learning have led to significant innovations in various classic and practical subjects, including speech recognition, computer vision, question answering, information retrieval, and so on.

Information Retrieval Question Answering +1

Completely Unsupervised Speech Recognition By A Generative Adversarial Network Harmonized With Iteratively Refined Hidden Markov Models

no code implementations8 Apr 2019 Kuan-Yu Chen, Che-Ping Tsai, Da-Rong Liu, Hung-Yi Lee, Lin-shan Lee

Producing a large annotated speech corpus for training ASR systems remains difficult for the more than 95% of the world's languages that are low-resourced, but collecting a relatively large unlabeled data set for such languages is more achievable.

Speech Recognition Unsupervised Speech Recognition

Scalable Sentiment for Sequence-to-sequence Chatbot Response with Performance Analysis

no code implementations7 Apr 2018 Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee

Conventional seq2seq chatbot models only try to find the sentences with the highest probabilities conditioned on the input sequences, without considering the sentiment of the output sentences.


Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings

no code implementations1 Apr 2018 Da-Rong Liu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee

Unsupervised discovery of acoustic tokens from audio corpora without annotation and learning vector representations for these tokens have been widely studied.

Learning to Distill: The Essence Vector Modeling Framework

no code implementations COLING 2016 Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang

The D-EV model not only inherits the advantages of the EV model but can also infer a more robust representation for a given spoken paragraph in spite of imperfect speech recognition.

Denoising Document Embedding +3

Novel Word Embedding and Translation-based Language Modeling for Extractive Speech Summarization

no code implementations22 Jul 2016 Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang, Hsin-Hsi Chen

Word embedding methods revolve around learning continuous distributed vector representations of words with neural networks, which can capture semantic and/or syntactic cues, and in turn be used to induce similarity measures among words, sentences and documents in context.

Language Modelling Representation Learning +1
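The similarity idea in the snippet above can be sketched concretely: word vectors are averaged into a sentence-level representation and compared by cosine similarity. The tiny hand-made vectors below merely stand in for embeddings learned with neural networks.

```python
import numpy as np

# Toy word vectors standing in for learned embeddings
# (the words and values here are illustrative only).
emb = {
    "dog":   np.array([1.0, 0.1]),
    "puppy": np.array([0.9, 0.2]),
    "car":   np.array([0.1, 1.0]),
}

def sent_vec(sentence):
    # Averaging word vectors is one simple way to lift word
    # embeddings to a sentence-level representation.
    vs = [emb[w] for w in sentence.lower().split() if w in emb]
    return np.mean(vs, axis=0) if vs else np.zeros(2)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically close words score higher than unrelated ones:
print(cosine(sent_vec("dog"), sent_vec("puppy"))
      > cosine(sent_vec("dog"), sent_vec("car")))  # True
```

Such similarity measures between sentences and the whole document are what extractive summarizers use to rank candidate sentences.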

Improved Spoken Document Summarization with Coverage Modeling Techniques

no code implementations20 Jan 2016 Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Hsin-Min Wang

Apart from MMR, there is a dearth of research concentrating on reducing redundancy or increasing diversity for the spoken document summarization task, as far as we are aware.

Document Summarization Extractive Summarization
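Since the snippet names MMR, here is a minimal sketch of Maximal Marginal Relevance selection over toy sentence vectors (the vectors are illustrative, not the paper's features): MMR greedily picks sentences relevant to the document while penalising similarity to sentences already selected, which is the redundancy-reduction idea the paper builds on.

```python
import numpy as np

def mmr_select(sent_vecs, doc_vec, k=2, lam=0.7):
    # Maximal Marginal Relevance: greedily take the sentence with the
    # highest relevance-to-document score, minus a penalty for
    # similarity to sentences already in the summary.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    chosen, remaining = [], list(range(len(sent_vecs)))
    while remaining and len(chosen) < k:
        best = max(remaining,
                   key=lambda i: lam * cos(sent_vecs[i], doc_vec)
                   - (1 - lam) * max((cos(sent_vecs[i], sent_vecs[j])
                                      for j in chosen), default=0.0))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Sentences 0 and 1 are near-duplicates; MMR keeps only one of them.
sents = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]])
doc = np.array([0.7, 0.7])
print(mmr_select(sents, doc, k=2))  # [1, 2]
```

The trade-off parameter `lam` balances relevance against diversity: at `lam=1.0` the selection degenerates to plain relevance ranking and would pick both near-duplicates.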

Leveraging Word Embeddings for Spoken Document Summarization

no code implementations14 Jun 2015 Kuan-Yu Chen, Shih-Hung Liu, Hsin-Min Wang, Berlin Chen, Hsin-Hsi Chen

Owing to the rapidly growing multimedia content available on the Internet, extractive spoken document summarization, with the purpose of automatically selecting a set of representative sentences from a spoken document to concisely express the most important theme of the document, has been an active area of research and experimentation.

Document Summarization Word Embeddings
