Search Results for author: Suwon Shon

Found 17 papers, 5 papers with code

On the Use of External Data for Spoken Named Entity Recognition

no code implementations14 Dec 2021 Ankita Pasad, Felix Wu, Suwon Shon, Karen Livescu, Kyu J. Han

In this work we focus on low-resource spoken named entity recognition (NER) and address the question: Beyond self-supervised pre-training, how can we use external speech and/or text data that are not annotated for the task?

Knowledge Distillation · Named Entity Recognition +4

Leveraging Pre-trained Language Model for Speech Sentiment Analysis

no code implementations11 Jun 2021 Suwon Shon, Pablo Brusco, Jing Pan, Kyu J. Han, Shinji Watanabe

In this paper, we explore the use of pre-trained language models to learn sentiment information of written texts for speech sentiment analysis.

Automatic Speech Recognition · Sentiment Analysis

Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

no code implementations11 May 2019 Achintya kr. Sarkar, Zheng-Hua Tan, Hao Tang, Suwon Shon, James Glass

A number of studies have examined the extraction of bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV).

Automatic Speech Recognition · Contrastive Learning +1

VoiceID Loss: Speech Enhancement for Speaker Verification

no code implementations7 Apr 2019 Suwon Shon, Hao Tang, James Glass

In this paper, we propose VoiceID loss, a novel loss function for training a speech enhancement model to improve the robustness of speaker verification.

Speaker Verification · Speech Enhancement
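The idea described in the abstract above can be illustrated with a toy sketch: the loss of an enhancement model is the speaker-classification error that a frozen speaker network makes on the *enhanced* output, so the enhancer learns to preserve speaker identity. All shapes, weights, and function names below are invented for illustration; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def enhance(noisy, W_mask):
    """Toy enhancement model: predict a [0, 1] sigmoid mask and apply it."""
    mask = 1.0 / (1.0 + np.exp(-(noisy @ W_mask)))
    return noisy * mask

def speaker_log_probs(frames, W_spk):
    """Stand-in for a frozen speaker classifier: mean-pool frames, softmax over speakers."""
    logits = frames.mean(axis=0) @ W_spk
    logits -= logits.max()                       # numerical stability
    return logits - np.log(np.exp(logits).sum())

def voiceid_style_loss(noisy, speaker_id, W_mask, W_spk):
    """Negative log-probability of the true speaker, computed on ENHANCED speech.
    In training, gradients would update only W_mask; W_spk stays frozen."""
    enhanced = enhance(noisy, W_mask)
    return -speaker_log_probs(enhanced, W_spk)[speaker_id]

noisy = rng.normal(size=(50, 40))        # 50 frames, 40 frequency bins (made up)
W_mask = rng.normal(size=(40, 40)) * 0.1
W_spk = rng.normal(size=(40, 8))         # 8 hypothetical enrolled speakers
loss = voiceid_style_loss(noisy, speaker_id=3, W_mask=W_mask, W_spk=W_spk)
print(round(float(loss), 3))
```

The key design point the abstract conveys is that the supervision signal comes from the verification task itself rather than from a clean-speech reconstruction target.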

Domain Attentive Fusion for End-to-end Dialect Identification with Unknown Target Domain

no code implementations4 Dec 2018 Suwon Shon, Ahmed Ali, James Glass

An important issue for end-to-end systems is having some knowledge of the application domain, because the system can be vulnerable to use cases not seen during training; such a scenario is often referred to as a domain-mismatched condition.

Dialect Identification

Noise-tolerant Audio-visual Online Person Verification using an Attention-based Neural Network Fusion

no code implementations27 Nov 2018 Suwon Shon, Tae-Hyun Oh, James Glass

In this paper, we present a multi-modal online person verification system using both speech and visual signals.

Large-scale Speaker Retrieval on Random Speaker Variability Subspace

no code implementations27 Nov 2018 Suwon Shon, Young-Gun Lee, Taesu Kim

In this paper, we propose Random Speaker-variability Subspace (RSS) projection to map data into LSH-based hash tables.
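The retrieval mechanism referenced above can be sketched with the generic sign-of-random-projection flavor of LSH: each embedding is hashed by the signs of its projections onto a set of hyperplanes, and a query only needs an exact search over its own bucket. Note the stand-in here uses plain random hyperplanes, whereas the paper draws the projection from a speaker-variability subspace; sizes and the perturbation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def hash_key(vec, planes):
    """Binary LSH key: sign of the projection onto each hyperplane."""
    return tuple((vec @ planes > 0).astype(int))

dim, bits = 20, 8
planes = rng.normal(size=(dim, bits))    # stand-in for the RSS projection

# Build the hash table over a toy "database" of speaker embeddings.
db = rng.normal(size=(1000, dim))
table = {}
for idx, emb in enumerate(db):
    table.setdefault(hash_key(emb, planes), []).append(idx)

# A query close to entry 42 will usually hash to the same bucket,
# so the exact search runs over that bucket only, not all 1000 entries.
query = db[42] + rng.normal(scale=0.01, size=dim)
candidates = table.get(hash_key(query, planes), [])
print(len(candidates))
```

The speedup comes from the bucket sizes: with 8 bits the table has up to 256 buckets, so each exact search touches only a small fraction of the database.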

Learning pronunciation from a foreign language in speech synthesis networks

2 code implementations23 Nov 2018 Young-Gun Lee, Suwon Shon, Taesu Kim

First, we train the speech synthesis network bilingually in English and Korean and analyze how the network learns the relations of phoneme pronunciation between the languages.

Speech Synthesis

Frame-level speaker embeddings for text-independent speaker recognition and analysis of end-to-end model

1 code implementation12 Sep 2018 Suwon Shon, Hao Tang, James Glass

In this paper, we propose a Convolutional Neural Network (CNN) based speaker recognition model for extracting robust speaker embeddings.

Frame · Speaker Recognition +1

Unsupervised Representation Learning of Speech for Dialect Identification

no code implementations12 Sep 2018 Suwon Shon, Wei-Ning Hsu, James Glass

In this paper, we explore the use of a factorized hierarchical variational autoencoder (FHVAE) model to learn an unsupervised latent representation for dialect identification (DID).

Dialect Identification · Disentanglement

MCE 2018: The 1st Multi-target Speaker Detection and Identification Challenge Evaluation (MCE) Plan, Dataset and Baseline System

1 code implementation17 Jul 2018 Suwon Shon, Najim Dehak, Douglas Reynolds, James Glass

The Multitarget Challenge aims to assess how well current speech technology is able to determine whether or not a recorded utterance was spoken by one of a large number of 'blacklisted' speakers.

Audio and Speech Processing · Sound

Convolutional Neural Networks and Language Embeddings for End-to-End Dialect Recognition

2 code implementations12 Mar 2018 Suwon Shon, Ahmed Ali, James Glass

Although the Siamese network with language embeddings did not achieve as good a result as the end-to-end DID system, the two approaches had good synergy when combined together in a fused system.

Sound · Audio and Speech Processing

MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre Broadcast Challenge

no code implementations28 Aug 2017 Suwon Shon, Ahmed Ali, James Glass

In order to achieve a robust ADI system, we explored both Siamese neural network models to learn similarities and dissimilarities among Arabic dialects, as well as i-vector post-processing to adapt to domain mismatches.

Arabic Speech Recognition · Dialect Identification +1
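The Siamese setup mentioned in the abstract above can be sketched in a few lines: two inputs pass through the *same* embedding branch, and a contrastive loss pulls same-dialect pairs together while pushing different-dialect pairs beyond a margin. The network, shapes, and weights here are toy stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(x, W):
    """Shared branch of the Siamese network (identical weights W for both inputs)."""
    h = np.tanh(x @ W)
    return h / np.linalg.norm(h)

def contrastive_loss(xa, xb, same_dialect, W, margin=1.0):
    """Same-dialect pairs: minimize distance. Different: push beyond the margin."""
    d = np.linalg.norm(embed(xa, W) - embed(xb, W))
    if same_dialect:
        return d ** 2
    return max(0.0, margin - d) ** 2

W = rng.normal(size=(30, 16)) * 0.2      # toy shared weights
xa, xb = rng.normal(size=30), rng.normal(size=30)
pos = contrastive_loss(xa, xa, True, W)  # identical inputs: distance is 0
neg = contrastive_loss(xa, xb, False, W)
print(pos, round(float(neg), 3))
```

Because the two branches share weights, the model learns a single embedding space in which distance itself encodes dialect similarity, which is what makes it complementary to a direct end-to-end classifier in a fused system.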

KU-ISPL Speaker Recognition Systems under Language mismatch condition for NIST 2016 Speaker Recognition Evaluation

no code implementations3 Feb 2017 Suwon Shon, Hanseok Ko

Using a development dataset spoken in Cebuano and Mandarin, we prepared evaluation trials through preliminary experiments to compensate for the language-mismatched condition.

Speaker Recognition

KU-ISPL Language Recognition System for NIST 2015 i-Vector Machine Learning Challenge

no code implementations21 Sep 2016 Suwon Shon, Seongkyu Mun, John H. L. Hansen, Hanseok Ko

The experimental results show that the use of duration and score fusion improves language recognition performance by 5% relative in LRiMLC15 cost.
