Search Results for author: Feilong Bao

Found 10 papers, 3 papers with code

Mongolian Questions Classification Based on Multi-Head Attention

no code implementations • CCL 2020 • Guangyi Wang, Feilong Bao, Weihua Wang

This paper proposes a classification model that combines a Bi-LSTM with the Multi-Head Attention mechanism.

Classification Question Answering
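The entry above describes multi-head attention applied over Bi-LSTM outputs. A minimal NumPy sketch of multi-head scaled dot-product attention follows; the head count, dimensions, and random projections are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(H, num_heads=4):
    """H: (seq_len, d_model) sequence features, e.g. Bi-LSTM outputs."""
    seq_len, d_model = H.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    rng = np.random.default_rng(0)
    outputs = []
    for _ in range(num_heads):
        # One random projection per head for queries, keys, and values.
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
                      for _ in range(3))
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        scores = softmax(Q @ K.T / np.sqrt(d_k))   # (seq_len, seq_len)
        outputs.append(scores @ V)                 # (seq_len, d_k)
    return np.concatenate(outputs, axis=-1)        # (seq_len, d_model)

H = np.random.default_rng(1).standard_normal((10, 32))
out = multi_head_attention(H)
print(out.shape)  # (10, 32)
```

In a trained classifier the per-head projections would be learned parameters and the attended sequence would be pooled before a softmax output layer.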

L$^2$GC: Lorentzian Linear Graph Convolutional Networks For Node Classification

1 code implementation • 10 Mar 2024 • Qiuyu Liang, Weihua Wang, Feilong Bao, Guanglai Gao

Specifically, we map the learned features of graph nodes into hyperbolic space, and then perform a Lorentzian linear feature transformation to capture the underlying tree-like structure of data.

Node Classification
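The snippet above mentions mapping learned features into hyperbolic space. A hedged sketch of the standard lift onto the Lorentz-model hyperboloid is shown below; the exact formulation in the paper may differ.

```python
import numpy as np

def lift_to_hyperboloid(x):
    """Lift Euclidean features x: (n, d) onto the hyperboloid
    {z : <z, z>_L = -1, z_0 > 0}, where the Lorentzian inner product is
    <u, v>_L = -u_0 v_0 + sum_i u_i v_i.
    The time coordinate z_0 = sqrt(1 + ||x||^2) guarantees the constraint."""
    x0 = np.sqrt(1.0 + (x ** 2).sum(axis=1, keepdims=True))
    return np.concatenate([x0, x], axis=1)   # (n, d+1)

def lorentz_inner(u, v):
    return -u[:, 0] * v[:, 0] + (u[:, 1:] * v[:, 1:]).sum(axis=1)

x = np.random.default_rng(0).standard_normal((5, 3))
z = lift_to_hyperboloid(x)
print(np.allclose(lorentz_inner(z, z), -1.0))  # True
```

The check confirms every lifted point satisfies the hyperboloid constraint, which is what makes a subsequent Lorentzian linear transformation well-defined on this manifold.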

MnTTS2: An Open-Source Multi-Speaker Mongolian Text-to-Speech Synthesis Dataset

1 code implementation • 11 Dec 2022 • Kailin Liang, Bin Liu, Yifan Hu, Rui Liu, Feilong Bao, Guanglai Gao

Text-to-Speech (TTS) synthesis for low-resource languages is an attractive research issue in academia and industry nowadays.

Speech Synthesis Text-To-Speech Synthesis

MnTTS: An Open-Source Mongolian Text-to-Speech Synthesis Dataset and Accompanied Baseline

1 code implementation • 22 Sep 2022 • Yifan Hu, Pengkai Yin, Rui Liu, Feilong Bao, Guanglai Gao

This paper introduces a high-quality open-source text-to-speech (TTS) synthesis dataset for Mongolian, a low-resource language spoken by over 10 million people worldwide.

Speech Synthesis Text-To-Speech Synthesis

Modeling Prosodic Phrasing with Multi-Task Learning in Tacotron-based TTS

no code implementations • 11 Aug 2020 • Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

We propose a multi-task learning scheme for Tacotron training that optimizes the system to predict both the Mel spectrum and phrase breaks.

Multi-Task Learning Speech Synthesis
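The multi-task objective described above can be sketched as a weighted sum of a Mel-spectrum regression loss and a phrase-break classification loss. This is an assumed illustration of the general scheme, not the authors' code; the weight `alpha` is a hypothetical hyperparameter.

```python
import numpy as np

def mel_loss(pred_mel, true_mel):
    # Frame-level MSE between predicted and target Mel spectra.
    return np.mean((pred_mel - true_mel) ** 2)

def break_loss(pred_prob, true_label, eps=1e-8):
    # Binary cross-entropy over per-token break / no-break labels.
    p = np.clip(pred_prob, eps, 1 - eps)
    return -np.mean(true_label * np.log(p) + (1 - true_label) * np.log(1 - p))

def multitask_loss(pred_mel, true_mel, pred_prob, true_label, alpha=0.5):
    return mel_loss(pred_mel, true_mel) + alpha * break_loss(pred_prob, true_label)

rng = np.random.default_rng(0)
loss = multitask_loss(rng.standard_normal((8, 80)), rng.standard_normal((8, 80)),
                      rng.uniform(0.01, 0.99, 8), rng.integers(0, 2, 8))
print(loss > 0)  # True
```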

WaveTTS: Tacotron-based TTS with Joint Time-Frequency Domain Loss

no code implementations • 2 Feb 2020 • Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

To address this problem, we propose a new training scheme for Tacotron-based TTS, referred to as WaveTTS, with two loss functions: 1) a time-domain loss, denoted as the waveform loss, that measures the distortion between the natural and generated waveforms; and 2) a frequency-domain loss that measures the Mel-scale acoustic feature loss between the natural and generated acoustic features.
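A rough sketch of such a joint time/frequency objective is given below. The plain FFT magnitude is a simplification for illustration; the paper uses Mel-scale acoustic features, and the weight `beta` is a hypothetical hyperparameter.

```python
import numpy as np

def joint_tf_loss(gen_wave, nat_wave, beta=1.0):
    # Time-domain term: sample-level MSE between the waveforms.
    time_loss = np.mean((gen_wave - nat_wave) ** 2)
    # Frequency-domain term: MSE between magnitude spectra
    # (a stand-in for the Mel-scale feature loss).
    gen_mag = np.abs(np.fft.rfft(gen_wave))
    nat_mag = np.abs(np.fft.rfft(nat_wave))
    freq_loss = np.mean((gen_mag - nat_mag) ** 2)
    return time_loss + beta * freq_loss

w = np.linspace(-1.0, 1.0, 64)
print(joint_tf_loss(w, w))        # 0.0 for identical waveforms
print(joint_tf_loss(w, w + 0.1) > 0)  # True
```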

Teacher-Student Training for Robust Tacotron-based TTS

no code implementations • 7 Nov 2019 • Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, Haizhou Li

We first train a Tacotron2-based TTS model by always providing natural speech frames to the decoder; this serves as the teacher model.

Decoder Knowledge Distillation
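The teacher-student idea above can be illustrated with a toy autoregressive "decoder": the teacher always consumes natural frames (teacher forcing), while the student runs on its own predictions and is pushed toward the teacher's outputs. Everything below is an assumed conceptual sketch, not the paper's model.

```python
import numpy as np

def decode(W, frames, free_running=False):
    """Toy linear autoregressive decoder: next output = W @ previous input."""
    outs, prev = [], frames[0]
    for t in range(1, len(frames)):
        out = W @ prev
        outs.append(out)
        # Teacher consumes the natural frame; the student its own output.
        prev = out if free_running else frames[t]
    return np.stack(outs)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1
frames = rng.standard_normal((6, 4))          # "natural" speech frames
teacher_out = decode(W, frames, free_running=False)
student_out = decode(W, frames, free_running=True)
# Distillation-style loss: match the free-running student to the teacher.
distill = np.mean((student_out - teacher_out) ** 2)
print(distill >= 0.0)  # True
```

Minimizing such a loss during free-running decoding is what makes the student robust at synthesis time, when no natural frames are available.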

An LSTM Approach with Sub-Word Embeddings for Mongolian Phrase Break Prediction

no code implementations • COLING 2018 • Rui Liu, Feilong Bao, Guanglai Gao, Hui Zhang, Yonghe Wang

In this paper, we first apply word embeddings that focus on sub-word units to the Mongolian Phrase Break (PB) prediction task, using a Long Short-Term Memory (LSTM) model.

Dictionary Learning Machine Translation +2
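Sub-word-aware word embeddings of the kind mentioned above are often built by averaging hashed character n-gram vectors (fastText-style). The sketch below assumes that general technique; it is not the paper's exact model, and the table size and n-gram length are arbitrary choices.

```python
import numpy as np

def subword_embedding(word, table, n=3):
    """Word vector = mean of hashed character n-gram vectors.
    Boundary markers < and > let the model distinguish prefixes/suffixes."""
    padded = f"<{word}>"
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    idx = [hash(g) % len(table) for g in grams]
    return table[idx].mean(axis=0)

rng = np.random.default_rng(0)
table = rng.standard_normal((1000, 16))   # hashed n-gram embedding table
vec = subword_embedding("mongol", table)
print(vec.shape)  # (16,)
```

Feeding such vectors to an LSTM lets the model generalize over the rich suffix morphology of Mongolian, where whole-word embeddings would suffer from sparsity.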

Mongolian Named Entity Recognition System with Rich Features

no code implementations • COLING 2016 • Weihua Wang, Feilong Bao, Guanglai Gao

The system based on segmenting suffixes with all proposed features yields a benchmark result of F-measure = 84.65 on this corpus.

Machine Translation named-entity-recognition +3
