Search Results for author: Ming Tu

Found 13 papers, 1 paper with code

Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents

1 code implementation • 1 Nov 2019 • Ming Tu, Kevin Huang, Guangtao Wang, Jing Huang, Xiaodong He, Bo-Wen Zhou

Interpretable multi-hop reading comprehension (RC) over multiple documents is a challenging problem because it demands reasoning over multiple information sources and explaining the answer prediction by providing supporting evidence.

Learning-To-Rank • Multi-Hop Reading Comprehension • +2

Reducing the Model Order of Deep Neural Networks Using Information Theory

no code implementations • 16 May 2016 • Ming Tu, Visar Berisha, Yu Cao, Jae-sun Seo

In this paper, we propose a method to compress deep neural networks by using the Fisher Information metric, which we estimate through a stochastic optimization method that keeps track of second-order information in the network.

General Classification • Network Pruning • +2
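
As a rough illustration of the idea described above (a minimal sketch, not the paper's estimator or pruning schedule): accumulate squared gradients over minibatches as a diagonal empirical Fisher estimate and zero out the lowest-scoring weights. The toy network, random data, and 50% pruning ratio below are placeholders.

```python
# Minimal sketch: diagonal empirical Fisher estimate from squared gradients,
# then magnitude-free pruning of the lowest-scoring weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
data = torch.randn(256, 20)
labels = torch.randint(0, 2, (256,))

# Accumulate squared gradients over minibatches as a diagonal Fisher estimate.
fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
for i in range(0, 256, 32):
    x, y = data[i:i + 32], labels[i:i + 32]
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    for n, p in model.named_parameters():
        fisher[n] += p.grad.detach() ** 2
fisher = {n: f / 8 for n, f in fisher.items()}  # average over the 8 minibatches

# Prune the 50% of weight entries with the smallest Fisher scores (biases kept).
scores = torch.cat([f.flatten() for n, f in fisher.items() if "weight" in n])
threshold = scores.quantile(0.5)
with torch.no_grad():
    for n, p in model.named_parameters():
        if "weight" in n:
            p.mul_((fisher[n] > threshold).float())
```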

Multiple instance learning with graph neural networks

no code implementations • 12 Jun 2019 • Ming Tu, Jing Huang, Xiaodong He, Bo-Wen Zhou

In this paper, we propose a new end-to-end graph neural network (GNN) based algorithm for MIL: we treat each bag as a graph and use GNN to learn the bag embedding, in order to explore the useful structural information among instances in bags.

Multiple Instance Learning
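
To make the bag-as-graph idea concrete, here is a minimal sketch that assumes a fully connected graph over the instances, a single mean-aggregation message-passing step, and mean pooling as the graph readout; the paper's actual GNN architecture may differ.

```python
# Sketch of GNN-based multiple instance learning: each bag is a graph of
# instances, node embeddings are pooled into a bag embedding, then classified.
import torch
import torch.nn as nn


class GNNBagClassifier(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.msg = nn.Linear(in_dim, hid_dim)        # transforms aggregated neighbour features
        self.self_loop = nn.Linear(in_dim, hid_dim)  # transforms the node's own features
        self.cls = nn.Linear(hid_dim, n_classes)     # bag-level classifier

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_instances, in_dim); fully connected graph over the instances.
        n = bag.size(0)
        adj = (torch.ones(n, n) - torch.eye(n)) / max(n - 1, 1)    # row-normalised adjacency
        h = torch.relu(self.self_loop(bag) + self.msg(adj @ bag))  # one message-passing step
        bag_embedding = h.mean(dim=0)                              # graph readout
        return self.cls(bag_embedding)


model = GNNBagClassifier(in_dim=16, hid_dim=32, n_classes=2)
bag = torch.randn(7, 16)   # a bag containing 7 instances
logits = model(bag)
print(logits.shape)        # torch.Size([2])
```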

Speaker-invariant Affective Representation Learning via Adversarial Training

no code implementations • 4 Nov 2019 • Haoqi Li, Ming Tu, Jing Huang, Shrikanth Narayanan, Panayiotis Georgiou

In this paper, we propose a machine learning framework to obtain speech emotion representations by limiting the effect of speaker variability in the speech signals.

Emotion Classification • Representation Learning • +1
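
A common way to realize this kind of speaker invariance is adversarial training through a gradient reversal layer; the sketch below assumes that setup, and the encoder, heads, loss weighting, and dummy data are illustrative rather than the paper's configuration.

```python
# Sketch of speaker-adversarial training: the speaker classifier sees the
# representation through a gradient reversal layer, so the encoder is pushed
# to keep emotion information while discarding speaker information.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip the gradient flowing into the encoder


encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU())  # acoustic-feature encoder
emotion_head = nn.Linear(128, 4)                        # e.g. 4 emotion classes
speaker_head = nn.Linear(128, 10)                       # e.g. 10 training speakers
params = list(encoder.parameters()) + list(emotion_head.parameters()) + list(speaker_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

feats = torch.randn(32, 40)               # dummy frame-level features
emo_labels = torch.randint(0, 4, (32,))
spk_labels = torch.randint(0, 10, (32,))

z = encoder(feats)
emo_loss = ce(emotion_head(z), emo_labels)
spk_loss = ce(speaker_head(GradReverse.apply(z, 0.5)), spk_labels)
(emo_loss + spk_loss).backward()
opt.step()
```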

Graph Sequential Network for Reasoning over Sequences

no code implementations • 4 Apr 2020 • Ming Tu, Jing Huang, Xiaodong He, Bo-Wen Zhou

We validate the proposed GSN on two NLP tasks: interpretable multi-hop reading comprehension on HotpotQA and graph based fact verification on FEVER.

Fact Verification • Machine Reading Comprehension • +1

Language-universal phonetic encoder for low-resource speech recognition

no code implementations • 19 May 2023 • Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang

Our main approach and adaptation are effective on extremely low-resource languages, even within domain- and language-mismatched scenarios.

speech-recognition • Speech Recognition

Language-Universal Phonetic Representation in Multilingual Speech Pretraining for Low-Resource Speech Recognition

no code implementations • 19 May 2023 • Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang

Moreover, on 3 of the 4 languages, the approach outperforms standard HuBERT while saving up to 1.5k hours (75%) of supervised training data.

Self-Supervised Learning • speech-recognition • +1

VoiceShop: A Unified Speech-to-Speech Framework for Identity-Preserving Zero-Shot Voice Editing

no code implementations • 10 Apr 2024 • Philip Anastassiou, Zhenyu Tang, Kainan Peng, Dongya Jia, Jiaxin Li, Ming Tu, Yuping Wang, Yuxuan Wang, Mingbo Ma

We present VoiceShop, a novel speech-to-speech framework that can modify multiple attributes of speech, such as age, gender, accent, and speech style, in a single forward pass while preserving the input speaker's timbre.

Attribute
