Search Results for author: Xuedong Huang

Found 24 papers, 10 papers with code

ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation

1 code implementation • NeurIPS 2023 • Chenyang Le, Yao Qian, Long Zhou, Shujie Liu, Yanmin Qian, Michael Zeng, Xuedong Huang

Joint speech-language training is challenging due to the large demand for training data and GPU consumption, as well as the modality gap between speech and language.

Language Modelling Multi-Task Learning +2

i-Code Studio: A Configurable and Composable Framework for Integrative AI

no code implementations • 23 May 2023 • Yuwei Fang, Mahmoud Khademi, Chenguang Zhu, ZiYi Yang, Reid Pryzant, Yichong Xu, Yao Qian, Takuya Yoshioka, Lu Yuan, Michael Zeng, Xuedong Huang

Artificial General Intelligence (AGI) requires comprehensive understanding and generation capabilities for a variety of tasks spanning different modalities and functionalities.

Question Answering Retrieval +4

i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data

no code implementations • 21 May 2023 • ZiYi Yang, Mahmoud Khademi, Yichong Xu, Reid Pryzant, Yuwei Fang, Chenguang Zhu, Dongdong Chen, Yao Qian, Mei Gao, Yi-Ling Chen, Robert Gmyr, Naoyuki Kanda, Noel Codella, Bin Xiao, Yu Shi, Lu Yuan, Takuya Yoshioka, Michael Zeng, Xuedong Huang

The convergence of text, visual, and audio data is a key step towards human-like artificial intelligence; however, the current Vision-Language-Speech landscape is dominated by encoder-only models that lack generative abilities.

Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

2 code implementations • 6 Dec 2021 • Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang

In particular, we focus on the task of Commonsense Reasoning, demonstrating that the proposed external attention mechanism can augment existing transformer models and significantly improve the model's reasoning capabilities.

 Ranked #1 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning
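The external-attention idea described in this entry can be pictured as letting an attention layer draw its keys and values not only from the input tokens but also from retrieved knowledge rendered as text. The sketch below is a minimal single-head illustration of that reading, assuming random embeddings and illustrative tensor shapes; it is not the paper's implementation, which augments a full pretrained transformer with retrieved textual knowledge.

```python
# Minimal sketch (not the authors' code): attention whose keys/values also
# cover retrieved external-knowledge embeddings, so the query tokens can
# "read" the external evidence alongside the input itself.
import torch
import torch.nn.functional as F

def external_attention(input_emb, knowledge_emb, w_q, w_k, w_v):
    """input_emb: (n, d) embeddings of the question + answer choice.
    knowledge_emb: (m, d) embeddings of retrieved knowledge text
    (e.g. knowledge-graph triples or dictionary glosses). Shapes are illustrative."""
    context = torch.cat([input_emb, knowledge_emb], dim=0)    # (n + m, d)
    q = input_emb @ w_q                                       # (n, d)
    k = context @ w_k                                         # (n + m, d)
    v = context @ w_v                                         # (n + m, d)
    scores = q @ k.T / (q.shape[-1] ** 0.5)                   # (n, n + m)
    return F.softmax(scores, dim=-1) @ v                      # (n, d)

# Toy usage with random embeddings; a real system would use a pretrained encoder.
d = 64
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = external_attention(torch.randn(10, d), torch.randn(20, d), w_q, w_k, w_v)
print(out.shape)  # torch.Size([10, 64])
```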

Florence: A New Foundation Model for Computer Vision

1 code implementation • 22 Nov 2021 • Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, JianFeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang

Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for solving real-world computer vision applications.

Action Classification Action Recognition In Videos +12

One model to enhance them all: array geometry agnostic multi-channel personalized speech enhancement

no code implementations • 20 Oct 2021 • Hassan Taherian, Sefik Emre Eskimez, Takuya Yoshioka, Huaming Wang, Zhuo Chen, Xuedong Huang

Experimental results show that the proposed geometry agnostic model outperforms the model trained on a specific microphone array geometry in both speech quality and automatic speech recognition accuracy.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Personalized Speech Enhancement: New Models and Comprehensive Evaluation

no code implementations • 18 Oct 2021 • Sefik Emre Eskimez, Takuya Yoshioka, Huaming Wang, Xiaofei Wang, Zhuo Chen, Xuedong Huang

Our results show that the proposed models can yield better speech recognition accuracy, speech intelligibility, and perceptual quality than the baseline models, and that the multi-task training can alleviate the target speaker over-suppression (TSOS) issue in addition to improving the speech recognition accuracy.

Speech Enhancement speech-recognition +1

UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data

3 code implementations • 19 Jan 2021 • Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang

In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner.

Multi-Task Learning Representation Learning +3
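The UniSpeech entry describes a multi-task objective that mixes supervised phonetic CTC learning with phonetically-aware contrastive self-supervised learning. The sketch below shows one plausible shape for such a combined loss; the InfoNCE-style contrastive term, the cosine-similarity scoring, and the mixing weight alpha are simplifying assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a combined CTC + contrastive objective in the spirit of the
# abstract above; candidate construction and weighting are assumptions.
import torch
import torch.nn.functional as F

def combined_speech_loss(ctc_log_probs, phone_targets, input_lens, target_lens,
                         anchors, positives, negatives, alpha=0.5, temp=0.1):
    """ctc_log_probs: (T, N, C) log-softmax outputs over phone classes.
    phone_targets: (N, S) phone label ids; input_lens/target_lens: (N,) lengths.
    anchors/positives: (N, D) frame representations; negatives: (N, K, D)."""
    # Supervised branch: phonetic CTC on labeled utterances.
    ctc = F.ctc_loss(ctc_log_probs, phone_targets, input_lens, target_lens)
    # Self-supervised branch: InfoNCE-style contrastive loss with K distractors.
    pos = F.cosine_similarity(anchors, positives, dim=-1) / temp                # (N,)
    neg = F.cosine_similarity(anchors.unsqueeze(1), negatives, dim=-1) / temp   # (N, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                          # (N, 1+K)
    contrastive = F.cross_entropy(logits, torch.zeros(len(anchors), dtype=torch.long))
    # Interpolate the two objectives with a tunable weight.
    return alpha * ctc + (1.0 - alpha) * contrastive
```

In a setup like this, labeled batches could contribute to both terms while unlabeled batches drive only the contrastive term, which is the spirit of learning from both labeled and unlabeled data in a multi-task manner.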

Fusing Context Into Knowledge Graph for Commonsense Question Answering

2 code implementations • Findings (ACL) 2021 • Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, Xuedong Huang

However, although a KG contains rich structural information, it lacks the context to provide a more precise understanding of the concepts.

Ranked #4 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning Knowledge Graphs +3
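One way to read "fusing context into the knowledge graph" is to pair each retrieved KG triple with a short textual description of its concepts and let a text encoder score the combined sequence. The snippet below sketches only that input construction; the [SEP]-joined format, the example triple, and the gloss are hypothetical, and the paper's actual retrieval and encoding pipeline may differ.

```python
# Illustrative input construction: question + answer choice + KG triple + a
# textual gloss of the concept, joined into one sequence for a text encoder.
def build_input(question, choice, triple, concept_gloss):
    subj, rel, obj = triple
    knowledge = f"{subj} {rel} {obj}. {concept_gloss}"
    # A cross-encoder (e.g. a BERT-style model) would score this sequence.
    return f"{question} [SEP] {choice} [SEP] {knowledge}"

print(build_input(
    question="Where would you keep a bottle of water cold in a house?",
    choice="refrigerator",
    triple=("refrigerator", "used for", "keeping food cold"),
    concept_gloss="A refrigerator is an appliance for keeping food and drink cool.",
))
```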

Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization

no code implementations • 27 Jun 2020 • Beliz Gunel, Chenguang Zhu, Michael Zeng, Xuedong Huang

In this work, we propose a novel architecture that extends the Transformer encoder-decoder architecture to address these shortcomings.

Abstractive Text Summarization Language Modelling +1

Leveraging Lead Bias for Zero-shot Abstractive News Summarization

no code implementations • 25 Dec 2019 • Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang

A typical journalistic convention in news articles is to deliver the most salient information in the beginning, also known as the lead bias.

Domain Adaptation News Summarization
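Because the lead sentences of a news article already behave like a rough summary, they can provide free supervision: take the first few sentences as a pseudo-summary target and the remainder as the source, and pretrain a summarizer on such pairs without human-written summaries. The sketch below illustrates that data construction under a "lead-3" assumption; the example sentences and the exact split are illustrative, not the paper's preprocessing.

```python
# Minimal sketch of lead-bias self-supervision: lead sentences become the
# pretraining target, the rest of the article becomes the source.
def make_lead_bias_pair(article_sentences, num_lead=3):
    """article_sentences: list of sentences in document order."""
    pseudo_summary = " ".join(article_sentences[:num_lead])
    source = " ".join(article_sentences[num_lead:])
    return source, pseudo_summary

sentences = [
    "The city council approved the new transit plan on Monday.",
    "The plan adds two light-rail lines by 2030.",
    "Funding comes from a voter-approved sales tax.",
    "Construction is expected to begin next spring.",
    "Officials say ridership could double within a decade.",
]
src, tgt = make_lead_bias_pair(sentences)
print(tgt)  # the lead sentences used as the pseudo-summary target
```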

SIM: A Slot-Independent Neural Model for Dialogue State Tracking

no code implementations • WS 2019 • Chenguang Zhu, Michael Zeng, Xuedong Huang

In this paper, we put forward a slot-independent neural model (SIM) to track dialogue states while keeping the model complexity invariant to the number of dialogue slots.

Dialogue State Tracking Task-Oriented Dialogue Systems
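Keeping model complexity invariant to the number of slots amounts to sharing one scoring module across all slots, with the slot and candidate value supplied as inputs rather than baked into per-slot parameters. Below is a minimal sketch of that idea; the bilinear scorer, the 64-dimensional vectors, and the placeholder encodings are assumptions for illustration, not the SIM architecture itself.

```python
# Hedged sketch of slot-independent scoring: one shared module evaluates every
# (utterance, slot-value) pair, so adding slots adds no parameters.
import torch
import torch.nn as nn

class SharedSlotScorer(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # A single bilinear scorer reused for every slot and value.
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, utterance_vec, slot_value_vec):
        return self.score(utterance_vec, slot_value_vec)

scorer = SharedSlotScorer()
utt = torch.randn(1, 64)                  # encoded user utterance (placeholder)
for slot_value in ["food=italian", "area=north", "price=cheap"]:
    sv = torch.randn(1, 64)               # encoding of the slot-value pair (placeholder)
    print(slot_value, scorer(utt, sv).item())
```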

Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization

no code implementations • 25 Sep 2019 • Chenguang Zhu, ZiYi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang

For example, the pretrained model without finetuning outperforms a pointer-generator network on the CNN/DailyMail dataset.

News Summarization
