Search Results for author: MingBin Xu

Found 14 papers, 2 papers with code

Conformer-Based Speech Recognition On Extreme Edge-Computing Devices

no code implementations · 16 Dec 2023 · MingBin Xu, Alex Jin, Sicheng Wang, Mu Su, Tim Ng, Henry Mason, Shiyi Han, Yaqiao Deng, Zhen Huang, Mahesh Krishnamoorthy

With increasingly more powerful compute capabilities and resources in today's devices, traditionally compute-intensive automatic speech recognition (ASR) has been moving from the cloud to devices to better protect user privacy.

Automatic Speech Recognition (ASR) +2

Personalization of CTC-based End-to-End Speech Recognition Using Pronunciation-Driven Subword Tokenization

no code implementations · 16 Oct 2023 · Zhihong Lei, Ernest Pusateri, Shiyi Han, Leo Liu, MingBin Xu, Tim Ng, Ruchir Travadi, Youyuan Zhang, Mirko Hannemann, Man-Hung Siu, Zhen Huang

Recent advances in deep learning and automatic speech recognition have improved the accuracy of end-to-end speech recognition systems, but recognition of personal content such as contact names remains a challenge.

Automatic Speech Recognition, Speech Recognition +1

Acoustic Model Fusion for End-to-end Speech Recognition

no code implementations · 10 Oct 2023 · Zhihong Lei, MingBin Xu, Shiyi Han, Leo Liu, Zhen Huang, Tim Ng, Yuanyuan Zhang, Ernest Pusateri, Mirko Hannemann, Yaqiao Deng, Man-Hung Siu

Recent advances in deep learning and automatic speech recognition (ASR) have enabled end-to-end (E2E) ASR systems and boosted their accuracy to a new level.

Automatic Speech Recognition (ASR) +4

Effective Context and Fragment Feature Usage for Named Entity Recognition

no code implementations · 5 Apr 2019 · Nargiza Nosirova, MingBin Xu, Hui Jiang

In this paper, we explore a new approach to named entity recognition (NER) with the goal of learning from context and fragment features more effectively, contributing to the improvement of overall recognition performance.

Named Entity Recognition +3

Fixed-Size Ordinally Forgetting Encoding Based Word Sense Disambiguation

no code implementations · 23 Feb 2019 · Xi Zhu, MingBin Xu, Hui Jiang

In this paper, we present our method of using fixed-size ordinally forgetting encoding (FOFE) to solve the word sense disambiguation (WSD) problem.

Language Modelling, Word Sense Disambiguation

Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) for Competitive Neural Language Models

no code implementations · EMNLP 2018 · Sedtawut Watcharawittayakul, MingBin Xu, Hui Jiang

In this paper, we propose a new approach to employing the fixed-size ordinally-forgetting encoding (FOFE) (Zhang et al., 2015b) in neural language modelling, called dual-FOFE.

Language Modelling, Machine Translation +2

Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding

no code implementations · EMNLP 2017 · Joseph Sanu, MingBin Xu, Hui Jiang, Quan Liu

In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation.

Language Modelling, Semantic Textual Similarity +2

A Local Detection Approach for Named Entity Recognition and Mention Detection

no code implementations · ACL 2017 · Mingbin Xu, Hui Jiang, Sedtawut Watcharawittayakul

In this paper, we study a novel approach for named entity recognition (NER) and mention detection (MD) in natural language processing.

Feature Engineering, Image Classification +5

A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection

1 code implementation · 2 Nov 2016 · Mingbin Xu, Hui Jiang

In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing.

Named Entity Recognition +2

A Fixed-Size Encoding Method for Variable-Length Sequences with its Application to Neural Network Language Models

1 code implementation · 6 May 2015 · Shiliang Zhang, Hui Jiang, MingBin Xu, JunFeng Hou, Li-Rong Dai

In this paper, we propose the new fixed-size ordinally-forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation.
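The encoding described in this abstract is simple enough to sketch. As a minimal illustration (not the paper's full implementation), FOFE accumulates one-hot token vectors under the recurrence z_t = α·z_{t−1} + e_t, where 0 < α < 1 is a forgetting factor; the value α = 0.7 and the helper name below are assumptions for demonstration only.

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.7):
    """Sketch of fixed-size ordinally-forgetting encoding (FOFE).

    Encodes a variable-length token sequence into one fixed-size
    vector via z_t = alpha * z_{t-1} + e_t, where e_t is the
    one-hot vector of the t-th token and 0 < alpha < 1 is the
    forgetting factor (alpha=0.7 is an illustrative choice).
    """
    z = np.zeros(vocab_size)
    for tok in token_ids:
        z = alpha * z       # decay older context
        z[tok] += 1.0       # add current one-hot token
    return z

# Different word orders yield distinct fixed-size codes:
a = fofe_encode([0, 1, 2], vocab_size=4)  # [0.49, 0.7, 1.0, 0.0]
b = fofe_encode([2, 1, 0], vocab_size=4)  # [1.0, 0.7, 0.49, 0.0]
```

The order sensitivity shown above is what lets FOFE encode a sequence "almost uniquely" while keeping the representation's dimensionality fixed at the vocabulary size.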
