Search Results for author: Ming Gong

Found 57 papers, 15 papers with code

RUEL: Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation

no code implementations19 Sep 2023 Ning Wu, Ming Gong, Linjun Shou, Jian Pei, Daxin Jiang

RUEL is the first method that connects user browsing data with typical recommendation datasets and can be generalized to various recommendation scenarios and datasets.

Contrastive Learning Retrieval +3

Alleviating Over-smoothing for Unsupervised Sentence Representation

1 code implementation9 May 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li

Learning better unsupervised sentence representations is currently a major pursuit of the natural language processing community.

Contrastive Learning Semantic Textual Similarity

Typos-aware Bottlenecked Pre-Training for Robust Dense Retrieval

no code implementations17 Apr 2023 Shengyao Zhuang, Linjun Shou, Jian Pei, Ming Gong, Houxing Ren, Guido Zuccon, Daxin Jiang

To address this challenge, we propose ToRoDer (TypOs-aware bottlenecked pre-training for RObust DEnse Retrieval), a novel \textit{pre-training} strategy for DRs that increases their robustness to misspelled queries while preserving their effectiveness in downstream retrieval tasks.

Language Modelling Retrieval
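The pre-training recipe hinges on exposing the retriever to misspelled queries during training. A minimal sketch of such typo augmentation (the specific character operations below are our illustrative assumption, not necessarily the paper's):

```python
import random

def typo_variants(query, n=3, seed=0):
    """Generate simple misspelled versions of a query (adjacent swap,
    char drop, char duplication) -- the kind of noise a typo-robust
    retriever is pre-trained to map back to the clean query."""
    rng = random.Random(seed)
    ops = [
        lambda w, i: w[:i] + w[i + 1] + w[i] + w[i + 2:],  # swap adjacent chars
        lambda w, i: w[:i] + w[i + 1:],                    # drop a char
        lambda w, i: w[:i] + w[i] + w[i:],                 # duplicate a char
    ]
    out = []
    for _ in range(n):
        i = rng.randrange(len(query) - 1)
        out.append(rng.choice(ops)(query, i))
    return out
```

During pre-training, each noisy variant would share the clean query's retrieval target, so the encoder learns to be invariant to spelling noise.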

TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs

no code implementations29 Mar 2023 Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, Nan Duan

On the other hand, there are also many existing models and systems (symbolic-based or neural-based) that can do some domain-specific tasks very well.

Code Generation Common Sense Reasoning

Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval

no code implementations27 Mar 2023 Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, Daxin Jiang

In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus.


Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval

no code implementations27 Mar 2023 Houxing Ren, Linjun Shou, Ning Wu, Ming Gong, Daxin Jiang

However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, which is hard to obtain in the cross-lingual setting.

Knowledge Distillation Retrieval

Large Language Models are Diverse Role-Players for Summarization Evaluation

no code implementations27 Mar 2023 Ning Wu, Ming Gong, Linjun Shou, Shining Liang, Daxin Jiang

First, we propose to model objective and subjective dimensions of generated text based on a roleplayer prompting mechanism.

Informativeness Text Summarization

Bridge the Gap between Language models and Tabular Understanding

no code implementations16 Feb 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li

For instance, TPLMs jointly pre-trained with table and text input could be effective for tasks also with table-text joint input like table question answering, but it may fail for tasks with only tables or text as input such as table retrieval.

Contrastive Learning Language Modelling +2

Modeling Sequential Sentence Relation to Improve Cross-lingual Dense Retrieval

no code implementations3 Feb 2023 Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

Specifically, we propose a multilingual PLM called masked sentence model (MSM), which consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.

Representation Learning Retrieval +1
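The two-level encoding described above can be sketched end to end. This toy replaces both transformer encoders with simple pooling, purely to show the data flow of encoding sentences, masking one sentence vector, and reconstructing it from its neighbours (a hypothetical simplification of MSM, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sentence_encoder(token_ids, emb):
    """Stand-in sentence encoder: mean-pool token embeddings.
    (MSM uses a transformer; this only shows the data flow.)"""
    return emb[token_ids].mean(axis=0)

def document_encoder(sent_vecs, mask_idx):
    """Stand-in document encoder: predict the masked sentence vector
    from the mean of the remaining sentence vectors."""
    ctx = [v for i, v in enumerate(sent_vecs) if i != mask_idx]
    return np.mean(ctx, axis=0)

emb = rng.normal(size=(100, 8))          # toy token embedding table
doc = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]  # a document of 3 sentences
sent_vecs = [sentence_encoder(s, emb) for s in doc]
pred = document_encoder(sent_vecs, mask_idx=1)  # reconstruct sentence 1
```

In MSM the reconstruction is scored contrastively against other sentence vectors rather than regressed directly.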

Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA

1 code implementation11 Oct 2022 JunJie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan

However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and data sparsity.

Open-Domain Question Answering Representation Learning +2

Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation

1 code implementation21 Jun 2022 Shengyao Zhuang, Houxing Ren, Linjun Shou, Jian Pei, Ming Gong, Guido Zuccon, Daxin Jiang

This problem is further exacerbated when using DSI for cross-lingual retrieval, where document text and query text are in different languages.

Passage Retrieval Retrieval

Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval

1 code implementation7 Jun 2022 Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, Daxin Jiang

On the multilingual sentence retrieval task Tatoeba, our model achieves new SOTA results among methods without using bilingual data.

Language Modelling Passage Retrieval +3

Negative Sampling for Contrastive Representation Learning: A Review

no code implementations1 Jun 2022 Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen

The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.

Graph Learning Information Retrieval +2
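The learn-to-compare objective at the heart of CRL is typically an InfoNCE-style loss over one positive and several sampled negatives; a minimal NumPy sketch (our illustration, not taken from the review):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: pull the anchor toward its positive and push it away
    from sampled negatives, via cross-entropy over cosine similarities."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0
```

The quality of the sampled `negatives` (how hard, how many, how free of false negatives) is exactly what the review surveys.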

Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding

no code implementations7 May 2022 Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, Daxin Jiang

Despite the great success of spoken language understanding (SLU) in high-resource languages, it remains challenging in low-resource languages mainly due to the lack of labeled training data.

Contrastive Learning Spoken Language Understanding +1

Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling

no code implementations NAACL 2022 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang

Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.

Contrastive Learning Language Modelling +1

Transformer-Empowered Content-Aware Collaborative Filtering

no code implementations2 Apr 2022 Weizhe Lin, Linjun Shou, Ming Gong, Jian Pei, Zhilin Wang, Bill Byrne, Daxin Jiang

Knowledge graph (KG) based Collaborative Filtering is an effective approach to personalizing recommendation systems for relatively static domains such as movies and books, by leveraging structured information from KG to enrich both item and user representations.

Collaborative Filtering Contrastive Learning +1

Multi-View Document Representation Learning for Open-Domain Dense Retrieval

no code implementations ACL 2022 Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

Second, to prevent multi-view embeddings from collapsing to the same one, we further propose a global-local loss with annealed temperature to encourage the multiple viewers to better align with different potential queries.

Representation Learning Retrieval
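The annealed-temperature idea can be made concrete: lowering the softmax temperature over training sharpens the weighting over the K view embeddings toward the best-matching one, so different views specialize on different queries instead of collapsing together. A hypothetical NumPy illustration (not the paper's exact loss):

```python
import numpy as np

def view_weights(query, views, temperature):
    """Softmax weights over K view embeddings of one document.
    High temperature -> near-uniform weights (all views updated);
    low (annealed) temperature -> weight concentrates on the best view."""
    sims = views @ query                      # similarity to each view
    z = (sims - sims.max()) / temperature     # shift for numerical stability
    w = np.exp(z)
    w /= w.sum()
    return sims, w
```

The document's score for a query can then be the `w`-weighted aggregate of `sims`, with the annealing schedule moving it from a soft average toward a max over views.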

Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

no code implementations EMNLP 2021 YingMei Guo, Linjun Shou, Jian Pei, Ming Gong, Mingxing Xu, Zhiyong Wu, Daxin Jiang

Although various data augmentation approaches have been proposed to synthesize training data in low-resource target languages, the augmented data sets are often noisy, and thus impede the performance of SLU models.

Data Augmentation Denoising +1

A Joint and Domain-Adaptive Approach to Spoken Language Understanding

no code implementations25 Jul 2021 Linhao Zhang, Yu Shi, Linjun Shou, Ming Gong, Houfeng Wang, Michael Zeng

In this paper, we attempt to bridge these two lines of research and propose a joint and domain adaptive approach to SLU.

Domain Adaptation Intent Detection +3

Retrieval Enhanced Model for Commonsense Generation

1 code implementation Findings (ACL) 2021 Han Wang, Yang Liu, Chenguang Zhu, Linjun Shou, Ming Gong, Yichong Xu, Michael Zeng

Commonsense generation is a challenging task of generating a plausible sentence describing an everyday scenario using provided concepts.

Retrieval Text Generation

Generating Human Readable Transcript for Automatic Speech Recognition with Pre-trained Language Model

no code implementations22 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Sefik Eskimez, Liyang Lu, Hong Qu, Michael Zeng

Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Improving Zero-shot Neural Machine Translation on Language-specific Encoders-Decoders

no code implementations12 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

However, zero-shot translation with multiple encoders and decoders still lags behind universal NMT.

Denoising Machine Translation +2

Quantum walks on a programmable two-dimensional 62-qubit superconducting processor

no code implementations4 Feb 2021 Ming Gong, Shiyu Wang, Chen Zha, Ming-Cheng Chen, He-Liang Huang, Yulin Wu, Qingling Zhu, YouWei Zhao, Shaowei Li, Shaojun Guo, Haoran Qian, Yangsen Ye, Fusheng Chen, Jiale Yu, Daojing Fan, Dachao Wu, Hong Su, Hui Deng, Hao Rong, Jin Lin, Yu Xu, Lihua Sun, Cheng Guo, Futian Liang, Kae Nemoto, W. J. Munro, Chao-Yang Lu, Cheng-Zhi Peng, Xiaobo Zhu, Jian-Wei Pan

Quantum walks are the quantum mechanical analogue of classical random walks and an extremely powerful tool in quantum simulations, quantum search algorithms, and even for universal quantum computing.

Quantum Physics
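The contrast with classical random walks is easy to see in simulation: a discrete-time quantum walk spreads ballistically (std ∝ t) rather than diffusively (std ∝ √t). A toy 1-D Hadamard-coin walk for illustration (the paper's hardware experiment implements 2-D walks on 62 qubits):

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin.
    Returns the position probability distribution after `steps` steps."""
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)   # amplitudes over (position, coin)
    amp[steps, 0] = 1.0                     # start at the origin, coin |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        amp = amp @ H                       # coin flip (H is symmetric)
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]        # coin |0> steps right
        shifted[:-1, 1] = amp[1:, 1]        # coin |1> steps left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)

steps = 50
prob = hadamard_walk(steps)
x = np.arange(-steps, steps + 1)
std = np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)
# ballistic spread: std grows ~linearly in steps, vs sqrt(steps) classically
```

A classical random walk after 50 steps has std ≈ √50 ≈ 7.1; the quantum walk's spread is several times larger and grows linearly with the number of steps.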

Syntax-Enhanced Pre-trained Model

1 code implementation ACL 2021 Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.

Entity Typing Question Answering +1

Experimental characterization of quantum many-body localization transition

no code implementations21 Dec 2020 Ming Gong, Gentil D. de Moraes Neto, Chen Zha, Yulin Wu, Hao Rong, Yangsen Ye, Shaowei Li, Qingling Zhu, Shiyu Wang, YouWei Zhao, Futian Liang, Jin Lin, Yu Xu, Cheng-Zhi Peng, Hui Deng, Abolfazl Bayat, Xiaobo Zhu, Jian-Wei Pan

Here, we experimentally implement a scalable protocol for detecting the many-body localization transition point, using the dynamics of an $N=12$ superconducting qubit array.

Quantum Physics Mesoscale and Nanoscale Physics Strongly Correlated Electrons

Reinforced Multi-Teacher Selection for Knowledge Distillation

no code implementations11 Dec 2020 Fei Yuan, Linjun Shou, Jian Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang

When multiple teacher models are available in distillation, the state-of-the-art methods assign a fixed weight to a teacher model in the whole distillation.

Knowledge Distillation Model Compression
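The fixed-weight baseline criticized here is simple to write down: the student is distilled toward a single weighted mixture of softened teacher distributions, with the same weights for every training example. A minimal sketch (illustrative; the paper instead learns teacher selection per instance via reinforcement learning):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, T=2.0):
    """Distillation target = fixed weighted mix of softened teacher
    distributions; every example uses the same teacher weights."""
    student = softmax(student_logits, T)
    target = sum(w * softmax(t, T)
                 for w, t in zip(weights, teacher_logits_list))
    return kl(target, student)
```

Because the weights never adapt to the example, a teacher that is wrong on a given instance still contributes at full weight, which is the gap per-example teacher selection aims to close.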

CalibreNet: Calibration Networks for Multilingual Sequence Labeling

no code implementations11 Nov 2020 Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Daxin Jiang

To tackle the challenge of lack of training data in low-resource languages, we develop a novel unsupervised phrase boundary recovery pre-training task to enhance the multilingual boundary detection capability of CalibreNet.

Boundary Detection Cross-Lingual NER +4

Nonvolatile electric control of exciton complex in monolayer MoSe$_2$ with two dimensional ferroelectric CuInP$_2$S$_6$

no code implementations10 Nov 2020 Xiaoyu Mao, Jun Fu, Chen Chen, Yue Li, Heng Liu, Ming Gong, Hualing Zeng

With the saturated ferroelectric polarization of CIPS, electron-doped or hole-doped MoSe$_2$ is realized in a single device with a large carrier density tunability up to $5\times 10^{12}$cm$^{-2}$.

Materials Science

Cross-lingual Machine Reading Comprehension with Language Branch Knowledge Distillation

no code implementations COLING 2020 Junhao Liu, Linjun Shou, Jian Pei, Ming Gong, Min Yang, Daxin Jiang

Then, we devise a multilingual distillation approach to amalgamate knowledge from multiple language branch models to a single model for all target languages.

Knowledge Distillation Machine Reading Comprehension +1

A Graph Representation of Semi-structured Data for Web Question Answering

no code implementations COLING 2020 Xingyao Zhang, Linjun Shou, Jian Pei, Ming Gong, Lijie Wen, Daxin Jiang

The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA).

Question Answering

No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension

no code implementations Findings of the Association for Computational Linguistics 2020 Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan, Daxin Jiang

The Natural Questions (NQ) benchmark set brings new challenges to Machine Reading Comprehension: the answers are not only at different levels of granularity (long and short), but also of richer types (including no-answer, yes/no, single-span and multi-span).

Machine Reading Comprehension Natural Questions

Mining Implicit Relevance Feedback from User Behavior for Web Question Answering

no code implementations13 Jun 2020 Linjun Shou, Shining Bo, Feixiang Cheng, Ming Gong, Jian Pei, Daxin Jiang

In this paper, we make the first study to explore the correlation between user behavior and passage relevance, and propose a novel approach for mining training data for Web QA.

Passage Ranking Question Answering

Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension

no code implementations ACL 2020 Fei Yuan, Linjun Shou, Xuanyu Bai, Ming Gong, Yaobo Liang, Nan Duan, Yan Fu, Daxin Jiang

Multilingual pre-trained models could leverage the training data from a rich source language (such as English) to improve performance on low-resource languages.

Boundary Detection Machine Reading Comprehension +1

Pre-training Text Representations as Meta Learning

no code implementations12 Apr 2020 Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu

In this work, we introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.

Language Modelling Meta-Learning +2

Improving Readability for Automatic Speech Recognition Transcription

no code implementations9 Apr 2020 Junwei Liao, Sefik Emre Eskimez, Liyang Lu, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

In this work, we propose a novel NLP task called ASR post-processing for readability (APR) that aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

2 code implementations3 Apr 2020 Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou

In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.

Natural Language Understanding XLM-R

TGGLines: A Robust Topological Graph Guided Line Segment Detector for Low Quality Binary Images

no code implementations27 Feb 2020 Ming Gong, Liping Yang, Catherine Potts, Vijayan K. Asari, Diane Oyen, Brendt Wohlberg

Line segment detection is an essential task in computer vision and image analysis, as it is the critical foundation for advanced tasks such as shape modeling and road lane line detection for autonomous driving.

Autonomous Driving Line Detection +1

Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System

no code implementations18 Oct 2019 Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

The experiment results show that our method can significantly outperform the baseline methods and even achieve comparable results with the original teacher models, along with substantial speedup of model inference.

General Knowledge Knowledge Distillation +3

Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training

no code implementations16 Aug 2019 Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou

We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner.

Ranked #5 on Image-to-Text Retrieval on COCO (Recall@10 metric)

Image-to-Text Retrieval Language Modelling +4

Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System

no code implementations21 Apr 2019 Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

Deep pre-training and fine-tuning models (like BERT, OpenAI GPT) have demonstrated excellent results in question answering areas.

Knowledge Distillation Model Compression +1

NeuronBlocks: Building Your NLP DNN Models Like Playing Lego

2 code implementations IJCNLP 2019 Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang

Deep Neural Networks (DNN) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.

Naturally Combined Shape-Color Moment Invariants under Affine Transformations

no code implementations31 May 2017 Ming Gong, You Hao, Hanlin Mo, Hua Li

We proposed a kind of naturally combined shape-color affine moment invariants (SCAMI), which consider both shape and color affine transformations simultaneously in one single system.
