no code implementations • CONLL 2016 • Zhiguo Wang, Haitao Mi, Abraham Ittycheriah
In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering.
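Where a few labeled examples are available, one simple way to inject the intended clustering criterion is to seed the cluster centroids from them. The sketch below illustrates only this idea; `seeded_centroids` and the precomputed vectors are hypothetical stand-ins, not the paper's neural representation or training objective.

    # Sketch (assumed approach, hypothetical names): seed one centroid per
    # labeled class, then run ordinary k-means from these seeds.
    import numpy as np

    def seeded_centroids(vectors, labels, n_clusters):
        """Initialize one centroid per class from the labeled subset."""
        cents = np.zeros((n_clusters, vectors.shape[1]))
        for c in range(n_clusters):
            cents[c] = vectors[np.asarray(labels) == c].mean(axis=0)
        return cents

    rng = np.random.default_rng(0)
    labeled_vecs = rng.normal(size=(6, 32))   # 6 labeled short-text vectors
    labels = [0, 0, 1, 1, 2, 2]               # 3 intended clusters
    centroids = seeded_centroids(labeled_vecs, labels, n_clusters=3)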
no code implementations • EMNLP 2016 • Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah
In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT.
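For intuition, a heavily simplified additive form of coverage is sketched below: each source position accumulates the attention mass it has received, so over-attended positions signal repetition and under-attended positions signal dropped words. This is an assumed simplification, not the paper's learned coverage embeddings.

    # Sketch (assumed simplification): accumulate attention per source word.
    import numpy as np

    coverage = np.zeros(4)                    # one cell per source word
    attention_steps = [np.array([0.7, 0.2, 0.1, 0.0]),
                       np.array([0.1, 0.6, 0.2, 0.1])]
    for attn in attention_steps:              # one decoding step at a time
        coverage += attn                      # high = risk of repetition
    print(coverage)                           # low = word likely dropped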
no code implementations • 9 Aug 2016 • Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, Abe Ittycheriah
Attention-based Neural Machine Translation (NMT) models suffer from attention deficiency issues, as has been observed in recent research.
no code implementations • EMNLP 2016 • Haitao Mi, Zhiguo Wang, Abe Ittycheriah
We simply compute the distance between the machine attentions and the "true" alignments, and minimize this cost in the training procedure.
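A minimal sketch of such a supervised attention penalty follows; the cross-entropy distance and the 0.5 weight are assumptions for illustration, not necessarily the paper's exact distance or weighting.

    # Sketch (assumed distance and weight): cross-entropy between gold
    # alignments and model attention, added to the translation loss.
    import numpy as np

    def attention_loss(attn, align, eps=1e-12):
        """Cross-entropy from the gold alignment row to the attention row."""
        align = align / (align.sum() + eps)   # normalize the gold row
        return -np.sum(align * np.log(attn + eps))

    attn = np.array([0.7, 0.2, 0.1])          # attention over 3 source words
    align = np.array([1.0, 0.0, 0.0])         # gold alignment: word 0
    nmt_loss = 1.5                            # hypothetical translation loss
    total_loss = nmt_loss + 0.5 * attention_loss(attn, align)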
no code implementations • SEMEVAL 2016 • Linfeng Song, Zhiguo Wang, Haitao Mi, Daniel Gildea
In the training stage, our method induces several sense centroids (embedding) for each polysemous word.
Ranked #4 on Word Sense Induction on SemEval 2010 WSI
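A minimal sketch of sense-centroid induction in this spirit: each occurrence's context vector is assigned to the nearest sense centroid, which is then nudged toward it, an online k-means-style step. The update rule and dimensions are assumptions for illustration.

    # Sketch (assumed update rule): assign a context vector to its nearest
    # sense centroid and nudge that centroid toward it (online k-means).
    import numpy as np

    def induce_sense(centroids, context_vec, lr=0.1):
        sims = centroids @ context_vec / (
            np.linalg.norm(centroids, axis=1) * np.linalg.norm(context_vec))
        k = int(np.argmax(sims))              # winning sense by cosine sim
        centroids[k] += lr * (context_vec - centroids[k])
        return k

    rng = np.random.default_rng(0)
    centroids = rng.normal(size=(3, 50))      # 3 senses, 50-dim embeddings
    context = rng.normal(size=50)             # averaged context vectors
    sense_id = induce_sense(centroids, context)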
no code implementations • ACL 2016 • Haitao Mi, Zhiguo Wang, Abe Ittycheriah
Our method simply considers the translation options of each word or phrase in the source sentence and picks a very small target vocabulary for each sentence, based on a word-to-word translation model or a bilingual phrase library learned from a traditional machine translation model.
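A minimal sketch of this per-sentence vocabulary selection; `sentence_vocab`, `lex_table`, and the cutoff `k` are hypothetical names and values, assuming a lexical table with candidates sorted by translation probability.

    # Sketch (hypothetical names): union the top-k translation options per
    # source word with a short frequent-word list.
    def sentence_vocab(source_words, lex_table, common_words, k=10):
        vocab = set(common_words)             # always keep frequent words
        for w in source_words:
            options = lex_table.get(w, [])    # candidates sorted by p(t|s)
            vocab.update(options[:k])         # top-k translation options
        return vocab

    lex_table = {"Haus": ["house", "home"], "gross": ["big", "large"]}
    vocab = sentence_vocab(["das", "Haus", "ist", "gross"],
                           lex_table, common_words=["the", "is", "a"])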
no code implementations • Findings (ACL) 2021 • Shuang Peng, Mengdi Zhou, Minghui Yang, Haitao Mi, Shaosheng Cao, Zujie Wen, Teng Xu, Hongbin Wang, Lei Liu
In the Chinese medical insurance industry, the assessor's role is essential and requires significant effort to converse with the claimant.
no code implementations • 29 Dec 2021 • Jian Du, Haitao Mi
Our DP-FP employs novel (1) representation clipping followed by noise addition in the forward propagation stage, as well as (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step.
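A minimal sketch of this forward-pass step, with assumed shapes and noise calibration: clip each representation to an L2 ball of radius $C$, add Gaussian noise, and average over $M$ micro-batches.

    # Sketch (assumed shapes and calibration): clip, add noise, average.
    import numpy as np

    def dp_forward(h, clip_norm=1.0, sigma=0.5, rng=np.random.default_rng(0)):
        """Representation clipping followed by Gaussian noise addition."""
        h = h * min(1.0, clip_norm / (np.linalg.norm(h) + 1e-12))
        return h + rng.normal(scale=sigma * clip_norm, size=h.shape)

    M = 4                                     # micro-batches per step
    reps = [dp_forward(np.ones(8)) for _ in range(M)]
    step_rep = np.mean(reps, axis=0)          # noise power scales as 1/M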
no code implementations • 8 Mar 2022 • Ruijie Yan, Shuang Peng, Haitao Mi, Liang Jiang, Shihui Yang, Yuchi Zhang, Jiajun Li, Liangrui Peng, Yongliang Wang, Zujie Wen
Building robust and general dialogue models for spoken conversations is challenging due to the gap in distributions of spoken and written data.
no code implementations • 8 Nov 2022 • Wenyue Hua, Lifeng Jin, Linfeng Song, Haitao Mi, Yongfeng Zhang, Dong Yu
Pretrained natural language processing (NLP) models have achieved high overall performance, but they still make systematic errors.
no code implementations • 31 Jan 2023 • Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Xiabing Zhou, Dong Yu
Current self-training methods, such as standard self-training, co-training, and tri-training, often focus on improving model performance on a single task, utilizing differences in input features, model architectures, and training processes.
1 code implementation • 16 Feb 2023 • Ante Wang, Linfeng Song, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, Dong Yu
We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.
no code implementations • 18 Sep 2023 • Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences using reinforcement learning from human feedback (RLHF) remains a significant challenge.
1 code implementation • 18 Jan 2024 • Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Dong Yu
One critical issue for chat systems is staying consistent about their own preferences, opinions, beliefs, and facts, which has been shown to be a difficult problem.
no code implementations • 14 Feb 2024 • Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng
Despite showing increasingly human-like abilities, large language models (LLMs) often struggle with factual inaccuracies, i.e., "hallucinations", even when they hold relevant knowledge.
no code implementations • 23 Feb 2024 • Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
Experiments on Biographies show that our method can effectively improve the factuality of generations with simple and intuitive prompts across different scales of LLMs.
no code implementations • 28 Feb 2024 • Lifeng Jin, Baolin Peng, Linfeng Song, Haitao Mi, Ye Tian, Dong Yu
The most common training pipeline for large language models includes pretraining, finetuning and aligning phases, with their respective resulting models, such as the pretrained model and the finetuned model.
1 code implementation • 6 Mar 2024 • Xiangci Li, Linfeng Song, Lifeng Jin, Haitao Mi, Jessica Ouyang, Dong Yu
In this paper, we present a high-quality benchmark named multi-source Wizard of Wikipedia (Ms. WoW) for evaluating multi-source dialogue knowledge selection and response generation.
no code implementations • 14 Mar 2024 • Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
Calibration, which establishes the correlation between accuracy and model confidence, is important for LLM development.
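Expected calibration error (ECE) is one standard way to quantify this accuracy-confidence correlation; the sketch below computes it over equal-width confidence bins. This is a generic metric, not necessarily the paper's exact formulation.

    # Sketch: expected calibration error over equal-width confidence bins.
    import numpy as np

    def ece(confidences, correct, n_bins=10):
        conf = np.asarray(confidences)
        acc = np.asarray(correct, dtype=float)
        bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
        total = 0.0
        for b in range(n_bins):
            mask = bins == b                  # samples in this bin
            if mask.any():
                total += mask.mean() * abs(acc[mask].mean() - conf[mask].mean())
        return total

    print(ece([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))  # lower is better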
no code implementations • 14 Apr 2024 • Souvik Das, Lifeng Jin, Linfeng Song, Haitao Mi, Baolin Peng, Dong Yu
Current state-of-the-art approaches refine decoding by contrasting early-exit distributions from a lower layer with the final layer to exploit information related to factuality within the model forward procedure.
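A minimal sketch of this contrastive scoring rule, assuming each candidate token is scored by the difference between final-layer and early-exit log-probabilities:

    # Sketch (assumed scoring rule): contrast final-layer log-probs with
    # early-exit log-probs; prefer tokens promoted by the later layers.
    import numpy as np

    def contrast_scores(final_logits, early_logits):
        log_p_final = final_logits - np.logaddexp.reduce(final_logits)
        log_p_early = early_logits - np.logaddexp.reduce(early_logits)
        return log_p_final - log_p_early      # contrastive token scores

    final = np.array([2.0, 1.0, 0.1])         # logits from the last layer
    early = np.array([1.9, 1.2, 0.3])         # logits from an early exit
    next_token = int(np.argmax(contrast_scores(final, early)))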
no code implementations • 18 Apr 2024 • Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu
Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involve complex reasoning and planning.
1 code implementation • 28 Sep 2023 • Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
We propose Contrast Instructions -- a benchmarking strategy for the consistency of reward models (RMs).
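A minimal consistency check in this spirit: a consistent RM should keep the same preference between two responses when the instruction is swapped for a paraphrase. The reward model below is a toy stand-in, and the check is an assumed protocol for illustration only.

    # Sketch (hypothetical protocol and reward model): the preference
    # between two responses should survive paraphrasing the instruction.
    def is_consistent(rm, instr, paraphrase, resp_a, resp_b):
        prefers_a = rm(instr, resp_a) > rm(instr, resp_b)
        prefers_a_para = rm(paraphrase, resp_a) > rm(paraphrase, resp_b)
        return prefers_a == prefers_a_para

    toy_rm = lambda instr, resp: len(resp)    # toy stand-in scorer
    print(is_consistent(toy_rm, "Summarize this.", "Give a summary.",
                        "Short answer.", "A much longer answer."))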
1 code implementation • 22 Oct 2022 • Songyang Zhang, Linfeng Song, Lifeng Jin, Haitao Mi, Kun Xu, Dong Yu, Jiebo Luo
While previous work focuses on building systems for inducing grammars on text that is well-aligned with video content, we investigate the scenario in which text and video are only in loose correspondence.
1 code implementation • COLING 2016 • Zhiguo Wang, Haitao Mi, Abraham Ittycheriah
Most conventional sentence similarity methods focus only on the similar parts of two input sentences and simply ignore the dissimilar parts, which usually provide useful clues about the semantics of the sentences.
Ranked #13 on Question Answering on WikiQA
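A minimal sketch of splitting a word vector into similar and dissimilar parts relative to the other sentence; projecting onto the best-matching word is one plausible decomposition, assumed here for illustration.

    # Sketch (assumed decomposition): project a word vector onto its
    # best-matching word in the other sentence; the residual is the
    # dissimilar part.
    import numpy as np

    def decompose(x, other_words):
        sims = other_words @ x / (
            np.linalg.norm(other_words, axis=1) * np.linalg.norm(x))
        m = other_words[int(np.argmax(sims))] # best-matching word vector
        similar = (x @ m) / (m @ m) * m       # projection onto the match
        return similar, x - similar           # (similar, dissimilar)

    rng = np.random.default_rng(1)
    x = rng.normal(size=16)                   # word vector from sentence 1
    others = rng.normal(size=(5, 16))         # word vectors from sentence 2
    sim_part, dis_part = decompose(x, others)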
1 code implementation • ACL 2021 • Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo
Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.
2 code implementations • 1 Mar 2022 • Xiang Hu, Haitao Mi, Liang Li, Gerard de Melo
We propose to use a top-down parser as a model-based pruning method, which also enables parallel encoding during inference.
1 code implementation • 13 Dec 2016 • Zhiguo Wang, Haitao Mi, Wael Hamza, Radu Florian
Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage.
Ranked #3 on Open-Domain Question Answering on SQuAD1.1
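A minimal sketch of predicting answer beginning and ending points, assuming two independent softmaxes over passage positions and a search for the best valid span (not the full MPCM matching architecture):

    # Sketch (assumed prediction head): softmax over positions for the
    # beginning and ending points, then pick the best valid span.
    import numpy as np

    def best_span(start_logits, end_logits, max_len=15):
        p_s = np.exp(start_logits - start_logits.max())
        p_s /= p_s.sum()
        p_e = np.exp(end_logits - end_logits.max())
        p_e /= p_e.sum()
        best, span = -1.0, (0, 0)
        for i in range(len(p_s)):             # candidate begin point
            for j in range(i, min(i + max_len, len(p_e))):  # end point
                if p_s[i] * p_e[j] > best:
                    best, span = p_s[i] * p_e[j], (i, j)
        return span                           # (begin, end) token indices

    rng = np.random.default_rng(2)
    print(best_span(rng.normal(size=30), rng.normal(size=30)))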