Search Results for author: Haitao Mi

Found 27 papers, 8 papers with code

Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

no code implementations14 Feb 2024 Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng

Despite showing increasingly human-like abilities, large language models (LLMs) often struggle with factual inaccuracies, i.e. "hallucinations", even when they hold relevant knowledge.

Inconsistent dialogue responses and how to recover from them

1 code implementation18 Jan 2024 Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Dong Yu

One critical issue for chat systems is staying consistent about their own preferences, opinions, beliefs, and facts, which has been shown to be a difficult problem.

Stabilizing RLHF through Advantage Model and Selective Rehearsal

no code implementations18 Sep 2023 Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu

Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences using RLHF remains a significant challenge.

Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production

1 code implementation16 Feb 2023 Ante Wang, Linfeng Song, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, Dong Yu

We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.

Chatbot · Response Generation

Friend-training: Learning from Models of Different but Related Tasks

no code implementations31 Jan 2023 Mian Zhang, Lifeng Jin, Linfeng Song, Haitao Mi, Xiabing Zhou, Dong Yu

Current self-training methods, such as standard self-training, co-training, and tri-training, often focus on improving model performance on a single task, utilizing differences in input features, model architectures, and training processes.

Dialogue Rewriting · Dialogue Understanding +1

Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing

no code implementations8 Nov 2022 Wenyue Hua, Lifeng Jin, Linfeng Song, Haitao Mi, Yongfeng Zhang, Dong Yu

Pretrained natural language processing (NLP) models have achieved high overall performance, but they still make systematic errors.

Learning a Grammar Inducer from Massive Uncurated Instructional Videos

1 code implementation22 Oct 2022 Songyang Zhang, Linfeng Song, Lifeng Jin, Haitao Mi, Kun Xu, Dong Yu, Jiebo Luo

While previous work focuses on building systems for inducing grammars on text that is well-aligned with video content, we investigate the scenario in which text and video are only in loose correspondence.

Language Acquisition · Video Alignment

DP-FP: Differentially Private Forward Propagation for Large Models

no code implementations29 Dec 2021 Jian Du, Haitao Mi

Our DP-FP employs novel (1) representation clipping followed by noise addition in the forward-propagation stage, as well as (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step.

Privacy Preserving · Privacy Preserving Deep Learning
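The clip-then-add-noise step described in the abstract can be sketched as below. This is an illustrative simplification of the idea, not the paper's implementation; `clip_norm`, `sigma`, and the helper names are hypothetical.

```python
import numpy as np

def dp_forward(reps, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip each representation's L2 norm to clip_norm, then add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(reps, axis=-1, keepdims=True)
    clipped = reps * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

def micro_batch_noise_power(sigma, m):
    """Averaging over M micro-batches reduces the noise power by 1/M,
    matching the abstract's stated scaling (a simplification of the paper's analysis)."""
    return sigma ** 2 / m
```

With `sigma=0` the function reduces to pure norm clipping, which makes the clipping behavior easy to check in isolation.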

A Dialogue-based Information Extraction System for Medical Insurance Assessment

no code implementations Findings (ACL) 2021 Shuang Peng, Mengdi Zhou, Minghui Yang, Haitao Mi, Shaosheng Cao, Zujie Wen, Teng Xu, Hongbin Wang, Lei Liu

In the Chinese medical insurance industry, the assessor's role is essential and requires significant efforts to converse with the claimant.

R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling

1 code implementation ACL 2021 Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo

Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.

Language Modelling

Multi-Perspective Context Matching for Machine Comprehension

1 code implementation13 Dec 2016 Zhiguo Wang, Haitao Mi, Wael Hamza, Radu Florian

Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage.

Question Answering · Reading Comprehension +1
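Predicting answer beginning and ending points, as MPCM does, typically reduces at decoding time to picking the span (i, j) with i ≤ j that maximizes the product of start and end probabilities. A minimal sketch of that decoding step, with a hypothetical `max_len` constraint:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_span(start_logits, end_logits, max_len=15):
    """Return (i, j), i <= j, maximizing P_start(i) * P_end(j)."""
    p_s, p_e = softmax(start_logits), softmax(end_logits)
    best, best_span = -1.0, (0, 0)
    for i in range(len(p_s)):
        for j in range(i, min(i + max_len, len(p_e))):
            if p_s[i] * p_e[j] > best:
                best, best_span = p_s[i] * p_e[j], (i, j)
    return best_span
```

The quadratic scan is fine for passage-length inputs; longer inputs usually use a banded or vectorized variant.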

Temporal Attention Model for Neural Machine Translation

no code implementations9 Aug 2016 Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, Abe Ittycheriah

Attention-based Neural Machine Translation (NMT) models suffer from attention deficiency issues, as has been observed in recent research.

Machine Translation · NMT +2

Supervised Attentions for Neural Machine Translation

no code implementations EMNLP 2016 Haitao Mi, Zhiguo Wang, Abe Ittycheriah

We simply compute the distance between the machine attentions and the "true" alignments, and minimize this cost in the training procedure.

Machine Translation · Sentence +1
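The supervision idea can be illustrated with a simple squared-distance cost between the model's attention matrix and the gold alignments, added to the usual translation loss. The exact distance and weighting in the paper may differ; `lam` is a hypothetical mixing weight.

```python
import numpy as np

def attention_supervision_loss(attn, gold_align):
    """Mean squared distance between the model's attention matrix
    (target x source) and a gold alignment matrix of the same shape."""
    return float(np.mean((attn - gold_align) ** 2))

def combined_loss(nll, attn, gold_align, lam=1.0):
    """Standard NMT negative log-likelihood plus the alignment supervision term."""
    return nll + lam * attention_supervision_loss(attn, gold_align)
```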

Coverage Embedding Models for Neural Machine Translation

no code implementations EMNLP 2016 Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah

In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT.

Machine Translation · NMT +2
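A simplified scalar-coverage sketch of the idea (the paper uses learned coverage *embeddings* rather than a scalar accumulator): attention scores at each decoding step are penalized for source words that have already received attention mass, discouraging repeated translations, while under-covered words stay attractive, discouraging dropped ones.

```python
import numpy as np

def coverage_attention(scores, coverage, beta=1.0):
    """Softmax over attention scores, penalizing already-covered source words."""
    adj = scores - beta * coverage
    e = np.exp(adj - adj.max())
    return e / e.sum()

def update_coverage(coverage, attn):
    """Accumulate attention mass per source word after each decoding step."""
    return coverage + attn
```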

Vocabulary Manipulation for Neural Machine Translation

no code implementations ACL 2016 Haitao Mi, Zhiguo Wang, Abe Ittycheriah

Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a word-to-word translation model or a bilingual phrase library learned from a traditional machine translation model.

Machine Translation · Sentence +2
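Per-sentence vocabulary selection can be sketched as the union of the top translation options for each source word plus a few special tokens. Here `lexicon` stands in for the word-to-word translation model or bilingual phrase library, and `top_k` is a hypothetical cutoff.

```python
def sentence_vocabulary(source_tokens, lexicon, top_k=2,
                        specials=("<s>", "</s>", "<unk>")):
    """Union of the top-k translation options per source word, plus specials."""
    vocab = set(specials)
    for word in source_tokens:
        # lexicon maps a source word to (target_word, probability) pairs
        ranked = sorted(lexicon.get(word, []), key=lambda pair: -pair[1])
        vocab.update(target for target, _ in ranked[:top_k])
    return vocab
```

The softmax at decoding time then runs over this small per-sentence set instead of the full target vocabulary.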

Sentence Similarity Learning by Lexical Decomposition and Composition

1 code implementation COLING 2016 Zhiguo Wang, Haitao Mi, Abraham Ittycheriah

Most conventional sentence similarity methods focus only on the similar parts of two input sentences and simply ignore the dissimilar parts, which often carry useful clues about the sentences' semantic relationship.

Paraphrase Identification · Question Answering +2
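The decomposition step can be illustrated by splitting each word vector into a component parallel to its matched vector from the other sentence (the similar part) and the orthogonal remainder (the dissimilar part). This is a sketch of the general idea, not necessarily the paper's exact decomposition.

```python
import numpy as np

def decompose(x, match):
    """Split x into a part parallel to its matched vector (similar)
    and the orthogonal remainder (dissimilar)."""
    alpha = np.dot(x, match) / np.dot(match, match)
    similar = alpha * match
    return similar, x - similar
```

Both components are then fed to the composition stage, so the dissimilar part contributes to the final similarity score instead of being discarded.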

Semi-supervised Clustering for Short Text via Deep Representation Learning

no code implementations CoNLL 2016 Zhiguo Wang, Haitao Mi, Abraham Ittycheriah

In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering.

Clustering · Representation Learning +1
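One minimal way a small amount of labeled data can "specify the intention" for clustering is seeded k-means over the learned text vectors, with labeled points pinned to their clusters. The vectors below are toy stand-ins for the neural representations; the paper's actual procedure jointly learns the representations.

```python
import numpy as np

def seeded_kmeans(X, labeled_idx, labels, n_iter=10):
    """K-means initialized from (and constrained by) a few labeled points."""
    k = len(set(labels))
    # Initial centroid for each cluster: mean of its labeled seed points.
    centroids = np.array([
        X[[i for i, l in zip(labeled_idx, labels) if l == c]].mean(axis=0)
        for c in range(k)
    ])
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        for i, l in zip(labeled_idx, labels):
            assign[i] = l  # labeled points never change cluster
        # Each cluster keeps its labeled point, so no cluster is ever empty.
        centroids = np.array([X[assign == c].mean(axis=0) for c in range(k)])
    return assign
```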
