Search Results for author: Zhuosheng Zhang

Found 101 papers, 51 papers with code

Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models

no code implementations 21 Feb 2024 Zhiwei He, Binglin Zhou, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, Rui Wang

Furthermore, we analyze two key factors that contribute to the cross-lingual consistency in text watermarking and propose a defense method that increases the AUC from 0.67 to 0.88 under CWRA.

TAG
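
The AUC quoted above measures how well a watermark detector's scores separate watermarked from unwatermarked text. A minimal sketch of how such an AUC is computed (the scores and labels below are invented; the paper's detector and the CWRA attack are not reproduced here):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical detection scores: higher means "more likely watermarked".
# In the paper's setting, scores for watermarked text drop after a
# cross-lingual rewrite/translation attack, which lowers the AUC.
labels = [1, 1, 1, 1, 0, 0, 0, 0]          # 1 = watermarked, 0 = clean
scores_after_attack = [0.62, 0.48, 0.71, 0.55, 0.40, 0.52, 0.35, 0.44]

print("AUC under attack:", roc_auc_score(labels, scores_after_attack))
```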

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space

no code implementations 19 Feb 2024 Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu

In this paper, we investigate the learning mechanisms of backdoor LMs in the frequency space by Fourier analysis.

Comprehensive Cognitive LLM Agent for Smartphone GUI Automation

1 code implementation 19 Feb 2024 Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Large language models (LLMs) have shown remarkable potential as human-like autonomous language agents to interact with real-world environments, especially for graphical user interface (GUI) automation.

Type prediction

Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models

no code implementations 19 Feb 2024 Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu

Recent work has showcased the powerful capability of large language models (LLMs) in recalling knowledge and reasoning.

knowledge editing

Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science

no code implementations 6 Feb 2024 Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao, Jian Tang, Zhuosheng Zhang, Arman Cohan, Zhiyong Lu, Mark Gerstein

Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.

GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Model

1 code implementation 4 Feb 2024 Xuanchang Zhang, Zhuosheng Zhang, Hai Zhao

Despite the rapid progress of large language models (LLMs), their task performance remains sensitive to prompt design.

Language Modelling Large Language Model

Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model

1 code implementation 23 Jan 2024 Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu

In this work, we investigate the potential of employing the QE model as the reward model (the QE-based reward model) to predict human preferences for feedback training.

Machine Translation Translation
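
A reference-free quality-estimation (QE) model scores a hypothesis from the source sentence alone, so its output can stand in for human preference as a reward signal. A minimal sketch of that idea, with qe_score as a hypothetical stand-in for a real QE model (the paper's actual reward model and feedback-training objective are not reproduced):

```python
import random

def qe_score(source: str, hypothesis: str) -> float:
    """Placeholder for a reference-free QE model: a real system would return a
    learned estimate of translation quality from (source, hypothesis) alone."""
    rng = random.Random(hash((source, hypothesis)))
    return rng.random()

def rank_by_qe_reward(source: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Score candidate translations with the QE 'reward' and rank them,
    e.g. to pick preferred outputs for feedback training or best-of-n reranking."""
    return sorted(((c, qe_score(source, c)) for c in candidates),
                  key=lambda x: x[1], reverse=True)

print(rank_by_qe_reward("Der Hund schläft auf dem Sofa.",
                        ["The dog is sleeping on the sofa.",
                         "A cat sits on a chair."]))
```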

R-Judge: Benchmarking Safety Risk Awareness for LLM Agents

1 code implementation 18 Jan 2024 Tongxin Yuan, Zhiwei He, Lingzhong Dong, Yiming Wang, Ruijie Zhao, Tian Xia, Lizhen Xu, Binglin Zhou, Fangqi Li, Zhuosheng Zhang, Rui Wang, Gongshen Liu

We introduce R-Judge, a benchmark crafted to evaluate the proficiency of LLMs in judging and identifying safety risks given agent interaction records.

Benchmarking

Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents

1 code implementation 20 Nov 2023 Zhuosheng Zhang, Yao Yao, Aston Zhang, Xiangru Tang, Xinbei Ma, Zhiwei He, Yiming Wang, Mark Gerstein, Rui Wang, Gongshen Liu, Hai Zhao

Large language models (LLMs) have dramatically enhanced the field of language intelligence, as demonstrably evidenced by their formidable empirical performance across a spectrum of complex reasoning tasks.

Structured Chemistry Reasoning with Large Language Models

1 code implementation 16 Nov 2023 Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, Lianhui Qin

Large Language Models (LLMs) excel in diverse areas, yet struggle with complex scientific reasoning, especially in the field of chemistry.

General Knowledge

MedAgents: Large Language Models as Collaborators for Zero-shot Medical Reasoning

1 code implementation 16 Nov 2023 Xiangru Tang, Anni Zou, Zhuosheng Zhang, Ziming Li, Yilun Zhao, Xingyao Zhang, Arman Cohan, Mark Gerstein

Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare.

Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models

1 code implementation 10 Oct 2023 Anni Zou, Zhuosheng Zhang, Hai Zhao, Xiangru Tang

Large language models (LLMs) have unveiled remarkable reasoning capabilities by exploiting chain-of-thought (CoT) prompting, which generates intermediate reasoning chains to serve as the rationale for deriving the answer.

You Only Look at Screens: Multimodal Chain-of-Action Agents

2 code implementations 20 Sep 2023 Zhuosheng Zhang, Aston Zhang

Autonomous user interface (UI) agents aim to facilitate task automation by interacting with the user interface without manual intervention.

Type prediction

Multi-turn Dialogue Comprehension from a Topic-aware Perspective

no code implementations 18 Sep 2023 Xinbei Ma, Yi Xu, Hai Zhao, Zhuosheng Zhang

On the other hand, the split segments are an appropriate element of multi-turn dialogue response selection.

Machine Reading Comprehension

Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models

1 code implementation 30 Jun 2023 Yiming Wang, Zhuosheng Zhang, Pei Zhang, Baosong Yang, Rui Wang

Neural-symbolic methods have demonstrated efficiency in enhancing the reasoning abilities of large language models (LLMs).

Domain Generalization In-Context Learning +1

Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension

1 code implementation COLING 2022 Jialin Chen, Zhuosheng Zhang, Hai Zhao

Machine reading comprehension (MRC) poses new challenges over logical reasoning, which aims to understand the implicit logical relations entailed in the given contexts and perform inference over them.

Logical Reasoning Machine Reading Comprehension +2

Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method

1 code implementation 22 May 2023 Yiming Wang, Zhuosheng Zhang, Rui Wang

Further, we propose a Summary Chain-of-Thought (SumCoT) technique to elicit LLMs to generate summaries step by step, which helps them integrate more fine-grained details of source documents into the final summaries that correlate with the human writing mindset.

Benchmarking Hallucination

Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact Verification

1 code implementation 10 May 2023 Anni Zou, Zhuosheng Zhang, Hai Zhao

Commonsense fact verification, as a challenging branch of commonsense question-answering (QA), aims to verify through facts whether a given commonsense claim is correct or not.

Fact Verification Question Answering

Exploring Human-Like Translation Strategy with Large Language Models

2 code implementations 6 May 2023 Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang

Compared to typical machine translation that focuses solely on source-to-target mapping, LLM-based translation can potentially mimic the human translation process which might take preparatory steps to ensure high-quality translation.

Hallucination Machine Translation +2

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?

1 code implementation 8 Feb 2023 Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang

Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.

Arithmetic Reasoning Zero-Shot Learning

Multimodal Chain-of-Thought Reasoning in Language Models

3 code implementations 2 Feb 2023 Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola

Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer.

Language Modelling Science Question Answering

Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension

no code implementations 10 Jan 2023 Zhuosheng Zhang, Hai Zhao, Longxiang Liu

We decouple the contextualized word representations by masking mechanisms in the Transformer-based PrLM, making each word only focus on the words in the current utterance, other utterances, and the two speaker roles (i.e., utterances of the sender and utterances of the receiver), respectively.
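
A toy sketch of how such channel-specific attention masks could be constructed (the token, utterance, and speaker layout below is invented; this is not the paper's exact implementation):

```python
import numpy as np

# Toy dialogue: one entry per token giving the utterance it belongs to and the
# speaker role of that utterance.
utt_id  = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # utterance index of each token
speaker = np.array([0, 0, 0, 1, 1, 0, 0, 0])   # speaker role (0 = sender, 1 = receiver)

same_utt  = utt_id[:, None] == utt_id[None, :]                        # current-utterance channel
other_utt = ~same_utt                                                 # other-utterances channel
sender    = np.broadcast_to(speaker[None, :] == 0, same_utt.shape)    # sender-role channel
receiver  = np.broadcast_to(speaker[None, :] == 1, same_utt.shape)    # receiver-role channel

def to_additive_mask(allowed: np.ndarray) -> np.ndarray:
    """Convert a boolean 'may attend' matrix into an additive attention mask."""
    return np.where(allowed, 0.0, -1e9)

print(to_additive_mask(same_utt))
```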

Self-Prompting Large Language Models for Zero-Shot Open-Domain QA

no code implementations 16 Dec 2022 Junlong Li, Zhuosheng Zhang, Hai Zhao

Open-Domain Question Answering (ODQA) aims at answering factoid questions without explicitly providing specific background documents.

In-Context Learning Open-Domain Question Answering +1

Language Model Pre-training on True Negatives

no code implementations 1 Dec 2022 Zhuosheng Zhang, Hai Zhao, Masao Utiyama, Eiichiro Sumita

Discriminative pre-trained language models (PLMs) learn to predict original texts from intentionally corrupted ones.

Language Modelling

Retrieval Augmentation for Commonsense Reasoning: A Unified Approach

1 code implementation 23 Oct 2022 Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang

However, applying such methods to commonsense reasoning tasks faces two unique challenges, i.e., the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever.

Retrieval

Towards End-to-End Open Conversational Machine Reading

no code implementations 13 Oct 2022 Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao

In open-retrieval conversational machine reading (OR-CMR) task, machines are required to do multi-turn question answering given dialogue history and a textual knowledge base.

Decision Making Question Answering +4

Task Compass: Scaling Multi-task Pre-training with Task Prefix

1 code implementation 12 Oct 2022 Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng

Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models.

Common Sense Reasoning Data Augmentation +4

Instance Regularization for Discriminative Language Model Pre-training

1 code implementation 11 Oct 2022 Zhuosheng Zhang, Hai Zhao, Ming Zhou

They treat training instances equally throughout the training process, with little attention on the individual contribution of those instances.

Denoising Language Modelling +2

Automatic Chain of Thought Prompting in Large Language Models

5 code implementations 7 Oct 2022 Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola

Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting.
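
Auto-CoT constructs such demonstrations automatically: questions are clustered, a representative question is drawn from each cluster, and its reasoning chain is elicited with a zero-shot "Let's think step by step" prompt. A rough sketch of the demonstration-selection step, using TF-IDF features in place of the sentence encoder used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [
    "If there are 3 cars and each car has 4 wheels, how many wheels are there?",
    "Tom has 5 apples and eats 2. How many apples remain?",
    "A train travels 60 km in 1 hour. How far does it travel in 3 hours?",
    "Sara buys 4 pens for 2 dollars each. How much does she spend?",
]

# 1) Embed and cluster the questions (the paper uses sentence embeddings;
#    TF-IDF keeps this sketch dependency-light).
X = TfidfVectorizer().fit_transform(questions)
k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# 2) Pick the question closest to each cluster centre as a demonstration seed.
demos = []
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
    q = questions[idx[dists.argmin()]]
    # 3) A zero-shot CoT prompt would then be sent to an LLM to generate the rationale.
    demos.append(f"Q: {q}\nA: Let's think step by step.")

print("\n\n".join(demos))
```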

Learning Better Masking for Better Language Model Pre-training

1 code implementation 23 Aug 2022 Dongjie Yang, Zhuosheng Zhang, Hai Zhao

Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs).

Denoising Language Modelling +1
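
For context, standard MLM corrupts roughly 15% of the tokens (80% replaced by [MASK], 10% by a random token, 10% left unchanged) and trains the model to recover the originals; the paper revisits how such masking ratios and schedules should be set. A toy sketch of the standard corruption step:

```python
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Return (corrupted_tokens, labels): labels hold the original token at
    masked positions and None elsewhere (standard BERT-style 80/10/10 scheme)."""
    rng = random.Random(seed)
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_token          # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)   # 10%: replace with a random token
            # else 10%: keep the original token unchanged
    return corrupted, labels

toks = "the cat sat on the mat".split()
print(mask_tokens(toks, vocab=toks))
```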

Rethinking Textual Adversarial Defense for Pre-trained Language Models

no code implementations 21 Jul 2022 Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

However, we find that most existing textual adversarial examples are unnatural, which can be easily distinguished by both human and machine.

Adversarial Attack Adversarial Defense +1

Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling

1 code implementation 18 Apr 2022 Yiyang Li, Hai Zhao, Zhuosheng Zhang

Multi-turn dialogue modeling, as a challenging branch of natural language understanding (NLU), aims to build representations for machines to understand human dialogues, which provides a solid foundation for multiple downstream tasks.

Natural Language Understanding

Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model

1 code implementation Findings (ACL) 2022 Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

We question the validity of current evaluation of robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples.

Data Augmentation Language Modelling

IF equation: a feature extractor for high-concentration time-frequency representation of mixed signals

no code implementations 23 Nov 2021 Xiangxiang Zhu, Kunde Yang, Zhuosheng Zhang

By the analysis of the properties of the IF equation, we prove that a good IF equation can unify the well-known IF and group delay estimators and provides an effective way to characterize the mixture of time-varying and frequency-varying signals.
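
For reference, the two classical estimators the abstract refers to are commonly defined as follows (textbook definitions, not the paper's IF equation itself). For an analytic signal s(t) = a(t) e^{j phi(t)} with spectrum S(omega) = |S(omega)| e^{j Phi(omega)}:

```latex
\begin{aligned}
\text{instantaneous frequency:}\quad & f_i(t) = \frac{1}{2\pi}\,\frac{d\varphi(t)}{dt},\\[4pt]
\text{group delay:}\quad & \tau_g(\omega) = -\,\frac{d\Phi(\omega)}{d\omega}.
\end{aligned}
```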

Structural Characterization for Dialogue Disentanglement

1 code implementation ACL 2022 Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine.

Disentanglement Feature Engineering +1

Tracing Origins: Coreference-aware Machine Reading Comprehension

1 code implementation ACL 2022 Baorong Huang, Zhuosheng Zhang, Hai Zhao

In this paper, we imitate the human reading process in connecting the anaphoric expressions and explicitly leverage the coreference information of the entities to enhance the word embeddings from the pre-trained language model, in order to highlight the coreference mentions of the entities that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset that is specifically designed to evaluate the coreference-related performance of a model.

Language Modelling Machine Reading Comprehension +2

Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval

no code implementations ACL 2022 Bohong Wu, Zhuosheng Zhang, JinYuan Wang, Hai Zhao

In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage.

Contrastive Learning Passage Retrieval +2
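
In-passage negative sampling contrasts a query with the gold sentence of a passage while treating the passage's other sentences as hard negatives. A minimal sketch of that contrastive step, with random vectors standing in for encoder outputs (not the paper's full training recipe):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim = 8

# Stand-ins for encoder outputs: one query, one positive sentence, and the
# remaining sentences of the same passage used as in-passage negatives.
query     = torch.randn(dim)
positive  = torch.randn(dim)
negatives = torch.randn(4, dim)

candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)   # index 0 = positive
sims = F.cosine_similarity(query.unsqueeze(0), candidates) / 0.05   # temperature-scaled

# InfoNCE: cross-entropy with the positive candidate at index 0.
loss = F.cross_entropy(sims.unsqueeze(0), torch.tensor([0]))
print(loss.item())
```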

Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese

1 code implementation 13 Oct 2021 Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, Ming Zhou

Although pre-trained models (PLMs) have achieved remarkable improvements in a wide range of NLP tasks, they are expensive in terms of time and resources.

Advances in Multi-turn Dialogue Comprehension: A Survey

no code implementations 11 Oct 2021 Zhuosheng Zhang, Hai Zhao

In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.

Reading Comprehension

Multi-tasking Dialogue Comprehension with Discourse Parsing

1 code implementation PACLIC 2021 Yuchen He, Zhuosheng Zhang, Hai Zhao

Multi-party dialogue machine reading comprehension (MRC) raises an even more challenging understanding goal on dialogue with more than two involved speakers, compared with the traditional plain passage style MRC.

Discourse Parsing Machine Reading Comprehension +1

Logic Pre-Training of Language Models

no code implementations 29 Sep 2021 Siru Ouyang, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have been shown useful for enhancing a broad range of natural language understanding (NLU) tasks.

Logical Reasoning Machine Reading Comprehension +4

Enhanced Speaker-aware Multi-party Multi-turn Dialogue Comprehension

no code implementations 9 Sep 2021 Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Multi-party multi-turn dialogue comprehension brings unprecedented challenges on handling the complicated scenarios from multiple speakers and criss-crossed discourse relationship among speaker-aware utterances.

Question Answering

Span Fine-tuning for Pre-trained Language Models

no code implementations Findings (EMNLP) 2021 Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLM) have to carefully manage input units when training on a very large text with a vocabulary consisting of millions of words.

Smoothing Dialogue States for Open Conversational Machine Reading

1 code implementation EMNLP 2021 Zhuosheng Zhang, Siru Ouyang, Hai Zhao, Masao Utiyama, Eiichiro Sumita

In this work, we propose an effective gating strategy by smoothing the two dialogue states in only one decoder and bridge decision making and question generation to provide a richer dialogue state reference.

Decision Making Question Generation +2

High-resolution chirplet transform: from parameters analysis to parameters combination

no code implementations 2 Aug 2021 Xiangxiang Zhu, Bei Li, Kunde Yang, Zhuosheng Zhang, Wenting Li

The standard chirplet transform (CT) with a chirp-modulated Gaussian window provides a valuable tool for analyzing linear chirp signals.

Vocal Bursts Intensity Prediction
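
One common textbook form of the standard chirplet transform mentioned above, with a Gaussian window g_sigma and chirp rate alpha (the paper's high-resolution variant and its parameter analysis are not reproduced):

```latex
CT_s(t_0,\omega,\alpha)
  = \int_{-\infty}^{+\infty} s(t)\, g_\sigma(t-t_0)\,
    e^{-j\left[\omega\,(t-t_0) + \tfrac{\alpha}{2}\,(t-t_0)^2\right]}\, dt,
\qquad
g_\sigma(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-t^2/(2\sigma^2)} .
```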

Cross-lingual Transferring of Pre-trained Contextualized Language Models

no code implementations 27 Jul 2021 Zuchao Li, Kevin Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.

Language Modelling Machine Translation +1

Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy

no code implementations 25 Jul 2021 Bohong Wu, Zhuosheng Zhang, Hai Zhao

Multi-hop reading comprehension (MHRC) requires not only to predict the correct answer span in the given passage, but also to provide a chain of supporting evidences for reasoning interpretability.

Multi-Hop Reading Comprehension

BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization

no code implementations 4 Jun 2021 Zhuosheng Zhang, Shucheng Yu

However, extending BO to the setting of DBA is nontrivial because in DBA only output labels, rather than the real-valued scores needed by BO, are available to attackers.

Bayesian Optimization
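
A standard workaround in hard-label attacks is to turn label-only feedback into a real-valued objective by binary-searching the distance to the decision boundary along a perturbation direction, which a BO loop can then minimize. A self-contained toy sketch of that idea (the classifier is a dummy linear model, and this is a common formulation from prior hard-label attacks, not necessarily the paper's exact objective):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3

def predict_label(x: np.ndarray) -> int:
    """Dummy hard-label classifier: only the label is observable, not the score."""
    return int(x @ w + b > 0)

def boundary_distance(x0: np.ndarray, direction: np.ndarray,
                      hi: float = 10.0, iters: int = 30) -> float:
    """Binary-search the smallest step along `direction` that flips the label.
    The resulting distance is a real-valued objective a BO loop can minimize."""
    d = direction / np.linalg.norm(direction)
    y0, lo = predict_label(x0), 0.0
    if predict_label(x0 + hi * d) == y0:
        return np.inf                      # no label flip within the search radius
    for _ in range(iters):
        mid = (lo + hi) / 2
        if predict_label(x0 + mid * d) == y0:
            lo = mid
        else:
            hi = mid
    return hi

x = rng.normal(size=5)
print(boundary_distance(x, rng.normal(size=5)))
```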

Structural Pre-training for Dialogue Comprehension

no code implementations ACL 2021 Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training.

Sentence

Fact-driven Logical Reasoning for Machine Reading Comprehension

2 code implementations NeurIPS 2021 Siru Ouyang, Zhuosheng Zhang, Hai Zhao

Recent years have witnessed an increasing interest in training machines with reasoning ability, which deeply relies on accurately and clearly presented clue forms.

Logical Reasoning Machine Reading Comprehension +1

Advances in Multi-turn Dialogue Comprehension: A Survey

no code implementations 4 Mar 2021 Zhuosheng Zhang, Hai Zhao

In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.

Language Modelling Question Answering +1

Text Compression-aided Transformer Encoding

no code implementations 11 Feb 2021 Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita

In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily.

Text Compression

Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge

no code implementations 10 Feb 2021 Zhuosheng Zhang, Junlong Li, Hai Zhao

Experimental results on four dialogue comprehension benchmark tasks show that our proposed model achieves great improvements on baselines.

Reading Comprehension

Later Span Adaptation for Language Understanding

no code implementations 1 Jan 2021 Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

Instead of too early fixing the linguistic unit input as nearly all previous work did, we propose a novel method that combines span-level information into the representations generated by PrLMs during fine-tuning phase for better flexibility.

Natural Language Understanding Sentence

Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models

no code implementations 1 Jan 2021 Zuchao Li, Kevin Barry Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.

Cross-Lingual Transfer Language Modelling +3

Enhancing Pre-trained Language Model with Lexical Simplification

no code implementations 30 Dec 2020 Rongzhou Bao, Jiayi Wang, Zhuosheng Zhang, Hai Zhao

By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences.

General Classification Language Modelling +4

SG-Net: Syntax Guided Transformer for Language Representation

no code implementations 27 Dec 2020 Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.

Machine Reading Comprehension Machine Translation +2
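
Roughly, the SDOI mask limits each word's self-attention to itself and its ancestors in the dependency tree, and this syntax-guided head is combined with the vanilla self-attention heads. A toy sketch of building such a mask from head indices (the dependency tree below is invented):

```python
import numpy as np

# Toy dependency tree given as head indices (-1 marks the root), for the
# invented sentence: [The, cat, sat, on, the, mat]
heads = np.array([1, 2, -1, 5, 5, 2])

n = len(heads)
sdoi = np.eye(n, dtype=bool)          # every word attends to itself
for i in range(n):
    j = heads[i]
    while j != -1:                    # ...and to all of its ancestors
        sdoi[i, j] = True
        j = heads[j]

# sdoi[i, k] == True means token i may attend to token k in the
# syntax-guided self-attention head; False positions are masked out.
print(sdoi.astype(int))
```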

Reference Knowledgeable Network for Machine Reading Comprehension

1 code implementation 7 Dec 2020 Yilin Zhao, Zhuosheng Zhang, Hai Zhao

Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge in necessity.

Machine Reading Comprehension Multi-Choice MRC

LIMIT-BERT : Linguistics Informed Multi-Task BERT

1 code implementation Findings of the Association for Computational Linguistics 2020 Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang

Besides, LIMIT-BERT takes a semi-supervised learning strategy to offer the same large amount of linguistics task data as that for the language model training.

Language Modelling Multi-Task Learning +3

Topic-Aware Multi-turn Dialogue Modeling

1 code implementation 26 Sep 2020 Yi Xu, Hai Zhao, Zhuosheng Zhang

In the retrieval-based multi-turn dialogue modeling, it remains a challenge to select the most appropriate response according to extracting salient features in context utterances.

Retrieval

Composing Answer from Multi-spans for Reading Comprehension

no code implementations 14 Sep 2020 Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

This paper presents a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks whose answers cannot be simply extracted as one span from the given passages.

Machine Reading Comprehension

Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

1 code implementation 14 Sep 2020 Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles.

Retrieval

Dialogue-adaptive Language Model Pre-training From Quality Estimation

1 code implementation 10 Sep 2020 Junlong Li, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have achieved great success on a wide range of natural language processing tasks by virtue of the universal language representation ability obtained by self-supervised learning on a large corpus.

Informativeness Language Modelling +2

Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond

1 code implementation 13 May 2020 Zhuosheng Zhang, Hai Zhao, Rui Wang

In this survey, we provide a comprehensive and comparative review on MRC covering overall research topics about 1) the origin and development of MRC and CLM, with a particular focus on the role of CLMs; 2) the impact of MRC and CLM to the NLP community; 3) the definition, datasets, and evaluation of MRC; 4) general MRC architecture and technical methods in the view of two-stage Encoder-Decoder solving architecture from the insights of the cognitive process of humans; 5) previous highlights, emerging topics, and our empirical analysis, among which we especially focus on what works in different periods of MRC researches.

Machine Reading Comprehension Text Matching

Neural Machine Translation with Universal Visual Representation

1 code implementation ICLR 2020 Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao

Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.

Machine Translation NMT +2

Data-dependent Gaussian Prior Objective for Language Generation

no code implementations ICLR 2020 Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.

Image Captioning L2 Regularization +4

Knowledgeable Dialogue Reading Comprehension on Key Turns

no code implementations 29 Apr 2020 Junlong Li, Zhuosheng Zhang, Hai Zhao

In this paper, the relevance of each turn to the question is calculated to choose key turns.

Answer Selection Language Modelling +1

Retrospective Reader for Machine Reading Comprehension

2 code implementations 27 Jan 2020 Zhuosheng Zhang, Junjie Yang, Hai Zhao

Inspired by how humans solve reading comprehension questions, we proposed a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question, and yield an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction.

Machine Reading Comprehension Question Answering

Explicit Sentence Compression for Neural Machine Translation

1 code implementation 27 Dec 2019 Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.

Machine Translation NMT +3

Probing Contextualized Sentence Representations with Visual Awareness

no code implementations 7 Nov 2019 Zhuosheng Zhang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Hai Zhao

We present a universal framework to model contextualized sentence representations with visual awareness that is motivated to overcome the shortcomings of the multimodal parallel data with manual annotations.

Machine Translation Natural Language Inference +2

SJTU-NICT at MRP 2019: Multi-Task Learning for End-to-End Uniform Semantic Graph Parsing

no code implementations CONLL 2019 Zuchao Li, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

This paper describes our SJTU-NICT's system for participating in the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference for Computational Language Learning (CoNLL).

Multi-Task Learning

LIMIT-BERT : Linguistic Informed Multi-Task BERT

no code implementations 31 Oct 2019 Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang

In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT) for learning language representations across multiple linguistic tasks by Multi-Task Learning (MTL).

Multi-Task Learning POS +2

Semantics-aware BERT for Language Understanding

1 code implementation 5 Sep 2019 Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou

The latest work on language representations carefully integrates contextualized features into language model training, which enables a series of success especially in various machine reading comprehension and natural language inference tasks.

Language Modelling Machine Reading Comprehension +5

Modeling Named Entity Embedding Distribution into Hypersphere

no code implementations 3 Sep 2019 Zhuosheng Zhang, Bingjie Tang, Zuchao Li, Hai Zhao

This work models named entity distribution from a way of visualizing topological structure of embedding space, so that we make an assumption that most, if not all, named entities (NEs) for a language tend to aggregate together to be accommodated by a specific hypersphere in embedding space.

named-entity-recognition Named Entity Recognition +1
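
The hypersphere assumption can be made concrete by fitting a centre and radius to the embeddings of known entities and checking whether a new word's embedding falls inside. A toy sketch with random vectors in place of real word embeddings (simple centroid plus quantile radius, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings of known named entities and of a candidate word.
entity_vecs = rng.normal(loc=1.0, size=(200, 50))
candidate   = rng.normal(loc=1.0, size=50)

center = entity_vecs.mean(axis=0)
radii  = np.linalg.norm(entity_vecs - center, axis=1)
radius = np.quantile(radii, 0.95)     # cover most (not all) known entities

is_entity_like = np.linalg.norm(candidate - center) <= radius
print(f"radius={radius:.2f}, candidate inside hypersphere: {bool(is_entity_like)}")
```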

A Smart Sliding Chinese Pinyin Input Method Editor on Touchscreen

no code implementations 3 Sep 2019 Zhuosheng Zhang, Zhen Meng, Hai Zhao

This paper presents a smart sliding Chinese pinyin Input Method Editor (IME) for touchscreen devices which allows user finger sliding from one key to another on the touchscreen instead of tapping keys one by one, while the target Chinese character sequence will be predicted during the sliding process to help user input Chinese characters efficiently.

Open Named Entity Modeling from Embedding Distribution

no code implementations 31 Aug 2019 Ying Luo, Hai Zhao, Zhuosheng Zhang, Bingjie Tang

For monolingual cases, the proposed named entity model gives an open description of diverse named entity types and different languages.

named-entity-recognition Named Entity Recognition +2

DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension

2 code implementations 30 Aug 2019 Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou

Multi-choice reading comprehension is a challenging task to select an answer from a set of candidate options when given passage and question.

Reading Comprehension Sentence

SG-Net: Syntax-Guided Machine Reading Comprehension

1 code implementation 14 Aug 2019 Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.

Language Modelling Machine Reading Comprehension +1

Judging Chemical Reaction Practicality From Positive Sample only Learning

no code implementations 22 Apr 2019 Shu Jiang, Zhuosheng Zhang, Hai Zhao, Jiangtong Li, Yang Yang, Bao-liang Lu, Ning Xia

Chemical reaction practicality is the core task among all symbol intelligence based chemical information processing, for example, it provides indispensable clue for further automatic synthesis route inference.

Open Vocabulary Learning for Neural Chinese Pinyin IME

1 code implementation ACL 2019 Zhuosheng Zhang, Yafang Huang, Hai Zhao

Pinyin-to-character (P2C) conversion is the core component of pinyin-based Chinese input method engine (IME).

Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing

1 code implementation CONLL 2018 Zuchao Li, Shexia He, Zhuosheng Zhang, Hai Zhao

This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.

Lemmatization Part-Of-Speech Tagging +3

Attentive Semantic Role Labeling with Boundary Indicator

no code implementations 8 Sep 2018 Zhuosheng Zhang, Shexia He, Zuchao Li, Hai Zhao

The goal of semantic role labeling (SRL) is to discover the predicate-argument structure of a sentence, which plays a critical role in deep processing of natural language.

Semantic Role Labeling Sentence

Explicit Contextual Semantics for Text Comprehension

no code implementations 8 Sep 2018 Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Hai Zhao

Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL) task.

Machine Reading Comprehension Natural Language Understanding +1

Moon IME: Neural-based Chinese Pinyin Aided Input Method with Customizable Association

no code implementations ACL 2018 Yafang Huang, Zuchao Li, Zhuosheng Zhang, Hai Zhao

Chinese pinyin input method engine (IME) lets user conveniently input Chinese into a computer by typing pinyin through the common keyboard.

Information Retrieval Machine Translation +3

One-shot Learning for Question-Answering in Gaokao History Challenge

1 code implementation COLING 2018 Zhuosheng Zhang, Hai Zhao

Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers.

One-Shot Learning Question Answering

Modeling Multi-turn Conversation with Deep Utterance Aggregation

1 code implementation COLING 2018 Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu

In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.

Conversational Response Selection Retrieval

SJTU-NLP at SemEval-2018 Task 9: Neural Hypernym Discovery with Term Embeddings

no code implementations SEMEVAL 2018 Zhuosheng Zhang, Jiangtong Li, Hai Zhao, Bingjie Tang

This paper describes a hypernym discovery system for our participation in the SemEval-2018 Task 9, which aims to discover the best (set of) candidate hypernyms for input concepts or entities, given the search space of a pre-defined vocabulary.

Hypernym Discovery
