no code implementations • ACL (RepL4NLP) 2021 • Qiwei Peng, David Weir, Julie Weeds
Recently, impressive performance on various natural language understanding tasks has been achieved by explicitly incorporating syntactic and semantic information into pre-trained models such as BERT and RoBERTa.
Tasks: Natural Language Understanding · Semantic Textual Similarity · +4
no code implementations • ACL 2022 • Qiwei Peng, David Weir, Julie Weeds, Yekun Chai
Paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings.
1 code implementation • 11 Apr 2024 • Qingyi Liu, Yekun Chai, Shuohuan Wang, Yu Sun, Qiwei Peng, Keze Wang, Hua Wu
This paper presents GPTfluence, a novel approach that leverages a featurized simulation to assess the impact of training examples on the training dynamics of GPT models.
1 code implementation • 26 Feb 2024 • Qiwei Peng, Yekun Chai, Xuhong Li
These benchmarks have overlooked the vast landscape of massively multilingual NL to multilingual code, leaving a critical gap in the evaluation of multilingual LLMs.
no code implementations • COLING 2022 • Qiwei Peng, David Weir, Julie Weeds
We therefore propose to combine sentence encoders with an alignment component: each sentence is represented as a list of predicate-argument spans (whose span representations are derived from the sentence encoder), and sentence-level meaning comparison is decomposed into the alignment between those spans for paraphrase identification.
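The span-alignment idea can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the toy span vectors, the cosine similarity, and the max-then-average aggregation are all assumptions standing in for encoder-derived span representations and the learned alignment component.

```python
import math

def cosine(u, v):
    """Cosine similarity between two span vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def align_score(spans_a, spans_b):
    """Decompose sentence-level comparison into span alignment:
    match each span to its most similar span on the other side,
    then average both directions into one sentence-level score."""
    a_to_b = sum(max(cosine(a, b) for b in spans_b) for a in spans_a) / len(spans_a)
    b_to_a = sum(max(cosine(a, b) for a in spans_a) for b in spans_b) / len(spans_b)
    return (a_to_b + b_to_a) / 2

# Toy predicate-argument span vectors (hypothetical; real span
# representations would come from a sentence encoder).
sent_a = [[1.0, 0.0], [0.0, 1.0]]
sent_b = [[0.9, 0.1], [0.1, 0.9]]
score = align_score(sent_a, sent_b)
```

A high score indicates that each predicate-argument span in one sentence finds a close match in the other, which is the intuition behind treating paraphrase identification as span alignment rather than a single sentence-vector comparison.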
1 code implementation • Findings (ACL) 2021 • Lorenzo Bertolini, Julie Weeds, David Weir, Qiwei Peng
The exploitation of syntactic graphs (SyGs) as a word's context has been shown to be beneficial for distributional semantic models (DSMs), both at the level of individual word representations and in deriving phrasal representations via composition.