no code implementations • ACL (RepL4NLP) 2021 • Qiwei Peng, David Weir, Julie Weeds
Recently, impressive performance on various natural language understanding tasks has been achieved by explicitly incorporating syntactic and semantic information into pre-trained models such as BERT and RoBERTa.
1 code implementation • COLING 2022 • Lorenzo Bertolini, Julie Weeds, David Weir
Here, we investigate whether lexical entailment (LE, i.e. hyponymy, or the is-a relation between words) can be generalised in a compositional manner.
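As a rough illustration of the LE relation itself (not of the paper's compositional generalisation setup), the sketch below checks directional is-a relations using WordNet via NLTK; the choice of WordNet and the noun-only restriction are assumptions made purely for illustration.

```python
# Minimal sketch: lexical entailment (is-a) checks via WordNet.
# Assumption: WordNet's hypernym hierarchy stands in for LE data.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def is_hyponym_of(word: str, candidate_hypernym: str) -> bool:
    """True if any noun sense of `word` lies below any noun sense of
    `candidate_hypernym` in WordNet's hypernym hierarchy."""
    hypernym_synsets = set(wn.synsets(candidate_hypernym, pos=wn.NOUN))
    for synset in wn.synsets(word, pos=wn.NOUN):
        # closure() walks the transitive hypernym chain of this sense
        ancestors = set(synset.closure(lambda s: s.hypernyms()))
        if ancestors & hypernym_synsets:
            return True
    return False

print(is_hyponym_of("dog", "animal"))   # True: dog is-a animal
print(is_hyponym_of("animal", "dog"))   # False: LE is directional
```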
no code implementations • ACL 2022 • Qiwei Peng, David Weir, Julie Weeds, Yekun Chai
Paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings.
1 code implementation • COLING 2022 • Wing Yan Li, Julie Weeds, David Weir
This paper addresses a deficiency in existing cross-lingual information retrieval (CLIR) datasets and provides a robust evaluation of CLIR systems’ disambiguation ability.
no code implementations • 15 Mar 2023 • Nestor Prieto-Chavana, Julie Weeds, David Weir
This process requires a fact-checker to formulate a search query based on the fact and to present it to a search engine.
no code implementations • COLING 2022 • Qiwei Peng, David Weir, Julie Weeds
We therefore propose to combine sentence encoders with an alignment component: each sentence is represented as a list of predicate-argument spans (whose representations are derived from the sentence encoders), and sentence-level meaning comparison is decomposed into alignment between the spans of the two sentences for paraphrase identification tasks.
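The following is a minimal sketch of this span-alignment idea, not the paper's implementation: it assumes hand-given predicate-argument spans (rather than spans from a semantic-role parser), a generic bert-base-uncased encoder, spans encoded in isolation rather than within their full sentence, and a simple greedy max-over-cosine aggregation in place of the paper's actual scoring.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def span_embeddings(spans):
    """Encode each span by mean-pooling its token embeddings.
    (Simplification: each span is encoded in isolation here.)"""
    vecs = []
    for span in spans:
        inputs = tokenizer(span, return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
        vecs.append(hidden.mean(dim=1).squeeze(0))
    return torch.stack(vecs)                              # (n_spans, dim)

# Hypothetical predicate-argument spans for two candidate paraphrases.
spans_a = ["the committee approved", "the new budget", "on friday"]
spans_b = ["the new spending plan", "was passed", "by the committee"]

a = torch.nn.functional.normalize(span_embeddings(spans_a), dim=-1)
b = torch.nn.functional.normalize(span_embeddings(spans_b), dim=-1)
sim = a @ b.T                       # (n_a, n_b) cosine similarity matrix
# Greedy alignment: match each span in A to its most similar span in B.
score = sim.max(dim=1).values.mean().item()
print(f"span-alignment score: {score:.3f}")
```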
1 code implementation • Findings (ACL) 2021 • Lorenzo Bertolini, Julie Weeds, David Weir, Qiwei Peng
The exploitation of syntactic graphs (SyGs) as a word's context has been shown to be beneficial for distributional semantic models (DSMs), both at the level of individual word representations and in deriving phrasal representations via composition.
no code implementations • COLING 2020 • Colin Ashby, David Weir
HTML tags are typically discarded in free-text Named Entity Recognition from Web pages.
1 code implementation • EACL 2021 • Thomas Kober, Julie Weeds, Lorenzo Bertolini, David Weir
The automatic detection of hypernymy relationships represents a challenging problem in NLP.
1 code implementation • ACL 2017 • Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir
Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection.
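As a toy illustration of this sparsity claim (the corpus, window size, and counting scheme below are illustrative assumptions, not the paper's setup), even on a tiny corpus most word-context cells of a count-based co-occurrence matrix are zero, including pairs that are plausible but simply unobserved.

```python
# Toy illustration: sparsity of a window-based co-occurrence matrix.
from collections import Counter

corpus = [
    "the black cat chased the mouse".split(),
    "the black dog chased the ball".split(),
    "a small cat slept".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
counts = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        for c in sent[max(0, i - 2):i] + sent[i + 1:i + 3]:  # +/-2 word window
            counts[(w, c)] += 1

observed = len(counts)
possible = len(vocab) ** 2
print(f"observed cells: {observed} / {possible} "
      f"({1 - observed / possible:.0%} of the matrix is zero)")
# e.g. ('dog', 'slept') is never observed even though it is perfectly plausible.
```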
no code implementations • EACL 2017 • Julie Weeds, Thomas Kober, Jeremy Reffin, David Weir
Non-compositional phrases such as "red herring" and weakly compositional phrases such as "spelling bee" are an integral part of natural language (Sag, 2002).
1 code implementation • WS 2017 • Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone.
no code implementations • CL 2016 • David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober
We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees.
1 code implementation • EMNLP 2016 • Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir
Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed.