Search Results for author: David Weir

Found 21 papers, 8 papers with code

Structure-aware Sentence Encoder in BERT-Based Siamese Network

no code implementations ACL (RepL4NLP) 2021 Qiwei Peng, David Weir, Julie Weeds

Recently, impressive performance on various natural language understanding tasks has been achieved by explicitly incorporating syntactic and semantic information into pre-trained models such as BERT and RoBERTa.

Natural Language Understanding, Semantic Textual Similarity, +4

Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment

1 code implementation COLING 2022 Lorenzo Bertolini, Julie Weeds, David Weir

Here, we investigate whether lexical entailment (LE, i.e. hyponymy or the is-a relation between words) can be generalised in a compositional manner.

Lexical Entailment, Transfer Learning

MuSeCLIR: A Multiple Senses and Cross-lingual Information Retrieval Dataset

1 code implementation COLING 2022 Wing Yan Li, Julie Weeds, David Weir

This paper addresses a deficiency in existing cross-lingual information retrieval (CLIR) datasets and provides a robust evaluation of CLIR systems’ disambiguation ability.

Cross-Lingual Information Retrieval, Retrieval, +1

Automated Query Generation for Evidence Collection from Web Search Engines

no code implementations 15 Mar 2023 Nestor Prieto-Chavana, Julie Weeds, David Weir

This process requires a fact-checker to formulate a search query based on the fact and to present it to a search engine.

Text Generation

Towards Structure-aware Paraphrase Identification with Phrase Alignment Using Sentence Encoders

no code implementations COLING 2022 Qiwei Peng, David Weir, Julie Weeds

Therefore, we propose to combine sentence encoders with an alignment component: each sentence is represented as a list of predicate-argument spans (whose representations are derived from sentence encoders), and the sentence-level meaning comparison is decomposed into an alignment between those spans for paraphrase identification (see the sketch after this entry).

Paraphrase Identification, Sentence
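
The abstract above describes a general recipe: encode each sentence, pool a representation for each predicate-argument span, and compare the pair by aligning spans across sentences. The following is a minimal illustrative sketch of that idea, not the paper's actual method: it assumes precomputed span embeddings and aligns spans by maximum cosine similarity, and all names and the scoring choice are assumptions.

```python
# Sketch: span-based alignment for paraphrase scoring, assuming precomputed
# span embeddings (e.g., mean-pooled encoder states over each span).
# Illustrative only; not the paper's implementation.
import numpy as np

def align_and_score(spans_a: np.ndarray, spans_b: np.ndarray) -> float:
    """Align each span in A to its most similar span in B (and vice versa),
    then average the two directions into one sentence-level score.

    spans_a: (m, d) array of span embeddings for sentence A
    spans_b: (n, d) array of span embeddings for sentence B
    """
    # Normalize rows so dot products become cosine similarities.
    a = spans_a / np.linalg.norm(spans_a, axis=1, keepdims=True)
    b = spans_b / np.linalg.norm(spans_b, axis=1, keepdims=True)
    sim = a @ b.T                    # (m, n) pairwise cosine similarities
    a_to_b = sim.max(axis=1).mean()  # best match in B for each span of A
    b_to_a = sim.max(axis=0).mean()  # best match in A for each span of B
    return 0.5 * (a_to_b + b_to_a)   # symmetric sentence-level score

# Toy usage with random 8-dimensional "span embeddings".
rng = np.random.default_rng(0)
score = align_and_score(rng.normal(size=(3, 8)), rng.normal(size=(4, 8)))
print(f"paraphrase score: {score:.3f}")
```

Averaging the two alignment directions keeps the score symmetric even when the sentences have different numbers of spans.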

Representing Syntax and Composition with Geometric Transformations

1 code implementation Findings (ACL) 2021 Lorenzo Bertolini, Julie Weeds, David Weir, Qiwei Peng

The exploitation of syntactic graphs (SyGs) as a word's context has been shown to be beneficial for distributional semantic models (DSMs), both at the level of individual word representations and in deriving phrasal representations via composition.

Knowledge Graphs

Data Augmentation for Hypernymy Detection

1 code implementation EACL 2021 Thomas Kober, Julie Weeds, Lorenzo Bertolini, David Weir

The automatic detection of hypernymy relationships represents a challenging problem in NLP.

Data Augmentation

Improving Semantic Composition with Offset Inference

1 code implementation ACL 2017 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection.

Semantic Composition

When a Red Herring is Not a Red Herring: Using Compositional Methods to Detect Non-Compositional Phrases

no code implementations EACL 2017 Julie Weeds, Thomas Kober, Jeremy Reffin, David Weir

Non-compositional phrases such as "red herring" and weakly compositional phrases such as "spelling bee" are an integral part of natural language (Sag, 2002).

One Representation per Word - Does it make Sense for Composition?

1 code implementation WS 2017 Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir

In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone.

Aligning Packed Dependency Trees: a theory of composition for distributional semantics

no code implementations CL 2016 David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober

We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees.

Improving Sparse Word Representations with Distributional Inference for Semantic Composition

1 code implementation EMNLP 2016 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed.

Semantic Composition, Word Similarity
