no code implementations • CL (ACL) 2021 • Miloš Stanojević, Mark Steedman
Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the “separable” permutations.
no code implementations • EMNLP (insights) 2021 • Sabine Weber, Mark Steedman
The training of NLP models often requires large amounts of labelled training data, which makes it difficult to expand existing models to new languages.
1 code implementation • Findings (EMNLP) 2021 • Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, Mark Steedman
In this paper, we introduce the new task of open-domain contextual link prediction which has access to both the textual context and the KG structure to perform link prediction.
1 code implementation • NAACL (TextGraphs) 2021 • Sabine Weber, Mark Steedman
We use a German WordNet equivalent, GermaNet, to automatically generate training data for German general entity typing.
1 code implementation • ACL (IWPT) 2021 • Tianyi Li, Sujian Li, Mark Steedman
Strong and affordable in-domain data is a desirable asset when transferring trained semantic parsers to novel domains.
1 code implementation • IWCS (ACL) 2021 • Miloš Stanojević, Mark Steedman
We present a method for computing all quantifier scopes that can be extracted from a single CCG derivation.
no code implementations • NAACL (CMCL) 2021 • Miloš Stanojević, Shohini Bhattasali, Donald Dunagan, Luca Campanelli, Mark Steedman, Jonathan Brennan, John Hale
Hierarchical sentence structure plays a role in word-by-word human sentence comprehension, but it remains unclear how best to characterize this structure and unknown how exactly it would be recognized in a step-by-step process model.
1 code implementation • 14 Mar 2025 • Liang Cheng, Tianyi Li, Zhaowei Wang, Tianyang Liu, Mark Steedman
Extensive evaluations show that our framework can significantly reduce hallucinations from attestation bias.
1 code implementation • 15 Oct 2024 • Kaiqiao Han, Tianqing Fang, Zhaowei Wang, Yangqiu Song, Mark Steedman
While Large Language Models (LLMs) have showcased remarkable proficiency in reasoning, there is still a concern about hallucinations and unreliable reasoning issues due to semantic associations and superficial logical chains.
no code implementations • 26 Aug 2024 • Tianyang Liu, Tianyi Li, Liang Cheng, Mark Steedman
Large Language Models (LLMs) are reported to hold undesirable attestation bias on inference tasks: when asked to predict if a premise P entails a hypothesis H, instead of considering H's conditional truthfulness entailed by P, LLMs tend to use the out-of-context truth label of H as a fragile proxy.
no code implementations • 22 Aug 2024 • Louis Mahon, Omri Abend, Uri Berger, Katherine Demuth, Mark Johnson, Mark Steedman
This work reimplements a recent semantic bootstrapping child-language acquisition model, which was originally designed for English, and trains it to learn a new language: Hebrew.
1 code implementation • 22 Feb 2024 • Wendi Zhou, Tianyi Li, Pavlos Vougiouklis, Mark Steedman, Jeff Z. Pan
In this paper, we focus on predicative user intents as "how a customer uses a product", and pose intent understanding as a natural language reasoning task, independent of product ontologies.
1 code implementation • 29 Jan 2024 • Nikita Moghe, Arnisa Fazla, Chantal Amrhein, Tom Kocmi, Mark Steedman, Alexandra Birch, Rico Sennrich, Liane Guillou
We benchmark metric performance, assess their incremental performance over successive campaigns, and measure their sensitivity to a range of linguistic phenomena.
1 code implementation • 26 May 2023 • Matt Grenander, Shay B. Cohen, Mark Steedman
We propose a sentence-incremental neural coreference resolution system which incrementally builds clusters after marking mention boundaries in a shift-reduce method.
1 code implementation • 23 May 2023 • Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman
Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI), necessary for applied tasks like question answering and summarization.
1 code implementation • 23 Feb 2023 • Elizabeth Nielsen, Sharon Goldwater, Mark Steedman
Parsing spoken dialogue presents challenges that parsing text does not, including a lack of clear sentence boundaries.
no code implementations • 20 Dec 2022 • Nikita Moghe, Tom Sherborne, Mark Steedman, Alexandra Birch
We calculate the correlation between the metric's ability to predict a good/bad translation and success/failure on the final task for the Translate-Test setup.
no code implementations • 28 Oct 2022 • Miloš Stanojević, Jonathan R. Brennan, Donald Dunagan, Mark Steedman, John T. Hale
These effects are spatially distinct from bilateral superior temporal effects that are unique to predictability.
1 code implementation • 10 Oct 2022 • Tianyi Li, Mohammad Javad Hosseini, Sabine Weber, Mark Steedman
We examine LMs' competence in directional predicate entailment via supervised fine-tuning with prompts.
1 code implementation • 1 Aug 2022 • Ratish Puduppully, Parag Jain, Nancy F. Chen, Mark Steedman
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary.
1 code implementation • 30 Jul 2022 • Nick McKenna, Tianyi Li, Mark Johnson, Mark Steedman
The diversity and Zipfian frequency distribution of natural language predicates in corpora lead to sparsity in Entailment Graphs (EGs) built by Open Relation Extraction (ORE).
no code implementations • 5 Jul 2022 • Malihe Alikhani, Thomas Kober, Bashar Alhafni, Yue Chen, Mert Inan, Elizabeth Nielsen, Shahab Raji, Mark Steedman, Matthew Stone
Typologically diverse languages offer systems of lexical and grammatical aspect that allow speakers to focus on facets of event structure in ways that comport with the specific communicative setting and discourse constraints they face.
1 code implementation • Findings (ACL) 2022 • Tianyi Li, Sabine Weber, Mohammad Javad Hosseini, Liane Guillou, Mark Steedman
Predicate entailment detection is a crucial task for question-answering from text, where previous work has explored unsupervised learning of entailment graphs from typed open relation triples.
1 code implementation • EMNLP 2021 • Nikita Moghe, Mark Steedman, Alexandra Birch
In this work, we enhance the transfer learning process by intermediate fine-tuning of pretrained multilingual models, where the multilingual models are fine-tuned with different but related data and/or tasks.
2 code implementations • 22 Sep 2021 • Ida Szubert, Omri Abend, Nathan Schneider, Samuel Gibbon, Louis Mahon, Sharon Goldwater, Mark Steedman
We then demonstrate the utility of the compiled corpora through (1) a longitudinal corpus study of the prevalence of different syntactic and semantic phenomena in the CDS, and (2) applying an existing computational model of language acquisition to the two corpora and briefly comparing the results across languages.
1 code implementation • EMNLP (insights) 2021 • Liane Guillou, Sander Bijl de Vroe, Mark Johnson, Mark Steedman
Understanding linguistic modality is widely seen as important for downstream tasks such as Question Answering and Knowledge Graph Population.
1 code implementation • ACL (CASE) 2021 • Sander Bijl de Vroe, Liane Guillou, Miloš Stanojević, Nick McKenna, Mark Steedman
Language provides speakers with a rich system of modality for expressing thoughts about events, without being committed to their actual occurrence.
1 code implementation • COLING (TextGraphs) 2020 • Liane Guillou, Sander Bijl de Vroe, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman
We present a novel method for injecting temporality into entailment graphs to address the problem of spurious entailments, which may arise from similar but temporally distinct events involving the same pair of entities.
no code implementations • ACL 2021 • Elizabeth Nielsen, Mark Steedman, Sharon Goldwater
We investigate how prosody affects a parser that receives an entire dialogue turn as input (a turn-based model), instead of gold standard pre-segmented SUs (an SU-based model).
no code implementations • EMNLP 2021 • Nick McKenna, Liane Guillou, Mohammad Javad Hosseini, Sander Bijl de Vroe, Mark Johnson, Mark Steedman
Drawing inferences between open-domain natural language predicates is a necessity for true language understanding.
no code implementations • Joint Conference on Lexical and Computational Semantics 2020 • Nick McKenna, Mark Steedman
We present a semi-supervised model which learns the semantics of negation purely through analysis of syntactic structure.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ida Szubert, Marco Damonte, Shay B. Cohen, Mark Steedman
Abstract Meaning Representation (AMR) parsing aims at converting sentences into AMR representations.
no code implementations • COLING 2020 • Thomas Kober, Malihe Alikhani, Matthew Stone, Mark Steedman
The interpretation of the lexical aspect of verbs in English plays a crucial role for recognizing textual entailment and learning discourse-level inferences.
no code implementations • WS 2020 • Miloš Stanojević, Mark Steedman
Concretely, by using a grammar formalism to restrict the space of possible trees we can use dynamic programming parsing algorithms for exact search for the most probable tree.
no code implementations • ACL 2020 • Miloš Stanojević, Mark Steedman
Incremental syntactic parsing has been an active research area both for cognitive scientists trying to model human sentence processing and for NLP researchers attempting to combine incremental parsing with language modelling for ASR and MT.
no code implementations • EMNLP 2020 • Elizabeth Nielsen, Mark Steedman, Sharon Goldwater
We find that these innovations lead to an improvement from 87.5% to 88.7% accuracy on pitch accent detection on American English speech in the Boston University Radio News Corpus, a state-of-the-art result.
no code implementations • WS 2019 • Ida Szubert, Mark Steedman
Combining two graphs requires merging the nodes which are counterparts of each other.
no code implementations • 23 Aug 2019 • Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang
In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.
no code implementations • WS 2019 • Sabine Weber, Mark Steedman
This paper presents ongoing work on the construction and alignment of predicate entailment graphs in English and German.
1 code implementation • ACL 2019 • Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, Mark Steedman
The new entailment score outperforms prior state-of-the-art results on a standard entailment dataset, and the new link prediction scores show improvements over the raw link prediction scores.
no code implementations • ACL 2019 • John Torr, Miloš Stanojević, Mark Steedman, Shay B. Cohen
Minimalist Grammars (Stabler, 1997) are a computationally oriented and rigorous formalisation of many aspects of Chomsky's (1995) Minimalist Program.
1 code implementation • NAACL 2019 • Miloš Stanojević, Mark Steedman
The main obstacle to incremental sentence processing arises from right-branching constituent structures, which are present in the majority of English sentences, as well as optional constituents that adjoin on the right, such as right adjuncts and right conjuncts.
5 code implementations • WS 2019 • Thomas Kober, Sander Bijl de Vroe, Mark Steedman
Inferences regarding "Jane's arrival in London" from predications such as "Jane is going to London" or "Jane has gone to London" depend on tense and aspect of the predications.
2 code implementations • EMNLP 2018 • Gözde Gül Şahin, Mark Steedman
Neural NLP systems achieve high scores in the presence of sizable training datasets.
no code implementations • ACL 2018 • Mark Johnson, Peter Anderson, Mark Dras, Mark Steedman
Because obtaining training data is often the most difficult part of an NLP or ML project, we develop methods for predicting how much data is required to achieve a desired test accuracy by extrapolating results from models trained on a small pilot training dataset.
1 code implementation • ACL 2018 • Gözde Gül Şahin, Mark Steedman
Character-level models have become a popular approach, especially for their accessibility and ability to handle unseen data.
1 code implementation • TACL 2018 • Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, Mark Steedman
We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph.
1 code implementation • EMNLP 2017 • Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steedman, Mirella Lapata
In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs.
1 code implementation • EMNLP 2016 • Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, Mark Steedman
We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis.
1 code implementation • TACL 2016 • Siva Reddy, Oscar Täckström, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, Mirella Lapata
In contrast, partly due to the lack of a strong type system, dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages.
no code implementations • TACL 2014 • Siva Reddy, Mirella Lapata, Mark Steedman
In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs.
no code implementations • TACL 2014 • Mike Lewis, Mark Steedman
Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal.
no code implementations • TACL 2013 • Mike Lewis, Mark Steedman
We introduce a new approach to semantics which combines the benefits of distributional and formal logical semantics.