Search Results for author: Lawrence S. Moss

Found 7 papers, 4 papers with code

Extracting Mathematical Concepts with Large Language Models

no code implementations • 29 Aug 2023 • Valeria de Paiva, Qiyue Gao, Pavel Kovalev, Lawrence S. Moss

Where our study diverges from previous work is in (1) providing a more thorough analysis of what makes mathematical term extraction a difficult problem to begin with; (2) paying close attention to inter-annotator disagreements; (3) providing a set of guidelines which both human and machine annotators could use to standardize the extraction process; (4) introducing a new annotation tool to help humans with ATE, applicable to any mathematical field and even beyond mathematics; (5) using prompts to ChatGPT as part of the extraction process, and proposing best practices for such prompts; and (6) raising the question of whether ChatGPT could be used as an annotator on the same level as human experts.

Term Extraction
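
A minimal sketch of the prompt-based extraction described in point (5) above, assuming the current OpenAI Python SDK; the model name, prompt wording, and post-processing here are illustrative assumptions, not the authors' actual prompts or pipeline.

```python
# Hedged sketch: prompting a chat model to extract mathematical terms from a
# passage, in the spirit of contribution (5) above. Prompt wording and model
# choice are illustrative assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Extract every mathematical concept or technical term from the text below. "
    "Return one term per line, with no definitions or commentary.\n\n"
    "Text:\n{passage}"
)

def extract_terms(passage: str, model: str = "gpt-4o-mini") -> list[str]:
    """Return a list of candidate mathematical terms found in `passage`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(passage=passage)}],
        temperature=0,  # deterministic output makes comparison with human annotators easier
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip("-• ").strip() for line in lines if line.strip()]

if __name__ == "__main__":
    sample = "A monoidal category is a category equipped with a tensor product and a unit object."
    print(extract_terms(sample))
```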

OCNLI: Original Chinese Natural Language Inference

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hai Hu, Kyle Richardson, Liang Xu, Lu Li, Sandra Kuebler, Lawrence S. Moss

In this paper, we present the first large-scale NLI dataset (consisting of ~56,000 annotated sentence pairs) for Chinese called the Original Chinese Natural Language Inference dataset (OCNLI).

Natural Language Inference · Sentence
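
For readers who want to inspect the data, a minimal loading sketch follows; OCNLI is released as JSON-lines files, and the field names used here ("sentence1", "sentence2", "label") follow common NLI conventions but should be treated as assumptions about the released format rather than a spec.

```python
# Minimal sketch of reading OCNLI-style JSON-lines data. The filename and the
# field names below are assumptions based on common NLI release conventions.
import json
from pathlib import Path

def load_ocnli(path: str | Path) -> list[dict]:
    """Load premise/hypothesis/label triples from a JSON-lines file."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            examples.append({
                "premise": record["sentence1"],
                "hypothesis": record["sentence2"],
                "label": record["label"],  # entailment / neutral / contradiction
            })
    return examples

# Usage (assuming a local copy of the released data):
# train = load_ocnli("ocnli/train.json")
# print(len(train), train[0])
```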

MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity

1 code implementation • SCiL 2020 • Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S. Moss, Sandra Kuebler

We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus.

Data Augmentation · Natural Language Inference
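
The monotonicity reasoning that MonaLog builds on can be pictured with a toy polarity-and-substitution example; the sketch below is not MonaLog itself (which works over full parses and a natural-logic proof system), and its tiny lexicon and determiner table are invented for illustration.

```python
# Toy illustration of monotonicity-based inference in the style of natural
# logic: determiners assign an upward (+) or downward (-) polarity to their
# argument positions, and entailments are produced by replacing a word with a
# hypernym in + positions or a hyponym in - positions.
# This is NOT MonaLog's implementation; the lexicon is made up.

HYPERNYM = {"poodle": "dog", "dog": "animal", "run": "move"}
HYPONYM = {"dog": "poodle", "animal": "dog", "move": "run"}

# Polarity of the (subject, predicate) positions for each determiner.
POLARITY = {
    "every": ("-", "+"),  # every dog runs |= every poodle runs; |= every dog moves
    "some":  ("+", "+"),  # some poodle runs |= some dog runs
    "no":    ("-", "-"),  # no animal moves |= no dog runs
}

def entailments(det: str, subj: str, pred: str) -> list[tuple[str, str, str]]:
    """One-step entailments of 'det subj pred' by monotone substitution."""
    subj_pol, pred_pol = POLARITY[det]
    out = []
    subj_sub = (HYPERNYM if subj_pol == "+" else HYPONYM).get(subj)
    pred_sub = (HYPERNYM if pred_pol == "+" else HYPONYM).get(pred)
    if subj_sub:
        out.append((det, subj_sub, pred))
    if pred_sub:
        out.append((det, subj, pred_sub))
    return out

print(entailments("every", "dog", "run"))
# [('every', 'poodle', 'run'), ('every', 'dog', 'move')]
```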

Probing Natural Language Inference Models through Semantic Fragments

3 code implementations • 16 Sep 2019 • Kyle Richardson, Hai Hu, Lawrence S. Moss, Ashish Sabharwal

Our experiments, using a library of 8 such semantic fragments, reveal two remarkable findings: (a) state-of-the-art models, including BERT, that are pre-trained on existing NLI benchmark datasets perform poorly on these new fragments, even though the phenomena probed here are central to the NLI task; (b) with only a few minutes of additional fine-tuning, using a variant of "inoculation", such models can master these fragments while retaining their performance on established NLI benchmarks.

Natural Language Inference
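
The fragment-based probing can be pictured with a toy template generator; the sketch below is only an illustration of the idea, with an invented vocabulary and templates, and is not the authors' generation code or their actual fragments.

```python
# Hedged sketch of how a "semantic fragment" challenge set can be generated
# from templates, in the spirit of the probing setup described above.
# Vocabulary and templates are invented for illustration.
import itertools

NOUNS = ["dog", "cat", "bird"]
VERBS = ["ran", "slept"]

def quantifier_fragment() -> list[dict]:
    """NLI pairs probing the inference 'every N V' => 'some N V'."""
    examples = []
    for noun, verb in itertools.product(NOUNS, VERBS):
        examples.append({
            "premise": f"Every {noun} {verb}.",
            "hypothesis": f"Some {noun} {verb}.",
            "label": "entailment",
        })
        examples.append({
            "premise": f"Some {noun} {verb}.",
            "hypothesis": f"Every {noun} {verb}.",
            "label": "neutral",  # the reverse direction does not follow
        })
    return examples

data = quantifier_fragment()
print(len(data), data[0])
```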

Proceedings Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge

no code implementations • 19 Jul 2019 • Lawrence S. Moss

This is the proceedings of the Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge, held 17-19 July 2019 at the Institut de Recherche en Informatique de Toulouse (IRIT), Toulouse University, Toulouse, France.

Computer Science and Game Theory · Logic in Computer Science

Exploring the Landscape of Relational Syllogistic Logics

no code implementations • 3 Sep 2018 • Alex Kruckman, Lawrence S. Moss

This paper explores relational syllogistic logics, a family of logical systems related to reasoning about relations in extensions of the classical syllogistic.
