Search Results for author: Robert Vacareanu

Found 13 papers, 4 papers with code

PatternRank: Jointly Ranking Patterns and Extractions for Relation Extraction Using Graph-Based Algorithms

no code implementations • PANDL (COLING) 2022 • Robert Vacareanu, Dane Bell, Mihai Surdeanu

In this paper we revisit the direction of using lexico-syntactic patterns for relation extraction instead of today’s ubiquitous neural classifiers.

Relation • Relation Extraction

Neural-Guided Program Synthesis of Information Extraction Rules Using Self-Supervision

no code implementations • PANDL (COLING) 2022 • Enrique Noriega-Atala, Robert Vacareanu, Gus Hahn-Powell, Marco A. Valenzuela-Escárcega

We propose a neural-based approach for rule synthesis designed to help bridge the gap between the interpretability, precision, and maintainability exhibited by rule-based information extraction systems and the scalability and convenience of statistical information extraction systems.

Language Modelling • Program Synthesis

A Human-machine Interface for Few-shot Rule Synthesis for Information Extraction

no code implementations • NAACL (ACL) 2022 • Robert Vacareanu, George C.G. Barbosa, Enrique Noriega-Atala, Gus Hahn-Powell, Rebecca Sharp, Marco A. Valenzuela-Escárcega, Mihai Surdeanu

We propose a system that assists a user in constructing transparent information extraction models, consisting of patterns (or rules) written in a declarative language, through program synthesis. Users of our system can specify their requirements through the use of examples, which are collected with a search interface. The rule-synthesis system proposes rule candidates and the results of applying them on a textual corpus; the user has the option to accept the candidate, request another option, or adjust the examples provided to the system. Through an interactive evaluation, we show that our approach generates high-precision rules even in a 1-shot setting.

Relation Extraction

General Purpose Verification for Chain of Thought Prompting

no code implementations • 30 Apr 2024 • Robert Vacareanu, Anurag Pratik, Evangelia Spiliopoulou, Zheng Qi, Giovanni Paolini, Neha Anna John, Jie Ma, Yassine Benajiba, Miguel Ballesteros

Many of the recent capabilities demonstrated by Large Language Models (LLMs) arise primarily from their ability to exploit contextual information.

From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples

1 code implementation • 11 Apr 2024 • Robert Vacareanu, Vlad-Andrei Negru, Vasile Suciu, Mihai Surdeanu

We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates.

Language Modelling • Large Language Model • +1
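The abstract above describes presenting (x, y) pairs to an LLM as in-context examples and asking it to continue the pattern. A minimal sketch of such a prompt builder, where the "Feature:"/"Output:" template and function name are illustrative assumptions rather than the paper's actual format:

```python
# Sketch of in-context regression prompting: render numeric (x, y) training
# pairs as text, then append the query x so an LLM can predict its y.
# The template below is an assumption, not the paper's exact prompt.

def format_regression_prompt(train_pairs, query_x):
    """Render (x, y) examples as text, ending with an open query for query_x."""
    lines = [f"Feature: {x}\nOutput: {y}" for x, y in train_pairs]
    lines.append(f"Feature: {query_x}\nOutput:")
    return "\n".join(lines)

# Example: points drawn from y = 2x + 1; a capable model should continue
# the pattern and answer 9 for x = 4.
prompt = format_regression_prompt([(1, 3), (2, 5), (3, 7)], 4)
print(prompt)
```

The prompt ends with a bare `Output:` so the model's completion is the predicted value, keeping the setting free of any gradient updates.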

Towards Realistic Few-Shot Relation Extraction: A New Meta Dataset and Evaluation

no code implementations • 5 Apr 2024 • Fahmida Alam, Md Asiful Islam, Robert Vacareanu, Mihai Surdeanu

We introduce a meta dataset for few-shot relation extraction, which includes two datasets derived from the existing supervised relation extraction datasets NYT29 (Takanobu et al., 2019; Nayak and Ng, 2020) and WIKIDATA (Sorokin and Gurevych, 2017), as well as a few-shot form of the TACRED dataset (Sabo et al., 2021).

Relation • Relation Extraction

Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification

no code implementations • 5 Mar 2024 • Robert Vacareanu, Fahmida Alam, Md Asiful Islam, Haris Riaz, Mihai Surdeanu

Human interventions to the rules for the TACRED relation org:parents boost the performance on that relation by as much as 26% relative improvement, without negatively impacting the other relations, and without retraining the semantic matching component.

Few-Shot Relation Classification • Relation • +2

Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference

1 code implementation • 11 Jul 2023 • Sushma Anand Akoju, Robert Vacareanu, Haris Riaz, Eduardo Blanco, Mihai Surdeanu

To this end, we modify the original texts using a set of modifier phrases that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) (MacCartney, 2009).

Natural Language Inference • Negation • +2
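The construction described in the abstract, inserting quantifier and negation modifiers into the original text, can be sketched as follows; the modifier phrases and the subject-prefixing template here are illustrative assumptions, not the paper's actual phrase set:

```python
# Sketch of inserting Natural Logic concept modifiers (quantifiers, negation)
# into a sentence to create compositional NLI variants. The modifier list and
# template are assumptions for illustration only.

MODIFIERS = {
    "universal": "all",      # universal quantifier
    "existential": "some",   # existential quantifier
    "negation": "not all",   # negated universal
}

def modify_subject(sentence, subject, kind):
    """Prefix the first occurrence of the subject with the chosen modifier."""
    return sentence.replace(subject, f"{MODIFIERS[kind]} {subject}", 1)

print(modify_subject("dogs bark loudly", "dogs", "universal"))  # all dogs bark loudly
print(modify_subject("dogs bark loudly", "dogs", "negation"))   # not all dogs bark loudly
```

Pairing each modified premise with the original sentence yields hypothesis pairs whose entailment label depends on the inserted modifier, which is the compositional behavior the dataset is meant to probe.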

A Weak Supervision Approach for Few-Shot Aspect Based Sentiment Analysis

no code implementations • 19 May 2023 • Robert Vacareanu, Siddharth Varia, Kishaloy Halder, Shuai Wang, Giovanni Paolini, Neha Anna John, Miguel Ballesteros, Smaranda Muresan

We explore how weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in aspect-based sentiment analysis (ABSA) tasks.

Aspect-Based Sentiment Analysis • Aspect Extraction • +3

Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis

1 code implementation • 12 Oct 2022 • Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, Dan Roth

Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity.

Aspect-Based Sentiment Analysis • Aspect-Based Sentiment Analysis (ABSA) • +2

An Unsupervised Method for Learning Representations of Multi-word Expressions for Semantic Classification

no code implementations • COLING 2020 • Robert Vacareanu, Marco A. Valenzuela-Escárcega, Rebecca Sharp, Mihai Surdeanu

This paper explores an unsupervised approach to learning a compositional representation function for multi-word expressions (MWEs), and evaluates it on the Tratz dataset, which associates two-word expressions with the semantic relation between the compound constituents (e.g., the label employer is associated with the noun compound government agency) (Tratz, 2011).

Parsing as Tagging

no code implementations • LREC 2020 • Robert Vacareanu, George Caique Gouveia Barbosa, Marco A. Valenzuela-Escárcega, Mihai Surdeanu

For example, for the sentence "John eats cake", the tag to be predicted for the token "cake" is -1 because its head ("eats") occurs one token to the left.

Dependency Parsing • Position • +2
