Search Results for author: Adithya Kulkarni

Found 6 papers, 4 papers with code

Empirical Analysis for Unsupervised Universal Dependency Parse Tree Aggregation

no code implementations28 Mar 2024 Adithya Kulkarni, Oliver Eulenstein, Qi Li

Dependency parsing is an essential task in NLP, and the quality of dependency parsers is crucial for many downstream tasks.

Dependency Parsing

An Empirical Study of Using ChatGPT for Fact Verification Task

no code implementations11 Nov 2023 Mohna Chakraborty, Adithya Kulkarni, Qi Li

(2) How do different prompts perform when ChatGPT is used for fact verification tasks?

Fact Verification

Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts

1 code implementation25 May 2023 Mohna Chakraborty, Adithya Kulkarni, Qi Li

We empirically demonstrate that the top-ranked prompts are high-quality and significantly outperform the base prompt and the prompts generated using few-shot learning for the binary sentence-level sentiment classification task.

Classification · Few-Shot Learning +3

CPTAM: Constituency Parse Tree Aggregation Method

1 code implementation19 Jan 2022 Adithya Kulkarni, Nasim Sabetpour, Alexey Markin, Oliver Eulenstein, Qi Li

This paper adopts the truth discovery idea to aggregate constituency parse trees from different parsers by estimating their reliability in the absence of ground truth.

Constituency Parsing · Sentence
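
For intuition only, here is a minimal sketch of the truth-discovery idea described in the CPTAM snippet above, assuming each parser's constituency tree is reduced to a set of labeled spans: parser reliabilities are estimated from agreement with the current consensus, and spans are re-selected by reliability-weighted voting. This is not the paper's actual algorithm; all names and thresholds below are illustrative.

from collections import defaultdict

def aggregate_spans(parser_spans, n_iters=10):
    """parser_spans: dict mapping parser name -> set of (start, end, label) spans."""
    reliability = {p: 1.0 for p in parser_spans}      # start with uniform trust
    consensus = set.union(*parser_spans.values())     # initial aggregate: every proposed span
    for _ in range(n_iters):
        # Reliability-weighted vote for every candidate span.
        votes = defaultdict(float)
        for parser, spans in parser_spans.items():
            for span in spans:
                votes[span] += reliability[parser]
        threshold = 0.5 * sum(reliability.values())
        consensus = {s for s, v in votes.items() if v >= threshold}
        # Re-estimate each parser's reliability as its F1 against the consensus.
        for parser, spans in parser_spans.items():
            overlap = len(spans & consensus)
            precision = overlap / len(spans) if spans else 0.0
            recall = overlap / len(consensus) if consensus else 0.0
            reliability[parser] = (2 * precision * recall / (precision + recall)
                                   if precision + recall else 1e-6)
    return consensus, reliability

# Example: three parsers disagree on how to bracket a five-token sentence.
trees = {
    "parserA": {(0, 2, "NP"), (2, 5, "VP"), (0, 5, "S")},
    "parserB": {(0, 2, "NP"), (2, 5, "VP"), (0, 5, "S")},
    "parserC": {(0, 3, "NP"), (3, 5, "VP"), (0, 5, "S")},
}
spans, weights = aggregate_spans(trees)
print(sorted(spans))   # majority-supported spans survive
print(weights)         # parserC ends up with a lower estimated reliability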

Truth Discovery in Sequence Labels from Crowds

1 code implementation9 Sep 2021 Nasim Sabetpour, Adithya Kulkarni, Sihong Xie, Qi Li

The proposed Aggregation method for Sequential Labels from Crowds ($AggSLC$) jointly considers the characteristics of sequential labeling tasks, workers' reliabilities, and advanced machine learning techniques.

Named Entity Recognition +2
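
Likewise, a minimal hypothetical sketch of the aggregation problem AggSLC addresses: several crowd workers tag the same token sequence with BIO labels, and the consensus sequence and worker reliabilities are re-estimated in turn. AggSLC's actual optimization-based formulation is more involved; the names below are illustrative only.

from collections import defaultdict

def aggregate_sequences(worker_labels, n_iters=10):
    """worker_labels: dict of worker -> list of BIO tags for the same sentence."""
    length = len(next(iter(worker_labels.values())))
    reliability = {w: 1.0 for w in worker_labels}
    consensus = ["O"] * length
    for _ in range(n_iters):
        # Reliability-weighted vote at every token position.
        consensus = []
        for i in range(length):
            votes = defaultdict(float)
            for worker, tags in worker_labels.items():
                votes[tags[i]] += reliability[worker]
            consensus.append(max(votes, key=votes.get))
        # A worker's reliability is their token-level accuracy w.r.t. the consensus.
        for worker, tags in worker_labels.items():
            agree = sum(t == c for t, c in zip(tags, consensus))
            reliability[worker] = max(agree / length, 1e-6)
    return consensus, reliability

# Example: three workers annotate "Barack Obama visited Paris".
annotations = {
    "worker1": ["B-PER", "I-PER", "O", "B-LOC"],
    "worker2": ["B-PER", "I-PER", "O", "O"],
    "worker3": ["B-PER", "O",     "O", "B-LOC"],
}
labels, weights = aggregate_sequences(annotations)
print(labels)    # ['B-PER', 'I-PER', 'O', 'B-LOC']
print(weights)   # less consistent workers receive lower weights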
