Search Results for author: Dananjay Srinivas

Found 6 papers, 2 papers with code

Show, Don't Tell: Learning Reward Machines from Demonstrations for Reinforcement Learning-Based Cardiac Pacemaker Synthesis

no code implementations • 4 Nov 2024 • John Komp, Dananjay Srinivas, Maria Pacheco, Ashutosh Trivedi

We explore the possibility of learning correctness specifications from such labeled demonstrations in the form of a reward machine, and of training an RL agent to synthesize a cardiac pacemaker based on the resulting reward machine.

Reinforcement Learning (RL)
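The reward-machine formalism the paper builds on can be sketched as a small finite-state transducer over event labels. This is an illustrative sketch only: the state names, event labels, and reward values below are hypothetical, not the machine learned in the paper.

```python
# Minimal sketch of a reward machine: a finite-state transducer that
# reads event labels and emits rewards. All names here are hypothetical.

class RewardMachine:
    def __init__(self, initial, transitions):
        # transitions: {(state, label): (next_state, reward)}
        self.state = initial
        self.transitions = transitions

    def step(self, label):
        """Advance on an observed event label; return the emitted reward."""
        self.state, reward = self.transitions[(self.state, label)]
        return reward

# A toy pacing-style specification: after an atrial event (A), a
# ventricular event (V) must follow before the next A.
rm = RewardMachine("wait_A", {
    ("wait_A", "A"): ("wait_V", 0.0),   # atrial event observed
    ("wait_A", "V"): ("wait_A", -1.0),  # V without a preceding A: penalize
    ("wait_V", "V"): ("wait_A", 1.0),   # correct A-then-V sequence: reward
    ("wait_V", "A"): ("wait_V", -1.0),  # repeated A before V: penalize
})

rewards = [rm.step(e) for e in ["A", "V", "A", "A", "V"]]
```

An RL agent trained against such a machine receives its reward from the machine's current state and transition, rather than directly from the raw environment signal.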

All Entities are Not Created Equal: Examining the Long Tail for Fine-Grained Entity Typing

no code implementations • 22 Oct 2024 • Advait Deshmukh, Ashwin Umadi, Dananjay Srinivas, Maria Leonor Pacheco

Pre-trained language models (PLMs) are trained on large amounts of data, which helps capture world knowledge alongside linguistic competence.

Entity Typing • World Knowledge

On the Potential and Limitations of Few-Shot In-Context Learning to Generate Metamorphic Specifications for Tax Preparation Software

no code implementations • 20 Nov 2023 • Dananjay Srinivas, Rohan Das, Saeid Tizpaz-Niari, Ashutosh Trivedi, Maria Leonor Pacheco

Due to the ever-increasing complexity of income tax laws in the United States, the number of US taxpayers filing their taxes using tax preparation software (henceforth, tax software) continues to increase.

In-Context Learning
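A metamorphic specification of the kind this paper targets relates outputs of the software under test across transformed inputs, without needing a ground-truth oracle. The sketch below shows one generic relation (tax owed should not decrease as taxable income increases, all else equal); the toy bracket schedule and the relation itself are illustrative assumptions, not the specifications generated in the paper.

```python
# Illustrative sketch of a metamorphic relation for tax software.
# The two-bracket tax function is a hypothetical stand-in for the
# software under test; real US tax rules are far more complex.

def toy_tax(income):
    """Hypothetical two-bracket tax computation."""
    if income <= 10_000:
        return 0.10 * income
    return 0.10 * 10_000 + 0.20 * (income - 10_000)

def monotonicity_holds(tax_fn, incomes):
    """Metamorphic check: for any incomes x <= y, tax(x) <= tax(y)."""
    return all(tax_fn(x) <= tax_fn(y)
               for x in incomes for y in incomes if x <= y)

ok = monotonicity_holds(toy_tax, [0, 5_000, 10_000, 20_000, 50_000])
```

Few-shot in-context learning, as studied in the paper, would prompt a language model to produce relations like `monotonicity_holds` directly from natural-language descriptions of the tax law.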

Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion

1 code implementation • 12 Oct 2022 • Wei-Jen Ko, Yating Wu, Cutter Dalton, Dananjay Srinivas, Greg Durrett, Junyi Jessy Li

Human evaluation results show that QUD dependency parsing is possible for language models trained with this crowdsourced, generalizable annotation scheme.

Dependency Parsing • Question Answering • +1
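The QUD dependency structure parsed in this paper can be pictured as each later sentence attaching to an earlier anchor sentence via the implicit question it answers. A minimal sketch, with hypothetical sentences and questions rather than data from the paper:

```python
# Sketch of a QUD (Questions Under Discussion) dependency structure:
# each non-initial sentence attaches to an earlier anchor sentence
# through the question it answers. Content here is hypothetical.

document = [
    "The city approved a new transit plan.",     # sentence 0
    "It adds three light-rail lines by 2030.",   # sentence 1
    "Funding comes from a regional sales tax.",  # sentence 2
]

# child sentence index -> (anchor sentence index, eliciting question)
qud_edges = {
    1: (0, "What does the plan include?"),
    2: (0, "How will the plan be paid for?"),
}

# Anchors must precede their children, so the edges form a
# backward-pointing tree over the document.
valid = all(anchor < child for child, (anchor, _) in qud_edges.items())
```

A parser for this scheme predicts both the anchor attachment and the free-form question on each edge.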
