Search Results for author: Varsha Embar

Found 4 papers, 2 papers with code

Evaluating Pretrained Transformer Models for Entity Linking in Task-Oriented Dialog

1 code implementation · ICON 2021 · Sai Muralidhar Jayanthi, Varsha Embar, Karthik Raghunathan

The wide applicability of pretrained transformer models (PTMs) for natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored.

Entity Linking · Text Similarity
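The paper above frames entity linking over short, noisy phrases as a text-similarity problem: embed the user's mention and each candidate entity name with a pretrained transformer, then pick the closest candidate. A minimal sketch of that framing follows; the checkpoint, mean-pooling strategy, and example mention are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: entity linking as text similarity with a pretrained transformer.
# Assumptions (not from the paper): bert-base-uncased, mean pooling, cosine score.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Tokenize a batch of short phrases and mean-pool the last hidden states,
    # masking out padding tokens so they don't dilute the average.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)         # (batch, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (batch, dim)

mention = ["tom cruse movies"]  # hypothetical noisy phrase from a dialog turn
candidates = ["Tom Cruise", "Tom Crean", "Cruise (2018 film)"]

# Cosine similarity between the mention embedding and each candidate embedding.
scores = torch.nn.functional.cosine_similarity(embed(mention), embed(candidates))
print(candidates[int(scores.argmax())])  # highest-scoring candidate entity
```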

Towards Inference-Oriented Reading Comprehension: ParallelQA

no code implementations · WS 2018 · Soumya Wadhwa, Varsha Embar, Matthias Grabmair, Eric Nyberg

In this paper, we investigate the tendency of end-to-end neural Machine Reading Comprehension (MRC) models to match shallow patterns rather than perform inference-oriented reasoning on RC benchmarks.

Machine Reading Comprehension
