no code implementations • EMNLP 2020 • Aakriti Budhraja, Madhura Pande, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
Given the success of Transformer-based models, two directions of study have emerged: interpreting the role of individual attention heads and down-sizing the models for efficiency.
no code implementations • 3 Dec 2022 • Anubhav Jangra, Preksha Nema, Aravindan Raghuveer
In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as an intermediate style-agnostic representation.
no code implementations • 9 Oct 2021 • Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks which can access a model's rationale for its prediction.
1 code implementation • 22 Jan 2021 • Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
There are two main challenges with existing methods for classification: (a) there are no standard scores across studies or across functional roles, and (b) these scores are often average quantities measured across sentences without capturing statistical significance.
no code implementations • 1 Jan 2021 • Preksha Nema, Alexandros Karatzoglou, Filip Radlinski
Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy.
1 code implementation • EMNLP 2020 • Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
BERT and its variants have achieved state-of-the-art performance in various NLP tasks.
no code implementations • 13 Aug 2020 • Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
We show that a larger fraction of heads have a locality bias as compared to a syntactic bias.
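One simple way to quantify such a locality bias (a sketch, not the paper's exact score) is the mean distance between each query position and the positions its attention mass lands on; heads with a locality bias concentrate attention on nearby tokens and so score low:

```python
import numpy as np

def mean_attended_distance(attn: np.ndarray) -> float:
    """attn: (T, T) attention matrix; each row is a query position's
    distribution over key positions (rows sum to 1). A small value
    indicates a locality bias: the head attends mostly to nearby tokens."""
    T = attn.shape[0]
    # dist[i, j] = |i - j|, the token distance between query i and key j.
    dist = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    return float((attn * dist).sum(axis=1).mean())

# A head that always attends to the immediately preceding token:
T = 5
local = np.zeros((T, T))
local[0, 0] = 1.0
for i in range(1, T):
    local[i, i - 1] = 1.0
print(mean_attended_distance(local))  # 0.8 (rows 1..4 at distance 1, row 0 at 0)
```

A syntactically biased head, by contrast, would place mass on syntactically related tokens regardless of distance, yielding a larger value.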
2 code implementations • ACL 2020 • Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran
To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.
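The diversity objective can be sketched as a penalty on the pairwise similarity of the hidden states across time steps; the snippet below is a minimal NumPy illustration of such a penalty (mean pairwise cosine similarity), not the paper's exact training objective:

```python
import numpy as np

def diversity_penalty(hidden_states: np.ndarray) -> float:
    """Mean pairwise cosine similarity of hidden states across time steps.

    hidden_states: (T, d) array, one hidden state per time step. Adding
    this term to the loss discourages the LSTM from collapsing all time
    steps onto similar representations, so attention over diverse states
    is more meaningful.
    """
    norms = np.linalg.norm(hidden_states, axis=1, keepdims=True)
    unit = hidden_states / np.clip(norms, 1e-8, None)
    sims = unit @ unit.T                     # (T, T) cosine similarities
    T = len(hidden_states)
    off_diag = sims[~np.eye(T, dtype=bool)]  # drop self-similarities
    return float(off_diag.mean())

# Identical states are maximally penalised; orthogonal states are not.
identical = np.ones((3, 4))
orthogonal = np.eye(3, 4)
print(round(diversity_penalty(identical), 4))   # 1.0
print(round(diversity_penalty(orthogonal), 4))  # 0.0
```

Minimising this penalty pushes the hidden representations apart, so the attention weights over them become harder to fake without changing the model's behaviour.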
1 code implementation • IJCNLP 2019 • Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran
It is desired that the generated question be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer.
1 code implementation • ICLR 2018 • Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
Specifically, it has gates that decide whether an option can be eliminated given the passage-question pair; if so, it tries to make the passage representation orthogonal to the eliminated option (akin to ignoring the portions of the passage corresponding to that option).
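The orthogonalisation step amounts to projecting the eliminated option out of the passage vector. A minimal sketch (illustrative vectors, not the model's learned representations):

```python
import numpy as np

def eliminate_option(passage: np.ndarray, option: np.ndarray) -> np.ndarray:
    """Make the passage representation orthogonal to an eliminated option
    by subtracting the passage's projection onto the option vector."""
    coeff = (passage @ option) / (option @ option)
    return passage - coeff * option

passage = np.array([3.0, 4.0])
option = np.array([1.0, 0.0])   # eliminated option direction
new_passage = eliminate_option(passage, option)
print(new_passage)              # [0. 4.]
print(float(new_passage @ option))  # 0.0 — no component along the option
```

After the projection the passage representation carries no information along the eliminated option's direction, which is the "ignore that portion" effect described above.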
no code implementations • 4 Apr 2019 • Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
We believe that the non-adversarial dataset created as a part of this work would complement the research on adversarial evaluation and give a more realistic assessment of the ability of RC models.
1 code implementation • EMNLP 2018 • Preksha Nema, Mitesh M. Khapra
In particular, it is important to verify whether the metrics used for evaluating AQG systems focus on the answerability of the generated question, i.e., whether they prefer questions that contain all relevant information such as the question type (Wh-type), entities, and relations.
2 code implementations • NAACL 2018 • Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Mitesh M. Khapra
For example, while generating descriptions from a table, a human would attend to information at two levels: (i) the fields (macro level) and (ii) the values within the field (micro level).
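This two-level attention can be sketched as micro attention over the values within each field, followed by macro attention over the resulting field summaries. The snippet below is a simplified illustration under that reading (randomly embedded fields, a single shared query vector), not the paper's full architecture:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def bifocal_attention(query: np.ndarray, fields: dict):
    """Two-level attention over structured input.

    fields maps a field name (macro level) to an (n_values, d) array of
    value embeddings (micro level). Micro attention summarises each
    field's values; macro attention then weighs the field summaries.
    """
    names, summaries = [], []
    for name, values in fields.items():
        micro_w = softmax(values @ query)    # attention over values (micro)
        summaries.append(micro_w @ values)   # per-field summary vector
        names.append(name)
    summaries = np.stack(summaries)
    macro_w = softmax(summaries @ query)     # attention over fields (macro)
    context = macro_w @ summaries            # final context vector
    return context, dict(zip(names, macro_w))

rng = np.random.default_rng(0)
query = rng.normal(size=4)
fields = {"name": rng.normal(size=(2, 4)),
          "occupation": rng.normal(size=(3, 4))}
context, field_weights = bifocal_attention(query, fields)
print(context.shape)                          # (4,)
print(round(sum(field_weights.values()), 6))  # 1.0
```

The macro weights tell the decoder which field to talk about next, while the micro weights pick out which value within that field to verbalise.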
1 code implementation • NAACL 2018 • Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M. Khapra, Shreyas Shetty
Structured data summarization involves generation of natural language summaries from structured input data.
2 code implementations • ACL 2017 • Preksha Nema, Mitesh Khapra, Anirban Laha, Balaraman Ravindran
Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion.
Ranked #2 on Query-Based Extractive Summarization on Debatepedia