Search Results for author: Preksha Nema

Found 16 papers, 9 papers with code

On the weak link between importance and prunability of attention heads

no code implementations EMNLP 2020 Aakriti Budhraja, Madhura Pande, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra

Given the success of Transformer-based models, two directions of study have emerged: interpreting the role of individual attention heads and down-sizing the models for efficiency.

ReTAG: Reasoning Aware Table to Analytic Text Generation

no code implementations19 May 2023 Deepanway Ghosal, Preksha Nema, Aravindan Raghuveer

The task of table summarization involves generating text that both succinctly and accurately represents the table or a specific set of highlighted cells within a table.

Data-to-Text Generation Descriptive +2

T-STAR: Truthful Style Transfer using AMR Graph as Intermediate Representation

no code implementations3 Dec 2022 Anubhav Jangra, Preksha Nema, Aravindan Raghuveer

In this work, we study the usefulness of Abstract Meaning Representation (AMR) graph as the intermediate style agnostic representation.

Decoder Sentence +2

A Framework for Rationale Extraction for Deep QA models

no code implementations9 Oct 2021 Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra

As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks which can access a model's rationale for its prediction.

Explanation Generation Question Answering +1

The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT

1 code implementation22 Jan 2021 Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra

There are two main challenges with existing methods for classification: (a) there are no standard scores across studies or across functional roles, and (b) these scores are often average quantities measured across sentences without capturing statistical significance.


Untangle: Critiquing Disentangled Recommendations

no code implementations1 Jan 2021 Preksha Nema, Alexandros Karatzoglou, Filip Radlinski

Untangle gives users control to critique recommendations based on their preferences, without sacrificing recommendation accuracy.

Collaborative Filtering

Towards Transparent and Explainable Attention Models

2 code implementations ACL 2020 Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.
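A minimal sketch of what a diversity-driven penalty over hidden states could look like (illustrative only — mean pairwise cosine similarity across time steps, not the paper's exact objective; `diversity_penalty` is a hypothetical name):

```python
import numpy as np

def diversity_penalty(hidden_states):
    """Mean pairwise cosine similarity across time steps.

    hidden_states: (T, d) array, one hidden representation per time step.
    Lower means more diverse; adding this term to the training loss
    pushes representations at different time steps apart.
    """
    # Normalize each time step's representation to unit length.
    H = hidden_states / np.linalg.norm(hidden_states, axis=1, keepdims=True)
    sim = H @ H.T                         # (T, T) cosine similarities
    T = sim.shape[0]
    off_diag = sim[~np.eye(T, dtype=bool)]  # drop self-similarities
    return off_diag.mean()
```

Perfectly orthogonal hidden states yield a penalty of 0, identical ones a penalty of 1, so minimizing it encourages the diversity the training objective targets.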


Let's Ask Again: Refine Network for Automatic Question Generation

1 code implementation IJCNLP 2019 Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

It is desired that the generated question should be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer.

Decoder Question Generation +1

ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions

1 code implementation ICLR 2018 Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra

Specifically, it has gates which decide whether an option can be eliminated given the (passage, question) pair; if so, it tries to make the passage representation orthogonal to the eliminated option (akin to ignoring the portions of the passage corresponding to that option).

Multiple-choice Reading Comprehension
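The orthogonalization step described above can be sketched as a vector projection (an illustrative simplification, assuming single-vector representations; the model's actual gating is learned):

```python
import numpy as np

def orthogonalize(passage, option, eliminate):
    """If `eliminate` is True, remove the component of the passage
    representation that lies along the eliminated option's vector,
    so the two become orthogonal."""
    if not eliminate:
        return passage
    proj = (passage @ option) / (option @ option) * option
    return passage - proj
```

After elimination, the dot product between the passage representation and the eliminated option is zero, which is the "ignoring" effect the gate aims for.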

Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples

no code implementations4 Apr 2019 Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra

We believe that the non-adversarial dataset created as a part of this work would complement the research on adversarial evaluation and give a more realistic assessment of the ability of RC models.

Reading Comprehension

Towards a Better Metric for Evaluating Question Generation Systems

1 code implementation EMNLP 2018 Preksha Nema, Mitesh M. Khapra

In particular, it is important to verify whether such metrics used for evaluating AQG systems focus on answerability of the generated question by preferring questions which contain all relevant information such as question type (Wh-types), entities, relations, etc.

Knowledge Graphs Question Generation +1

Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization

2 code implementations NAACL 2018 Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Mitesh M. Khapra

For example, while generating descriptions from a table, a human would attend to information at two levels: (i) the fields (macro level) and (ii) the values within the field (micro level).
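One way a two-level (bifocal) attention could combine macro and micro weights is as a product of a field-level distribution and per-field value-level distributions (a hedged sketch, not the paper's exact formulation; all names here are hypothetical):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bifocal_attention(query, field_embs, value_embs):
    """Two-level attention sketch.

    query:      (d,) decoder query vector
    field_embs: (F, d) one embedding per field (macro level)
    value_embs: list of F arrays, each (V_f, d) (micro level)

    Returns one weight per value, where a value's weight is its
    field's macro weight times its within-field micro weight.
    """
    macro = softmax(field_embs @ query)          # attention over fields
    weights = []
    for f in range(len(field_embs)):
        micro = softmax(value_embs[f] @ query)   # attention over values in field f
        weights.append(macro[f] * micro)
    w = np.concatenate(weights)
    return w / w.sum()                           # renormalize for safety
```

Because the macro weights and each micro distribution each sum to one, the combined weights already form a distribution over all values, concentrating mass on values inside the most relevant fields.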
