Search Results for author: Harsh Trivedi

Found 13 papers, 11 papers with code

IrEne-viz: Visualizing Energy Consumption of Transformer Models

1 code implementation EMNLP (ACL) 2021 Yash Kumar Lal, Reetu Singh, Harsh Trivedi, Qingqing Cao, Aruna Balasubramanian, Niranjan Balasubramanian

IrEne is an energy prediction system that accurately predicts the interpretable inference energy consumption of a wide range of Transformer-based NLP models.

Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions

1 code implementation 20 Dec 2022 Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal

While using the question to retrieve relevant text from an external knowledge source helps LLMs, we observe that this one-step retrieve-and-read approach is insufficient for multi-step QA.

Hallucination · Question Answering +1
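
The interleaving idea described in the snippet lends itself to a short sketch: retrieve with the question first, then alternate between generating one chain-of-thought sentence and retrieving again with that sentence as the query. This is only an illustration of the loop; `generate_next_cot_sentence` and `retrieve` are hypothetical stand-ins for an LLM call and a retriever, not the paper's released code.

```python
# Minimal sketch of interleaved retrieval and chain-of-thought reasoning.
# `generate_next_cot_sentence` and `retrieve` are hypothetical stand-ins.

def generate_next_cot_sentence(question: str, paragraphs: list[str], cot: list[str]) -> str:
    """Hypothetical LLM call: produce the next reasoning sentence."""
    raise NotImplementedError

def retrieve(query: str, k: int = 2) -> list[str]:
    """Hypothetical retriever: return top-k paragraphs for the query."""
    raise NotImplementedError

def interleaved_qa(question: str, max_steps: int = 6) -> tuple[list[str], list[str]]:
    paragraphs = retrieve(question)          # step 1: one-step retrieval with the question
    cot: list[str] = []
    for _ in range(max_steps):
        sentence = generate_next_cot_sentence(question, paragraphs, cot)
        cot.append(sentence)
        if "answer is" in sentence.lower():  # simple stopping heuristic
            break
        # step 2: use the latest reasoning sentence as the next retrieval query
        paragraphs.extend(p for p in retrieve(sentence) if p not in paragraphs)
    return cot, paragraphs
```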

Decomposed Prompting: A Modular Approach for Solving Complex Tasks

1 code implementation 5 Oct 2022 Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal

On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks.

Information Retrieval · Retrieval
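
A rough sketch of the modular idea: a decomposer prompt emits a plan of named sub-tasks, and each name is routed to a dedicated handler, which may be another prompt or plain code for simple symbolic steps. The `llm` stub, the handler names, and the plan format are illustrative assumptions, not the paper's actual prompt library.

```python
# Sketch of decomposed prompting: a decomposer routes sub-tasks to handlers.
from typing import Callable

def llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

HANDLERS: dict[str, Callable[[str], str]] = {}

def handler(name: str):
    """Register a sub-task handler under a short name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        HANDLERS[name] = fn
        return fn
    return register

@handler("split")
def split_items(text: str) -> str:
    # A symbolic sub-task simple enough to solve without the LLM.
    return " ".join(part.strip() for part in text.split(","))

@handler("answer")
def answer(question: str) -> str:
    return llm(f"Answer concisely: {question}")

def solve(task: str) -> str:
    # The decomposer returns a plan, one "name: input" line per sub-task;
    # later inputs may reference earlier outputs via {0}, {1}, ... placeholders.
    plan = llm(f"Decompose into sub-tasks (name: input), one per line:\n{task}")
    outputs: list[str] = []
    for line in plan.splitlines():
        name, _, sub_input = line.partition(":")
        sub_input = sub_input.strip().format(*outputs)   # substitute earlier results
        outputs.append(HANDLERS[name.strip()](sub_input))
    return outputs[-1]
```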

Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts

1 code implementation 25 May 2022 Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal

We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion.

Question Answering

Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions

no code implementations LNLS (ACL) 2022 Alicia Parrish, Harsh Trivedi, Ethan Perez, Angelica Chen, Nikita Nangia, Jason Phang, Samuel R. Bowman

We use long contexts -- humans familiar with the context write convincing explanations for pre-selected correct and incorrect answers, and we test if those explanations allow humans who have not read the full context to more accurately determine the correct answer.

Multiple-choice · Reading Comprehension

Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension

1 code implementation EMNLP 2021 Naoya Inoue, Harsh Trivedi, Steven Sinha, Niranjan Balasubramanian, Kentaro Inui

Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system.

Multi-Hop Reading Comprehension
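
The described pipeline is easy to picture as two stages: generate a question-focused abstractive summary of the input paragraphs, then hand only that concise summary to a reading-comprehension (RC) model. The sketch below assumes hypothetical `summarize` and `rc_model` components; they are stand-ins, not the authors' released models.

```python
# Sketch of a summarize-then-answer pipeline for multi-hop RC.

def summarize(question: str, paragraphs: list[str]) -> str:
    """Hypothetical abstractive summarizer conditioned on the question."""
    raise NotImplementedError

def rc_model(question: str, context: str) -> str:
    """Hypothetical RC model: produce an answer from the given context."""
    raise NotImplementedError

def summarize_then_answer(question: str, paragraphs: list[str]) -> tuple[str, str]:
    explanation = summarize(question, paragraphs)   # concise, question-focused summary
    answer = rc_model(question, explanation)        # answer using only the summary
    return answer, explanation                      # the summary doubles as the explanation
```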

MuSiQue: Multihop Questions via Single-hop Question Composition

1 code implementation 2 Aug 2021 Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal

Multihop reasoning remains an elusive goal as existing multihop benchmarks are known to be largely solvable via shortcuts.

Multi-hop Question Answering · Question Answering
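
The title's bottom-up composition idea can be sketched directly: two single-hop questions compose into a 2-hop question when the answer to the first hop appears as an entity in the second hop's question, which blocks single-hop shortcuts. Field names and the bridging phrasing below are illustrative, not the dataset's actual schema.

```python
# Toy sketch of composing two single-hop questions into one 2-hop question.
from dataclasses import dataclass

@dataclass
class SingleHopQA:
    question: str
    answer: str

def compose_two_hop(q1: SingleHopQA, q2: SingleHopQA) -> SingleHopQA | None:
    """Compose q1 -> q2 if q1's answer is the bridging entity in q2's question."""
    if q1.answer not in q2.question:
        return None  # not connectable: no shared entity to bridge on
    bridged = q2.question.replace(q1.answer, f"the answer to '{q1.question}'")
    return SingleHopQA(question=bridged, answer=q2.answer)

# Example composition (toy data):
q1 = SingleHopQA("Which country is Mount Everest in?", "Nepal")
q2 = SingleHopQA("What is the capital of Nepal?", "Kathmandu")
composed = compose_two_hop(q1, q2)
# composed.question: "What is the capital of the answer to 'Which country is Mount Everest in?'?"
```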

IrEne: Interpretable Energy Prediction for Transformers

1 code implementation ACL 2021 Qingqing Cao, Yash Kumar Lal, Harsh Trivedi, Aruna Balasubramanian, Niranjan Balasubramanian

We present IrEne, an interpretable and extensible energy prediction system that accurately predicts the inference energy consumption of a wide range of Transformer-based NLP models.
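
One way to read "interpretable" here is that energy is predicted per component rather than as a single opaque number. The sketch below assumes a tree of model modules whose leaf energies come from a regressor over resource features and whose parents aggregate their children; the feature names and the linear model are assumptions for illustration, not IrEne's actual feature set or learned predictor.

```python
# Rough sketch of tree-structured, per-module energy prediction.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    features: dict[str, float] = field(default_factory=dict)  # e.g. flops, memory traffic
    children: list["Node"] = field(default_factory=list)

def predict_leaf_energy(features: dict[str, float]) -> float:
    # Assumed linear model over resource features (coefficients are made up).
    return 1e-9 * features.get("flops", 0.0) + 5e-9 * features.get("mem_bytes", 0.0)

def predict_energy(root: Node) -> dict[str, float]:
    """Return per-node energy predictions, so the total stays interpretable."""
    report: dict[str, float] = {}
    def visit(node: Node) -> float:
        if not node.children:
            energy = predict_leaf_energy(node.features)
        else:
            energy = sum(visit(child) for child in node.children)
        report[node.name] = energy
        return energy
    visit(root)
    return report
```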

What Ingredients Make for an Effective Crowdsourcing Protocol for Difficult NLU Data Collection Tasks?

1 code implementation ACL 2021 Nikita Nangia, Saku Sugawara, Harsh Trivedi, Alex Warstadt, Clara Vania, Samuel R. Bowman

However, we find that training crowdworkers, and then using an iterative process of collecting data, sending feedback, and qualifying workers based on expert judgments is an effective means of collecting challenging data.

Multiple-choice · Natural Language Understanding +1

Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning

1 code implementation EMNLP 2020 Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal

For a recent large-scale model (XLNet), we show that only 18 points out of its answer F1 score of 72 on HotpotQA are obtained through multifact reasoning, roughly the same as that of a simpler RNN baseline.

Multi-hop Question Answering · Question Answering +1
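
The measurement behind the snippet can be illustrated with a small probe: score the model when it sees only one supporting paragraph at a time; any credit earned that way cannot come from connecting multiple facts. This is a simplified sketch in the spirit of the described probe, with `qa_model` as a hypothetical stand-in for the system being measured and a standard token-level F1 as the metric.

```python
# Sketch of a disconnected-reasoning probe for multi-hop QA.

def qa_model(question: str, paragraphs: list[str]) -> str:
    """Hypothetical QA model under probing."""
    raise NotImplementedError

def f1(prediction: str, gold: str) -> float:
    """Token-level F1 between predicted and gold answers."""
    pred, ref = prediction.split(), gold.split()
    common = sum(min(pred.count(tok), ref.count(tok)) for tok in set(pred))
    if not common:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def disconnected_score(question: str, supporting: list[str], gold: str) -> float:
    """Best score achievable from any single supporting paragraph alone."""
    return max(f1(qa_model(question, [p]), gold) for p in supporting)
```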
