Search Results for author: Deepak Ramachandran

Found 11 papers, 4 papers with code

TaskLAMA: Probing the Complex Task Understanding of Language Models

no code implementations • 29 Aug 2023 • Quan Yuan, Mehran Kazemi, Xin Xu, Isaac Noble, Vaiva Imbrasaite, Deepak Ramachandran

Our experiments reveal that LLMs are able to decompose complex tasks into individual steps effectively, with a relative improvement of 15% to 280% over the best baseline.
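A minimal sketch of the kind of step decomposition this benchmark probes, assuming a generic `complete(prompt)` text-completion function (a hypothetical stand-in, stubbed here so the script runs; not part of TaskLAMA):

```python
# Hypothetical sketch: prompting an LLM to decompose a task into steps.
# `complete` is a placeholder for any text-completion API, stubbed so
# the example runs end to end.

def complete(prompt: str) -> str:
    # Stub response standing in for a real LLM call.
    return "1. Choose a date\n2. Book a venue\n3. Send invitations"

def decompose(task: str) -> list[str]:
    prompt = f"List the steps needed to accomplish the task: {task}\n"
    raw = complete(prompt)
    # Strip the "N." prefixes from the model output.
    return [line.split(".", 1)[1].strip()
            for line in raw.splitlines() if "." in line]

print(decompose("Plan a birthday party"))
# ['Choose a date', 'Book a venue', 'Send invitations']
```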

BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information

no code implementations • 13 Jun 2023 • Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, Deepak Ramachandran

One widely applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the source with higher preference.
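A toy sketch of this preference-based conflict resolution: when two sources assert contradictory facts, keep the claim from the higher-preference source. The ranking and fact encoding below are illustrative assumptions, not the dataset's format:

```python
# Resolve contradictory facts by source preference: higher rank wins.
# Ranking and fact encoding are illustrative, not BoardgameQA's format.

preference = {"rulebook": 2, "forum_post": 1}  # higher = more trusted

facts = [
    ("rulebook", "bishop moves diagonally", True),
    ("forum_post", "bishop moves diagonally", False),
]

resolved = {}
for source, claim, value in facts:
    rank = preference[source]
    if claim not in resolved or rank > resolved[claim][0]:
        resolved[claim] = (rank, value)

for claim, (rank, value) in resolved.items():
    print(f"{claim}: {value}")  # bishop moves diagonally: True
```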

Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play

no code implementations • 11 Feb 2023 • Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran

Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but underperform in under-represented population subgroups, especially when group distributions in the long-tailed training data are imbalanced.

Active Learning, Fairness
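The gap described in the abstract above can be made concrete with a per-group evaluation: average accuracy looks fine while the worst group lags. A toy sketch with fabricated predictions, not the paper's method or data:

```python
# Toy illustration of average vs. worst-group accuracy under group
# imbalance. Labels and predictions are fabricated for the example.
from collections import defaultdict

# (group, true label, predicted label); group "b" is under-represented.
examples = [("a", 1, 1)] * 90 + [("a", 0, 0)] * 5 + \
           [("b", 1, 0)] * 4 + [("b", 1, 1)] * 1

correct, total = defaultdict(int), defaultdict(int)
for group, y, y_hat in examples:
    total[group] += 1
    correct[group] += int(y == y_hat)

avg = sum(correct.values()) / sum(total.values())
per_group = {g: correct[g] / total[g] for g in total}
print(f"average accuracy:     {avg:.2f}")                      # 0.96
print(f"worst-group accuracy: {min(per_group.values()):.2f}")  # 0.20
```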

Understanding Finetuning for Factual Knowledge Extraction from Language Models

no code implementations • 26 Jan 2023 • Mehran Kazemi, Sid Mittal, Deepak Ramachandran

Recently, it has been shown that finetuning LMs on a set of factual knowledge makes them produce better answers to queries from a different set, thus making finetuned LMs a good candidate for knowledge extraction and, consequently, knowledge graph construction.

Graph Construction
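A sketch of the setup the abstract alludes to: turning knowledge triples into (query, answer) pairs for finetuning. The triples and templates below are illustrative assumptions, not the paper's dataset:

```python
# Build (query, answer) finetuning pairs from knowledge triples.
# Triples and templates are illustrative, not the paper's data.

triples = [
    ("Paris", "capital_of", "France"),
    ("Everest", "located_in", "Nepal"),
]

templates = {
    "capital_of": "{s} is the capital of what country?",
    "located_in": "In which country is {s} located?",
}

pairs = [(templates[r].format(s=s), o) for s, r, o in triples]
for query, answer in pairs:
    print(f"Q: {query}  A: {answer}")
```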

LAMBADA: Backward Chaining for Automated Reasoning in Natural Language

no code implementations • 20 Dec 2022 • Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran

Remarkable progress has been made on automated reasoning with natural text, by using Language Models (LMs) and methods such as Chain-of-Thought and Selection-Inference.

LAMBADA, Logical Reasoning
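For reference, classical backward chaining, the procedure LAMBADA adapts to natural-language facts and rules via an LM: to prove a goal, find a rule concluding it and recursively prove its premises. The symbolic Horn-rule encoding below is only an illustration, not the paper's natural-language setting:

```python
# Backward chaining over propositional Horn rules: to prove a goal,
# find a rule whose conclusion matches it and recursively prove the
# premises. Facts and rules here are illustrative.

facts = {"has_feathers", "lays_eggs"}
rules = [({"has_feathers", "lays_eggs"}, "is_bird"),
         ({"is_bird"}, "can_fly")]

def prove(goal: str, depth: int = 0) -> bool:
    if depth > 10:      # guard against cyclic rule sets
        return False
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, depth + 1) for p in premises):
            return True
    return False

print(prove("can_fly"))  # True
```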

Tackling Provably Hard Representative Selection via Graph Neural Networks

1 code implementation • 20 May 2022 • Mehran Kazemi, Anton Tsitsulin, Hossein Esfandiari, Mohammadhossein Bateni, Deepak Ramachandran, Bryan Perozzi, Vahab Mirrokni

Representative Selection (RS) is the problem of finding a small subset of exemplars that is representative of the full dataset.

Active Learning, Data Compression +1
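As a simple point of reference for the RS problem (a classical baseline, not the paper's GNN approach), greedy k-center repeatedly adds the point farthest from the current exemplar set:

```python
# Greedy k-center: a simple representative-selection baseline (not the
# paper's GNN method). Repeatedly add the point farthest from the
# currently selected exemplars.
import numpy as np

def greedy_k_center(X: np.ndarray, k: int) -> list[int]:
    selected = [0]  # seed with an arbitrary point
    dists = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dists.argmax())  # farthest from the current selection
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
print(greedy_k_center(X, 5))
```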

FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue

1 code implementation • 12 May 2022 • Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang

Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.

Dialogue Understanding, Domain Adaptation +1

Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors

1 code implementation • 6 Feb 2022 • Christina Göpfert, Alex Haig, Yinlam Chow, Chih-Wei Hsu, Ivan Vendrov, Tyler Lu, Deepak Ramachandran, Hubert Pham, Mohammad Ghavamzadeh, Craig Boutilier

Interactive recommender systems have emerged as a promising paradigm to overcome the limitations of the primitive user feedback used by traditional recommender systems (e.g., clicks, item consumption, ratings).

Recommendation Systems
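The concept-activation-vector idea underlying this paper, in sketch form: fit a linear classifier separating embeddings of items with and without an attribute; the classifier's weight vector is the concept direction. Random vectors stand in for a real recommender's item embeddings:

```python
# Concept activation vector (CAV) sketch: a linear classifier separates
# positive/negative examples of an attribute in embedding space; its
# normalized weight vector is the concept direction. Embeddings here
# are random stand-ins, not a real recommender's representations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_dir = rng.normal(size=16)

# Fake item embeddings: "positives" shifted along a hidden direction.
pos = rng.normal(size=(100, 16)) + concept_dir
neg = rng.normal(size=(100, 16))
X = np.vstack([pos, neg])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Score a new item by how strongly it aligns with the attribute.
item = rng.normal(size=16) + 2 * concept_dir
print(float(item @ cav))  # large positive → item exhibits the attribute
```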

Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering

no code implementations • ACL 2021 • Najoung Kim, Ellie Pavlick, Burcu Karagol Ayan, Deepak Ramachandran

Through a user preference study, we demonstrate that the oracle behavior of our proposed system, which provides responses based on presupposition failure, is preferred over the oracle behavior of existing QA systems.

Explanation Generation, Natural Questions +1
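The control flow the abstract implies, as a toy sketch: verify the question's presupposition first, and on failure respond with an explanation rather than a spurious answer. Both helper functions are hypothetical stubs, not the paper's system:

```python
# Toy control flow for presupposition-aware QA: check the question's
# presupposition before answering; on failure, explain instead of
# answering. Both helpers are hypothetical stubs.

def presupposition_holds(question: str) -> bool:
    # Stub: "Which linguist invented the lightbulb?" presupposes that
    # a linguist invented it, which is false.
    return "linguist" not in question

def answer(question: str) -> str:
    return "Thomas Edison"  # stub answer path

def respond(question: str) -> str:
    if not presupposition_holds(question):
        return ("The question's presupposition fails: "
                "no linguist invented the lightbulb.")
    return answer(question)

print(respond("Which linguist invented the lightbulb?"))
```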

Do Language Embeddings Capture Scales?

no code implementations • EMNLP (BlackboxNLP) 2020 • Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge.

Common Sense Reasoning
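A typical probing setup for scalar attributes, in sketch form: regress a numeric property (e.g., log of typical mass) from frozen embeddings and check held-out fit. Random vectors stand in for real LM embeddings; this is not the paper's exact protocol:

```python
# Probing sketch: linear regression from frozen embeddings to a scalar
# attribute (e.g., log typical mass). Synthetic data stands in for real
# LM embeddings and attribute labels.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 300, 32
embeddings = rng.normal(size=(n, d))
true_dir = rng.normal(size=d)
log_mass = embeddings @ true_dir + 0.1 * rng.normal(size=n)  # synthetic

probe = Ridge().fit(embeddings[:200], log_mass[:200])
r2 = probe.score(embeddings[200:], log_mass[200:])
print(f"held-out R^2: {r2:.2f}")  # high R^2 → scale is linearly decodable
```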
