no code implementations • 29 Aug 2023 • Quan Yuan, Mehran Kazemi, Xin Xu, Isaac Noble, Vaiva Imbrasaite, Deepak Ramachandran
Our experiments reveal that LLMs are able to decompose complex tasks into individual steps effectively, with a relative improvement of 15% to 280% over the best baseline.
no code implementations • 13 Jun 2023 • Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, Deepak Ramachandran
One widely applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the answer from the more-preferred source.
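As a minimal illustration of this kind of preference-based resolution (a sketch with hypothetical sources, preference scores, and facts, not the paper's implementation):

```python
# Sketch: when sources disagree about a fact, keep the answer from the
# most-preferred source. All names and numbers below are hypothetical.

def resolve_conflict(claims, source_preference):
    """claims: list of (source, answer) pairs for the same query.
    source_preference: dict mapping source -> preference score
    (e.g., derived from credibility or recency; higher wins)."""
    _, best_answer = max(claims, key=lambda c: source_preference[c[0]])
    return best_answer

claims = [("2019_snapshot", "population: 8.3M"),
          ("2023_census", "population: 8.8M")]
preference = {"2019_snapshot": 1, "2023_census": 2}  # prefer recency
print(resolve_conflict(claims, preference))  # -> "population: 8.8M"
```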
no code implementations • 11 Feb 2023 • Jeremiah Zhe Liu, Krishnamurthy Dj Dvijotham, Jihyeon Lee, Quan Yuan, Martin Strobel, Balaji Lakshminarayanan, Deepak Ramachandran
Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but underperform on underrepresented population subgroups, especially when group distributions in the long-tailed training data are imbalanced.
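A toy calculation (hypothetical counts, not from the paper) of why this matters: with imbalanced groups, high average accuracy can coexist with poor minority-group accuracy.

```python
# Average accuracy can mask subgroup underperformance when groups are
# imbalanced. The example counts below are made up for illustration.
groups = {
    "majority": {"n": 9000, "correct": 8550},  # 95.0% accuracy
    "minority": {"n": 1000, "correct": 600},   # 60.0% accuracy
}
total = sum(g["n"] for g in groups.values())
avg_acc = sum(g["correct"] for g in groups.values()) / total
print(f"average accuracy: {avg_acc:.1%}")          # 91.5% looks strong...
for name, g in groups.items():
    print(f"{name}: {g['correct'] / g['n']:.1%}")  # ...but minority lags at 60%
```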
no code implementations • 26 Jan 2023 • Mehran Kazemi, Sid Mittal, Deepak Ramachandran
Recently, it has been shown that finetuning LMs on one set of factual knowledge improves their answers to queries about a different set, making finetuned LMs good candidates for knowledge extraction and, consequently, knowledge graph construction.
no code implementations • 20 Dec 2022 • Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran
Remarkable progress has been made on automated reasoning with natural text, by using Language Models (LMs) and methods such as Chain-of-Thought and Selection-Inference.
1 code implementation • 20 May 2022 • Mehran Kazemi, Anton Tsitsulin, Hossein Esfandiari, Mohammadhossein Bateni, Deepak Ramachandran, Bryan Perozzi, Vahab Mirrokni
Representative Selection (RS) is the problem of finding a small subset of exemplars from a dataset that is representative of the dataset.
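For intuition about the problem setting, here is a minimal sketch of one classical baseline for RS, greedy k-center selection, which repeatedly picks the point farthest from the current exemplar set; this is an assumed illustrative algorithm, not the method proposed in the paper.

```python
# Sketch of greedy k-center selection as a baseline for Representative
# Selection (illustrative only; not the paper's algorithm).
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Select k exemplar row indices from data matrix X (n x d)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = [int(rng.integers(n))]            # arbitrary first exemplar
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))               # farthest point from current set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected

# Two well-separated clusters; the exemplars should cover both.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
print(greedy_k_center(X, k=4))  # indices of 4 representative points
```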
1 code implementation • 12 May 2022 • Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang
Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.
1 code implementation • 6 Feb 2022 • Christina Göpfert, Alex Haig, Yinlam Chow, Chih-Wei Hsu, Ivan Vendrov, Tyler Lu, Deepak Ramachandran, Hubert Pham, Mohammad Ghavamzadeh, Craig Boutilier
Interactive recommender systems have emerged as a promising paradigm to overcome the limitations of the primitive user feedback used by traditional recommender systems (e.g., clicks, item consumption, ratings).
no code implementations • ACL 2021 • Najoung Kim, Ellie Pavlick, Burcu Karagol Ayan, Deepak Ramachandran
Through a user preference study, we demonstrate that the oracle behavior of our proposed system, which provides responses based on presupposition failure, is preferred over that of existing QA systems.
no code implementations • EMNLP (BlackboxNLP) 2020 • Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, Dan Roth
Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge.
1 code implementation • ACL 2019 • Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, Dan Roth
Most current NLP systems have little knowledge about quantitative attributes of objects and events.