1 code implementation • 7 Jul 2023 • Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich
ML model design either starts with an interpretable model or starts with a Blackbox that is explained post hoc.
1 code implementation • 20 Feb 2023 • Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich
The first-order-logic (FOL) explanations from the MoIE derived from the finetuned Blackbox (BB) verify that the shortcut has been eliminated.
no code implementations • 21 Oct 2022 • Sumedha Singla, Nihal Murali, Forough Arabshahi, Sofia Triantafyllou, Kayhan Batmanghelich
The classification outcome should reflect a high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary.
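One generic way to quantify this property (not necessarily the method of the paper above) is predictive entropy: a sample near the decision boundary yields a near-uniform softmax and hence high entropy. A minimal sketch, with hypothetical logits:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    # Entropy of the predictive distribution; higher means more uncertain.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# A confident sample far from the boundary vs. an ambiguous one near it.
confident = np.array([4.0, -4.0])
ambiguous = np.array([0.1, -0.1])
print(predictive_entropy(ambiguous) > predictive_entropy(confident))  # True
```

The ambiguous sample, whose logits nearly tie, is the one that should receive the high uncertainty the abstract calls for.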
no code implementations • 12 Oct 2021 • Pooja Sethi, Denis Savenkov, Forough Arabshahi, Jack Goetz, Micaela Tolliver, Nicolas Scheffer, Ilknur Kabul, Yue Liu, Ahmed Aly
Improving the quality of Natural Language Understanding (NLU) models, and more specifically, task-oriented semantic parsing models, in production is a cumbersome task.
no code implementations • EMNLP 2021 • Forough Arabshahi, Jennifer Lee, Antoine Bosselut, Yejin Choi, Tom Mitchell
Our reasoner uses a state-of-the-art transformer-based generative commonsense knowledge base (KB) as its source of background knowledge for reasoning.
1 code implementation • 17 Jun 2020 • Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, Tom Mitchell
More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit).
3 code implementations • 5 Nov 2019 • Forough Arabshahi, Zhichu Lu, Pranay Mundra, Sameer Singh, Animashree Anandkumar
We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.
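As a toy illustration of the concept (not the paper's model or data), a system that learns primitive operations and a composition rule should handle an unseen composition of those primitives zero-shot:

```python
# Toy illustration of compositional generalization; all names and data
# here are hypothetical. The model knows two primitives and one
# composition rule (function application, right to left).
primitives = {"double": lambda x: 2 * x, "inc": lambda x: x + 1}

def interpret(program, x):
    # Apply named primitives right-to-left, e.g. "double inc" -> double(inc(x)).
    for name in reversed(program.split()):
        x = primitives[name](x)
    return x

# Suppose "double" and "inc inc" were seen in training; then the novel
# composition "double inc" should still be interpretable at test time.
print(interpret("double inc", 3))  # 8
```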
1 code implementation • IJCNLP 2019 • Zhichu Lu, Forough Arabshahi, Igor Labutov, Tom Mitchell
In this paper, we propose a semantic parser that generalizes to out-of-domain examples by learning a general strategy for parsing an unseen utterance through adapting the logical forms of seen utterances, instead of learning to generate a logical form from scratch.
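The general retrieve-and-adapt idea can be sketched as follows; this is a deliberately crude toy, not the paper's parser, with hand-written utterances, a word-overlap similarity standing in for a learned model, and a single-substitution adaptation step (all names hypothetical):

```python
# Parse an unseen utterance by retrieving the most similar seen
# utterance and adapting its logical form, rather than generating a
# logical form from scratch. All data below is hypothetical.
seen = {
    "remind me to call mom": "remind(call(mom))",
    "set an alarm for 7am": "alarm(7am)",
}

def similarity(a, b):
    # Crude Jaccard word overlap, standing in for a learned similarity.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def parse_by_adaptation(utterance):
    # Retrieve the nearest seen utterance, then adapt its logical form
    # by substituting the one differing content word (toy adaptation).
    nearest = max(seen, key=lambda u: similarity(u, utterance))
    lf = seen[nearest]
    old = set(nearest.split()) - set(utterance.split())
    new = set(utterance.split()) - set(nearest.split())
    if len(old) == 1 and len(new) == 1:
        lf = lf.replace(old.pop(), new.pop())
    return lf

print(parse_by_adaptation("remind me to call dad"))  # remind(call(dad))
```

The point of the sketch is the division of labor: the hard generation problem is replaced by retrieval plus a small, learnable edit to a known logical form.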
1 code implementation • ICLR 2018 • Forough Arabshahi, Sameer Singh, Animashree Anandkumar
This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain; hence the training data has poor coverage of the domain under consideration.
no code implementations • 30 May 2016 • Forough Arabshahi, Animashree Anandkumar
NID distributions are generated through the process of normalizing a family of independent Infinitely Divisible (ID) random variables.
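The best-known instance of this construction is the Dirichlet distribution: the Gamma distribution is infinitely divisible, and normalizing independent Gamma draws (with a common scale) by their sum yields a Dirichlet on the simplex. A quick numerical check of that instance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalize independent Gamma variables (an infinitely divisible family)
# by their sum; the result is Dirichlet(alpha)-distributed on the simplex.
alpha = np.array([2.0, 3.0, 5.0])
g = rng.gamma(shape=alpha, scale=1.0, size=(100_000, 3))
x = g / g.sum(axis=1, keepdims=True)

# Sanity check: the empirical mean matches the Dirichlet mean
# alpha / alpha.sum() = [0.2, 0.3, 0.5], up to Monte Carlo error.
print(x.mean(axis=0))
```

NID distributions generalize this recipe by allowing other independent infinitely divisible families in place of the Gamma.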