Search Results for author: Forough Arabshahi

Found 10 papers, 6 papers with code

Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat

1 code implementation • 7 Jul 2023 • Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich

ML model design either starts with an interpretable model or starts with a Blackbox that is explained post hoc.

Augmentation by Counterfactual Explanation -- Fixing an Overconfident Classifier

no code implementations • 21 Oct 2022 • Sumedha Singla, Nihal Murali, Forough Arabshahi, Sofia Triantafyllou, Kayhan Batmanghelich

The classification outcome should reflect a high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary.

Autonomous Driving • counterfactual +1

AutoNLU: Detecting, root-causing, and fixing NLU model errors

no code implementations • 12 Oct 2021 • Pooja Sethi, Denis Savenkov, Forough Arabshahi, Jack Goetz, Micaela Tolliver, Nicolas Scheffer, Ilknur Kabul, Yue Liu, Ahmed Aly

Improving the quality of Natural Language Understanding (NLU) models, and more specifically, task-oriented semantic parsing models, in production is a cumbersome task.

Active Learning • Natural Language Understanding +1

Conversational Neuro-Symbolic Commonsense Reasoning

1 code implementation • 17 Jun 2020 • Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, Tom Mitchell

More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit).
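
To make that problem statement concrete, here is a minimal, purely illustrative sketch of the "unstated presumption" task on a toy propositional rule base. The rules, predicates, and the missing_presumptions helper are hypothetical and are not the paper's neuro-symbolic reasoner.

```python
# Toy illustration (not the paper's system): given a state, a requested action,
# and a goal, find the premise the speaker left unstated that would let the
# action achieve the goal. All rules and symbols here are made up.

def missing_presumptions(state, action, goal, rules):
    """Return the unstated premises of the rule concluding `goal` that needs
    the fewest extra assumptions; `rules` is a list of (premises, conclusion)."""
    known = set(state) | {action}
    candidates = [
        [p for p in premises if p not in known]
        for premises, conclusion in rules
        if conclusion == goal
    ]
    return min(candidates, key=len) if candidates else None

# "If it snows tonight, wake me up early" -- the speaker's goal is to arrive on time.
rules = [({"wake_up_early", "roads_are_slow"}, "arrive_on_time")]
print(missing_presumptions(
    state={"snows_tonight"}, action="wake_up_early",
    goal="arrive_on_time", rules=rules,
))  # -> ['roads_are_slow']: the speaker presumes that snow will slow the commute
```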

Compositional Generalization with Tree Stack Memory Units

3 code implementations • 5 Nov 2019 • Forough Arabshahi, Zhichu Lu, Pranay Mundra, Sameer Singh, Animashree Anandkumar

We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.

Mathematical Reasoning • Zero-shot Generalization
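
As a concrete illustration of that setting, the short sketch below builds a training set of shallow compositions of two operators and a test set of strictly deeper, unseen compositions; the arithmetic domain and the depth-based split are illustrative assumptions, not the paper's benchmark or its Tree Stack Memory Unit architecture.

```python
# Illustrative compositional-generalization split (not the paper's data):
# train on depth-1/2 compositions, evaluate zero-shot on depth-4 compositions.
import random

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_expr(depth, rng):
    """Build a nested (op, left, right) expression tree of the given depth."""
    if depth == 0:
        return rng.randint(0, 9)
    op = rng.choice(sorted(OPS))
    return (op, random_expr(depth - 1, rng), random_expr(depth - 1, rng))

def evaluate(expr):
    """Ground-truth value of an expression tree."""
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

rng = random.Random(0)
train = [random_expr(rng.randint(1, 2), rng) for _ in range(1000)]  # shallow, seen
test  = [random_expr(4, rng) for _ in range(200)]                   # deeper, never seen
# A compositional model should predict evaluate(e) for e in `test`
# even though training only contained depth-1 and depth-2 compositions.
```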

Look-up and Adapt: A One-shot Semantic Parser

1 code implementation • IJCNLP 2019 • Zhichu Lu, Forough Arabshahi, Igor Labutov, Tom Mitchell

In this paper, we propose a semantic parser that generalizes to out-of-domain examples by learning a general strategy for parsing an unseen utterance: adapting the logical forms of seen utterances rather than generating a logical form from scratch.
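
A toy version of this look-up-and-adapt strategy fits in a few lines; the seen utterances, logical forms, LEXICON, and token-overlap retrieval below are made-up stand-ins, not the paper's model or data.

```python
# Toy retrieve-and-adapt parser (illustration of the general strategy only).
SEEN = [
    ("what is the capital of france", "capital_of(france)"),
    ("who wrote hamlet",              "author_of(hamlet)"),
]
LEXICON = {"germany", "france", "hamlet", "macbeth"}  # known entities

def parse_by_adaptation(utterance):
    tokens = set(utterance.split())
    # 1. Look up the most similar seen utterance (token-overlap retrieval).
    seen_utt, logical_form = max(
        SEEN, key=lambda pair: len(tokens & set(pair[0].split()))
    )
    # 2. Adapt its logical form: swap the retrieved entity for the new one.
    old_entity = next(e for e in LEXICON if e in seen_utt.split())
    new_entity = next((e for e in LEXICON if e in tokens), old_entity)
    return logical_form.replace(old_entity, new_entity)

print(parse_by_adaptation("what is the capital of germany"))
# -> capital_of(germany), obtained by adapting a seen parse
#    rather than generating a logical form from scratch
```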

Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs

1 code implementation • ICLR 2018 • Forough Arabshahi, Sameer Singh, Animashree Anandkumar

Previous approaches generalize poorly because they mostly rely either on black-box function evaluations, which do not capture the structure of the program, or on detailed execution traces, which are expensive to obtain; hence the training data has poor coverage of the domain under consideration.

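The contrast the abstract draws between the two kinds of supervision can be made concrete with a small data sketch; the expression trees and evaluation pairs below are illustrative assumptions, not the paper's actual dataset or model.

```python
# Illustrative sketch (not the paper's data pipeline) of the two supervision
# signals: symbolic expressions that expose program structure, and black-box
# numeric evaluations that do not.
import math

# (a) Symbolic supervision: candidate identities as (lhs, rhs) expression
#     trees, labeled True/False for validity.
symbolic_examples = [
    ((("add", ("pow", ("sin", "x"), 2), ("pow", ("cos", "x"), 2)), "1"), True),   # sin^2 x + cos^2 x = 1
    ((("sin", ("add", "x", "y")), ("add", ("sin", "x"), ("sin", "y"))), False),   # sin(x+y) != sin x + sin y
]

# (b) Black-box function evaluations: input/output pairs with no structure.
evaluation_examples = [
    (("sin", 0.5), math.sin(0.5)),
    (("cos", 1.2), math.cos(1.2)),
]

# Training only on (b) never exposes program structure; combining (a) and (b)
# ties numeric behaviour to the symbolic expression trees.
```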

Spectral Methods for Correlated Topic Models

no code implementations • 30 May 2016 • Forough Arabshahi, Animashree Anandkumar

Normalized Infinitely Divisible (NID) distributions are generated by normalizing a family of independent Infinitely Divisible (ID) random variables.

Topic Models
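
The best-known special case of this construction is the Dirichlet distribution, obtained by normalizing independent Gamma random variables (which are infinitely divisible). The short numerical sketch below checks this empirically; it illustrates the NID family only and is not the paper's spectral estimation method.

```python
# Normalizing independent Gamma draws (ID) yields Dirichlet samples (NID).
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.5, 1.0, 2.0])                            # concentration parameters

gammas = rng.gamma(shape=alpha, scale=1.0, size=(10000, 3))  # independent ID draws
normalized = gammas / gammas.sum(axis=1, keepdims=True)      # NID samples

direct = rng.dirichlet(alpha, size=10000)                    # direct Dirichlet draws
print(normalized.mean(axis=0), direct.mean(axis=0))          # both ~ alpha / alpha.sum()
```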
