no code implementations • BioNLP (ACL) 2022 • Russell Richie, Sachin Grover, Fuchiang (Rich) Tsui
It is commonly claimed that inter-annotator agreement (IAA) is the ceiling of machine learning (ML) performance, i.e., that the agreement between an ML system's predictions and an annotator's labels cannot be higher than the agreement between two annotators.
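The claim can be made concrete with a toy example. The sketch below (hypothetical labels, simple percent agreement rather than a chance-corrected statistic like Cohen's kappa) constructs a case where an ML system's agreement with one annotator exceeds the agreement between the two annotators, illustrating why the "IAA ceiling" need not hold in general:

```python
def agreement(x, y):
    """Fraction of items on which two label sequences agree."""
    assert len(x) == len(y)
    return sum(p == q for p, q in zip(x, y)) / len(x)

# Hypothetical binary labels for six items.
annotator_a = [1, 0, 1, 1, 0, 1]
annotator_b = [1, 1, 0, 1, 0, 0]
ml_preds    = [1, 0, 1, 1, 0, 0]

iaa = agreement(annotator_a, annotator_b)   # 0.5
ml_vs_a = agreement(ml_preds, annotator_a)  # ~0.833

# The ML system agrees with annotator A more often than annotator B does,
# so IAA did not bound the ML-annotator agreement here.
print(f"IAA: {iaa:.3f}, ML vs. A: {ml_vs_a:.3f}")
```

In practice one would use a chance-corrected metric (e.g., Cohen's kappa) over many items, but the same logic applies: when annotators make partly independent errors, a model can match one annotator more closely than the annotators match each other.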
no code implementations • 9 Jun 2023 • Shiwali Mohan, Wiktor Piotrowski, Roni Stern, Sachin Grover, Sookyung Kim, Jacob Le, Johan de Kleer
Model-based reasoning agents are ill-equipped to act in novel situations in which their model of the environment no longer sufficiently represents the world.
no code implementations • 29 Mar 2023 • Wiktor Piotrowski, Yoni Sher, Sachin Grover, Roni Stern, Shiwali Mohan
This paper studies how a domain-independent planner and combinatorial search can be employed to play Angry Birds, a well-established AI challenge problem.
no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer
Explainable AI (XAI) is a promising means of supporting human-AI collaboration on high-stakes visual detection tasks, such as detecting damage in satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.
1 code implementation • 27 Sep 2021 • Mayank Agarwal, Tathagata Chakraborti, Sachin Grover, Arunima Chaudhary
While India has been one of the hotspots of COVID-19, data about the pandemic from the country has proved to be largely inaccessible at scale.
no code implementations • 24 Nov 2020 • Sachin Grover, David Smith, Subbarao Kambhampati
We show how to generate questions to refine the robot's understanding of the teammate's model.
no code implementations • 3 Feb 2018 • Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati
Recent work on explanation generation for decision-making agents has examined how the unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the resulting explanation process can then be seen as one of reconciling these models.
1 code implementation • 23 Dec 2017 • Rohan Chandra, Sachin Grover, Kyungjun Lee, Moustafa Meshry, Ahmed Taha
A novel loss function, FLTBNK, is used for training the texture synthesizer.