1 code implementation • 28 Feb 2023 • Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation learning approaches to neuroscience.
no code implementations • 23 Nov 2022 • Mengmi Zhang, Giorgia Dellaferrera, Ankur Sikarwar, Marcelo Armendariz, Noga Mudrik, Prachi Agrawal, Spandan Madan, Andrei Barbu, Haochen Yang, Tanishq Kumar, Meghna Sadwani, Stella Dellaferrera, Michele Pizzochero, Hanspeter Pfister, Gabriel Kreiman
To address this question, we turn to the Turing test and systematically benchmark current AIs in their abilities to imitate humans.
no code implementations • 5 Oct 2022 • Julian Alverio, Boris Katz, Andrei Barbu
Curricula for goal-conditioned reinforcement learning agents typically rely on poor estimates of the agent's epistemic uncertainty, or fail to consider it altogether, resulting in poor sample efficiency.
1 code implementation • 14 Jul 2022 • Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner
Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI.
1 code implementation • NeurIPS Workshop SVRHM 2021 • Binxu Wang, David Mayo, Arturo Deza, Andrei Barbu, Colin Conwell
Critically, we find that random cropping can be replaced with cortical magnification, and that saccade-like sampling of the image can also assist representation learning.
1 code implementation • NeurIPS 2021 • Colin Conwell, David Mayo, Andrei Barbu, Michael Buice, George Alvarez, Boris Katz
Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?).
no code implementations • 19 Oct 2021 • Yen-Ling Kuo, Xin Huang, Andrei Barbu, Stephen G. McGill, Boris Katz, John J. Leonard, Guy Rosman
Language allows humans to build mental models that interpret what is happening around them, resulting in more accurate long-term predictions.
1 code implementation • 14 Oct 2021 • Ian Palmer, Andrew Rouditchenko, Andrei Barbu, Boris Katz, James Glass
These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly, due to biases they have learned from those datasets.
no code implementations • NeurIPS Workshop SVRHM 2020 • Aviv Netanyahu, Tianmin Shu, Boris Katz, Andrei Barbu, Joshua B. Tenenbaum
The ability to perceive and reason about social interactions in the context of physical environments is core to human social intelligence and human-machine cooperation.
no code implementations • ICLR 2021 • Adam Uri Yaari, Maxwell Sherman, Oliver Clarke Priebe, Po-Ru Loh, Boris Katz, Andrei Barbu, Bonnie Berger
Detection of cancer-causing mutations within the vast and mostly unexplored human genome is a major challenge.
no code implementations • 7 Aug 2020 • Christopher Wang, Candace Ross, Yen-Ling Kuo, Boris Katz, Andrei Barbu
We take a step toward robots that can do the same by training a grounded semantic parser, which discovers latent linguistic representations that can be used for the execution of natural-language commands.
1 code implementation • Findings (EMNLP) 2021 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks.
1 code implementation • 1 Jun 2020 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions.
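The paper trains a compositional recurrent network to approximate LTL satisfaction; as a minimal, non-learned sketch of the satisfaction relation such a network would approximate over a finite trace (all operator names and the trace here are illustrative, not from the paper):

```python
# Minimal finite-trace LTL evaluator: a non-learned sketch of the
# satisfaction relation a compositional network could approximate.
# Formula encoding and propositions are illustrative placeholders.

def holds(formula, trace, i=0):
    """Check whether trace[i:] satisfies `formula`.

    A formula is a nested tuple:
      ("ap", name)        atomic proposition, true if name in trace[i]
      ("not", f)          negation
      ("and", f, g)       conjunction
      ("next", f)         f holds at the next step
      ("eventually", f)   f holds at some step >= i
      ("always", f)       f holds at every step >= i
      ("until", f, g)     f holds at each step before some step where g holds
    """
    op = formula[0]
    if op == "ap":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return holds(formula[1], trace, i + 1)
    if op == "eventually":
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "always":
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "until":
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# A trace is the set of true propositions at each time step.
trace = [{"start"}, {"moving"}, {"moving"}, {"goal"}]
spec = ("and", ("eventually", ("ap", "goal")),
               ("not", ("eventually", ("ap", "unsafe"))))
print(holds(spec, trace))  # True
```

An RL agent conditioned on such a formula would be rewarded for producing traces on which `holds` is true; the compositional structure (one module per operator) is what lets the approach generalize to formulas unseen in training.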
1 code implementation • NAACL 2021 • Candace Ross, Boris Katz, Andrei Barbu
We generalize the notion of social biases from language embeddings to grounded vision and language embeddings.
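Bias tests of the WEAT family score how strongly two target groups associate with two attribute groups in embedding space; a minimal sketch of such an effect size on toy, randomly generated embeddings (all word lists and vectors below are synthetic placeholders, not from the paper):

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """Mean similarity of word w to attribute set A, minus to attribute set B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect(X, Y, A, B, emb):
    """WEAT-style effect size: difference in mean association of the
    two target sets X and Y, normalized by the pooled std. dev."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Synthetic embeddings stand in for real (or grounded) ones.
rng = np.random.default_rng(0)
words = ["flower1", "flower2", "insect1", "insect2",
         "pleasant1", "pleasant2", "unpleasant1", "unpleasant2"]
emb = {w: rng.normal(size=16) for w in words}
score = weat_effect(["flower1", "flower2"], ["insect1", "insect2"],
                    ["pleasant1", "pleasant2"], ["unpleasant1", "unpleasant2"],
                    emb)
print(f"effect size: {score:.3f}")
```

For grounded embeddings, `emb` would map words (or image regions) to vectors from a joint vision-and-language model rather than random draws; the scoring machinery is unchanged.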
no code implementations • 12 Feb 2020 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
We demonstrate how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands in a continuous configuration space to move and manipulate objects.
no code implementations • NeurIPS 2019 • Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, Boris Katz
Although we focus on object recognition here, data with controls can be gathered at scale using automated tools throughout machine learning to generate datasets that exercise models in new ways, thus providing valuable feedback to researchers.
no code implementations • EMNLP 2018 • Candace Ross, Andrei Barbu, Yevgeni Berzak, Battushig Myanganbayar, Boris Katz
We develop a semantic parser that is trained in a grounded setting using pairs of videos captioned with sentences.
no code implementations • EMNLP 2016 • Yevgeni Berzak, Yan Huang, Andrei Barbu, Anna Korhonen, Boris Katz
Our agreement results control for parser bias, and are consequential in that they are on par with state-of-the-art parsing performance for English newswire.
no code implementations • EMNLP 2015 • Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, Shimon Ullman
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.
no code implementations • 9 Aug 2014 • Andrei Barbu, Alexander Bridge, Zachary Burchill, Dan Coroian, Sven Dickinson, Sanja Fidler, Aaron Michaux, Sam Mussman, Siddharth Narayanaswamy, Dhaval Salvi, Lara Schmidt, Jiangnan Shangguan, Jeffrey Mark Siskind, Jarrell Waggoner, Song Wang, Jinlian Wei, Yifan Yin, Zhiqi Zhang
We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it.
no code implementations • 20 Sep 2013 • Andrei Barbu, N. Siddharth, Jeffrey Mark Siskind
We present an approach to searching large video corpora for video clips which depict a natural-language query in the form of a sentence.
no code implementations • CVPR 2014 • N. Siddharth, Andrei Barbu, Jeffrey Mark Siskind
We present a system that demonstrates how the compositional structure of events, in concert with the compositional structure of language, can interplay with the underlying focusing mechanisms in video action recognition. This provides a medium not only for top-down and bottom-up integration, but also for multi-modal integration between vision and language.
no code implementations • CVPR 2013 • Yu Cao, Daniel Barrett, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu, Aaron Michaux, Yuewei Lin, Sven Dickinson, Jeffrey Mark Siskind, Song Wang
In this paper, we propose a new method that can recognize human activities from partially observed videos in the general case.