no code implementations • 13 Nov 2024 • Christopher Wang, Adam Uri Yaari, Aaditya K Singh, Vighnesh Subramaniam, Dana Rosenfarb, Jan DeWitt, Pranav Misra, Joseph R. Madsen, Scellig Stone, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
We present the Brain Treebank, a large-scale dataset of electrophysiological neural responses, recorded from intracranial probes while 10 subjects watched one or more Hollywood movies.
no code implementations • 5 Nov 2024 • David Mayo, Christopher Wang, Asa Harbin, Abdulrahman Alabdulkareem, Albert Eaton Shaw, Boris Katz, Andrei Barbu
When evaluating stimuli reconstruction results it is tempting to assume that higher fidelity text and image generation is due to an improved understanding of the brain or more powerful signal extraction from neural recordings.
no code implementations • 31 Oct 2024 • Colin Conwell, Christopher Hamblin, Chelsea Boccagno, David Mayo, Jesse Cummings, Leyla Isik, Andrei Barbu
We show that unimodal vision models (e.g. SimCLR) account for the vast majority of explainable variance in these ratings.
no code implementations • 26 Oct 2024 • Vighnesh Subramaniam, David Mayo, Colin Conwell, Tomaso Poggio, Boris Katz, Brian Cheung, Andrei Barbu
If the guide is trained, this transfers over part of the architectural prior and knowledge of the guide to the target.
no code implementations • 18 Jul 2024 • Nathan Cloos, Meagan Jens, Michelangelo Naim, Yen-Ling Kuo, Ignacio Cases, Andrei Barbu, Christopher J. Cueva
Humans solve problems by following existing rules and procedures, and also by leaps of creativity to redefine those rules and objectives.
1 code implementation • 20 Jun 2024 • Vighnesh Subramaniam, Colin Conwell, Christopher Wang, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models.
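The comparison described above can be sketched in a few lines. This is a minimal, illustrative numpy sketch, not the paper's implementation: the feature matrices and the single-electrode response below are random stand-ins, and the ridge regression with a fixed train/test split is one simple choice of encoding model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-stimulus features from a unimodal language
# model, a unimodal vision model, and a multimodal model, plus the response
# of one recording site (all random here, for illustration only).
n, d = 200, 64
lang_feats = rng.normal(size=(n, d))
vis_feats = rng.normal(size=(n, d))
mm_feats = rng.normal(size=(n, d))
neural = rng.normal(size=n)

def heldout_r2(X, y, alpha=1.0):
    """Held-out R^2 of a ridge regression from features to the neural response."""
    Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    pred = Xte @ w
    return 1 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)

scores = {
    "language": heldout_r2(lang_feats, neural),
    "vision": heldout_r2(vis_feats, neural),
    # Linear integration: simply concatenate the two unimodal feature spaces.
    "linear-integrated": heldout_r2(np.hstack([lang_feats, vis_feats]), neural),
    "multimodal": heldout_r2(mm_feats, neural),
}

# A site counts as a multimodal-integration site when the multimodal model
# out-predicts every unimodal and linearly-integrated baseline.
is_integration_site = scores["multimodal"] > max(
    scores["language"], scores["vision"], scores["linear-integrated"]
)
```

With real recordings, the features would come from pretrained models run on the movie stimuli, and the test would be repeated per electrode.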
1 code implementation • 5 Jun 2024 • Geeling Chau, Christopher Wang, Sabera Talukder, Vighnesh Subramaniam, Saraswati Soedarmadji, Yisong Yue, Boris Katz, Andrei Barbu
We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings at scale.
1 code implementation • 16 May 2024 • Abdulrahman Alabdulkareem, Christian M Arnold, Yerim Lee, Pieter M Feenstra, Boris Katz, Andrei Barbu
We reflect the compositional nature of such security mechanisms back into the structure of LLMs to build a provably secure LLM that we term SecureLLM.
1 code implementation • 28 Feb 2023 • Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
We create a reusable Transformer, BrainBERT, for intracranial recordings bringing modern representation learning approaches to neuroscience.
no code implementations • 23 Nov 2022 • Mengmi Zhang, Giorgia Dellaferrera, Ankur Sikarwar, Caishun Chen, Marcelo Armendariz, Noga Mudrik, Prachi Agrawal, Spandan Madan, Mranmay Shetty, Andrei Barbu, Haochen Yang, Tanishq Kumar, Shui'Er Han, Aman Raj Singh, Meghna Sadwani, Stella Dellaferrera, Michele Pizzochero, Brandon Tang, Yew Soon Ong, Hanspeter Pfister, Gabriel Kreiman
To address this question, we turn to the Turing test and systematically benchmark current AIs in their abilities to imitate humans in three language tasks (Image captioning, Word association, and Conversation) and three vision tasks (Object detection, Color estimation, and Attention prediction).
no code implementations • 5 Oct 2022 • Julian Alverio, Boris Katz, Andrei Barbu
Curricula for goal-conditioned reinforcement learning agents typically rely on poor estimates of the agent's epistemic uncertainty, or fail to consider that uncertainty altogether, resulting in poor sample efficiency.
1 code implementation • 14 Jul 2022 • Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner
Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI.
1 code implementation • NeurIPS Workshop SVRHM 2021 • Binxu Wang, David Mayo, Arturo Deza, Andrei Barbu, Colin Conwell
Critically, we find that random cropping can be substituted by cortical magnification, and that saccade-like sampling of the image can also assist representation learning.
1 code implementation • NeurIPS 2021 • Colin Conwell, David Mayo, Andrei Barbu, Michael Buice, George Alvarez, Boris Katz
Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the responses of individual neurons also better predict representational similarity across neural populations?).
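The two levels of analysis in that question can be made concrete with a small sketch. This is a hypothetical, numpy-only illustration with random data, not the benchmark's pipeline: it contrasts per-neuron linear predictivity with representational similarity analysis (RSA) over dissimilarity matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: model features and neural responses to the same stimuli.
n_stim, n_feat, n_neuron = 100, 32, 20
feats = rng.normal(size=(n_stim, n_feat))
neural = rng.normal(size=(n_stim, n_neuron))

# Level 1: how well does a linear readout of model features fit each neuron?
w, *_ = np.linalg.lstsq(feats, neural, rcond=None)
pred = feats @ w
per_neuron_r = [np.corrcoef(pred[:, i], neural[:, i])[0, 1] for i in range(n_neuron)]

# Level 2: do the two systems agree on which stimuli are similar (RSA)?
def rdm(X):
    # Representational dissimilarity matrix: 1 - correlation between
    # the response patterns evoked by each pair of stimuli.
    return 1 - np.corrcoef(X)

iu = np.triu_indices(n_stim, k=1)  # compare only the upper triangles
rsa_score = np.corrcoef(rdm(feats)[iu], rdm(neural)[iu])[0, 1]
```

The atlas-style question is then whether models with high `per_neuron_r` also tend to have high `rsa_score` across visual areas.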
no code implementations • 19 Oct 2021 • Yen-Ling Kuo, Xin Huang, Andrei Barbu, Stephen G. McGill, Boris Katz, John J. Leonard, Guy Rosman
Language allows humans to build mental models that interpret what is happening around them, resulting in more accurate long-term predictions.
1 code implementation • 14 Oct 2021 • Ian Palmer, Andrew Rouditchenko, Andrei Barbu, Boris Katz, James Glass
These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly due to biases in other datasets that the models have learned.
no code implementations • NeurIPS Workshop SVRHM 2020 • Aviv Netanyahu, Tianmin Shu, Boris Katz, Andrei Barbu, Joshua B. Tenenbaum
The ability to perceive and reason about social interactions in the context of physical environments is core to human social intelligence and human-machine cooperation.
no code implementations • ICLR 2021 • Adam Uri Yaari, Maxwell Sherman, Oliver Clarke Priebe, Po-Ru Loh, Boris Katz, Andrei Barbu, Bonnie Berger
Detection of cancer-causing mutations within the vast and mostly unexplored human genome is a major challenge.
no code implementations • 7 Aug 2020 • Christopher Wang, Candace Ross, Yen-Ling Kuo, Boris Katz, Andrei Barbu
We take a step toward robots that can do the same by training a grounded semantic parser, which discovers latent linguistic representations that can be used for the execution of natural-language commands.
1 code implementation • Findings (EMNLP) 2021 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks.
1 code implementation • 1 Jun 2020 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions.
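The compositional idea can be sketched briefly. This is a hypothetical, numpy-only toy, not the paper's trained model: the formula's parse tree is embedded recursively with one small randomly-initialized map per LTL operator, and the resulting vector is what a policy would condition on to choose satisfying actions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding width (an arbitrary choice for this sketch)

# One small random linear map per LTL operator; atomic propositions get
# fixed embeddings. All weights are illustrative stand-ins, untrained.
binary = {op: rng.normal(scale=0.1, size=(D, 2 * D)) for op in ("and", "or", "until")}
unary = {op: rng.normal(scale=0.1, size=(D, D)) for op in ("not", "next", "eventually")}
atoms = {a: rng.normal(size=D) for a in ("red", "blue", "goal")}

def embed(formula):
    """Recursively embed an LTL parse tree, mirroring its compositional structure."""
    if isinstance(formula, str):  # atomic proposition
        return atoms[formula]
    op, *args = formula
    if op in unary:
        return np.tanh(unary[op] @ embed(args[0]))
    left, right = (embed(a) for a in args)
    return np.tanh(binary[op] @ np.concatenate([left, right]))

# "eventually (red and next goal)" -> a D-dimensional task embedding.
task = embed(("eventually", ("and", "red", ("next", "goal"))))
```

Because the network's structure follows the formula's structure, unseen formulas built from known operators still produce meaningful task embeddings, which is what enables generalization to new LTL goals.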
1 code implementation • NAACL 2021 • Candace Ross, Boris Katz, Andrei Barbu
We generalize the notion of social biases from language embeddings to grounded vision and language embeddings.
no code implementations • 12 Feb 2020 • Yen-Ling Kuo, Boris Katz, Andrei Barbu
We demonstrate how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands in a continuous configuration space to move and manipulate objects.
no code implementations • NeurIPS 2019 • Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, Boris Katz
Although we focus on object recognition here, data with controls can be gathered at scale using automated tools throughout machine learning to generate datasets that exercise models in new ways, thus providing valuable feedback to researchers.
Ranked #51 on Image Classification on ObjectNet (using extra training data)
no code implementations • EMNLP 2018 • Candace Ross, Andrei Barbu, Yevgeni Berzak, Battushig Myanganbayar, Boris Katz
We develop a semantic parser that is trained in a grounded setting using pairs of videos captioned with sentences.
no code implementations • EMNLP 2016 • Yevgeni Berzak, Yan Huang, Andrei Barbu, Anna Korhonen, Boris Katz
Our agreement results control for parser bias, and are consequential in that they are on par with state of the art parsing performance for English newswire.
no code implementations • EMNLP 2015 • Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, Shimon Ullman
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.
no code implementations • 9 Aug 2014 • Andrei Barbu, Alexander Bridge, Zachary Burchill, Dan Coroian, Sven Dickinson, Sanja Fidler, Aaron Michaux, Sam Mussman, Siddharth Narayanaswamy, Dhaval Salvi, Lara Schmidt, Jiangnan Shangguan, Jeffrey Mark Siskind, Jarrell Waggoner, Song Wang, Jinlian Wei, Yifan Yin, Zhiqi Zhang
We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it.
no code implementations • 20 Sep 2013 • Andrei Barbu, N. Siddharth, Jeffrey Mark Siskind
We present an approach to searching large video corpora for video clips which depict a natural-language query in the form of a sentence.
no code implementations • CVPR 2014 • N. Siddharth, Andrei Barbu, Jeffrey Mark Siskind
We present a system that demonstrates how the compositional structure of events, in concert with the compositional structure of language, can interplay with the underlying focusing mechanisms in video action recognition, thereby providing a medium not only for top-down and bottom-up integration, but also for multi-modal integration between vision and language.
no code implementations • CVPR 2013 • Yu Cao, Daniel Barrett, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu, Aaron Michaux, Yuewei Lin, Sven Dickinson, Jeffrey Mark Siskind, Song Wang
In this paper, we propose a new method that can recognize human activities from partially observed videos in the general case.