1 code implementation • 9 Jul 2020 • Chuang Gan, Jeremy Schwartz, Seth Alter, Damian Mrowca, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Michael Lingelbach, Aidan Curtis, Kevin Feigelis, Daniel M. Bear, Dan Gutfreund, David Cox, Antonio Torralba, James J. DiCarlo, Joshua B. Tenenbaum, Josh H. McDermott, Daniel L. K. Yamins
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation.
1 code implementation • 20 Dec 2022 • Eric Zelikman, Qian Huang, Gabriel Poesia, Noah D. Goodman, Nick Haber
Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs.
Ranked #7 on Code Generation on HumanEval
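HumanEval code-generation rankings like the one above are conventionally reported with the pass@k metric. As a reference point (not part of this listing), the standard unbiased estimator can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes the unit tests."""
    if n - c < k:
        # Fewer incorrect samples than k: some draw must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 generations with 50 correct gives pass@1 = 50/200 = 0.25
print(pass_at_k(200, 50, 1))
```

The complement form avoids the numerical instability of multiplying many per-sample probabilities directly.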
1 code implementation • 14 Mar 2024 • Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman
Crucially, these improvements require no fine-tuning on these tasks.
1 code implementation • 14 Dec 2023 • Yue Yang, Fan-Yun Sun, Luca Weihs, Eli VanderBilt, Alvaro Herrasti, Winson Han, Jiajun Wu, Nick Haber, Ranjay Krishna, Lingjie Liu, Chris Callison-Burch, Mark Yatskar, Aniruddha Kembhavi, Christopher Clark
3D simulated environments play a critical role in Embodied AI, but their creation requires expertise and extensive manual effort, restricting their diversity and scope.
1 code implementation • 28 Jun 2023 • Isaac Kauvar, Chris Doyle, Linqi Zhou, Nick Haber
Agents must be able to adapt quickly as an environment changes.
1 code implementation • 16 Jun 2023 • Eric Zelikman, Qian Huang, Percy Liang, Nick Haber, Noah D. Goodman
Language model training in distributed settings is limited by the communication cost of gradient exchanges.
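To see why gradient exchange dominates, a back-of-the-envelope sketch helps: under ring all-reduce, each worker transmits roughly 2(w-1)/w times the full gradient payload per step. The function name below is illustrative, not from the paper:

```python
def allreduce_bytes_per_step(num_params: int, bytes_per_param: int,
                             world_size: int) -> float:
    """Approximate bytes each worker sends per optimization step under
    ring all-reduce: 2 * (world_size - 1) / world_size * payload."""
    payload = num_params * bytes_per_param
    return 2 * (world_size - 1) / world_size * payload

# A 1B-parameter model in fp16 across 8 workers sends ~3.5 GB per step.
gb_per_step = allreduce_bytes_per_step(1_000_000_000, 2, 8) / 1e9
```

At typical step rates this volume quickly saturates commodity links, which is the bottleneck the snippet above refers to.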
1 code implementation • 21 Sep 2023 • Elisa Kreiss, Eric Zelikman, Christopher Potts, Nick Haber
None of the methods is successful with ContextRef, but we show that careful fine-tuning yields substantial improvements.
no code implementations • 21 Feb 2018 • Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins
Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks.
no code implementations • 21 Feb 2018 • Nick Haber, Damian Mrowca, Li Fei-Fei, Daniel L. K. Yamins
We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering.
no code implementations • NeurIPS 2018 • Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins
Humans have a remarkable capacity to understand the physical dynamics of objects in their environment, flexibly capturing complex structures and interactions at multiple levels of detail.
no code implementations • NeurIPS 2018 • Nick Haber, Damian Mrowca, Stephanie Wang, Li F. Fei-Fei, Daniel L. Yamins
We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering.
no code implementations • 19 Apr 2020 • Nick Haber, Catalin Voss, Jena Daniels, Peter Washington, Azar Fazel, Aaron Kline, Titas De, Terry Winograd, Carl Feinstein, Dennis P. Wall
With recent estimates giving a prevalence of 1 in 68 children in the United States, autism spectrum disorder (ASD) is a growing public health crisis.
no code implementations • 15 Jul 2020 • Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins
Humans learn world models by curiously exploring their environment, in the process acquiring compact abstractions of high bandwidth sensory inputs, the ability to plan across long temporal horizons, and an understanding of the behavioral patterns of other agents.
no code implementations • ICML 2020 • Kuno Kim, Megumi Sano, Julian De Freitas, Nick Haber, Daniel Yamins
World models are a family of predictive models that solve self-supervised problems on how the world evolves.
no code implementations • 16 Dec 2020 • Peter Washington, Haik Kalantarian, Jack Kent, Arman Husic, Aaron Kline, Emilie Leblanc, Cathy Hou, Cezmi Mutlu, Kaitlyn Dunlap, Yordan Penev, Maya Varma, Nate Stockham, Brianna Chrisman, Kelley Paskov, Min Woo Sun, Jae-Yoon Jung, Catalin Voss, Nick Haber, Dennis P. Wall
The classifier achieved 66.9% balanced accuracy and 67.4% F1-score on the entirety of CAFE, as well as 79.1% balanced accuracy and 78.0% F1-score on CAFE Subset A, a subset containing at least 60% human agreement on emotion labels.
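The two metrics quoted above are standard; as a reference (these helper names are illustrative, not from the paper), balanced accuracy is the mean of per-class recalls and F1 is the harmonic mean of precision and recall:

```python
def balanced_accuracy(y_true, y_pred, classes):
    """Mean of per-class recalls; robust to class imbalance."""
    recalls = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(tp / total)
    return sum(recalls) / len(recalls)

def f1_score_binary(y_true, y_pred, positive):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Balanced accuracy matters here because emotion datasets like CAFE are class-imbalanced, so plain accuracy would overstate performance on majority classes.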
no code implementations • 10 Jan 2021 • Peter Washington, Onur Cezmi Mutlu, Emilie Leblanc, Aaron Kline, Cathy Hou, Brianna Chrisman, Nate Stockham, Kelley Paskov, Catalin Voss, Nick Haber, Dennis Wall
While the F1-score for a one-hot encoded classifier is much higher (94.33% vs. 78.68%) with respect to the ground truth CAFE labels, the output probability vector of the crowd-trained classifier more closely resembles the distribution of human labels (t=3.2827, p=0.0014).
no code implementations • 26 Jan 2022 • Peter Washington, Cezmi Onur Mutlu, Aaron Kline, Kelley Paskov, Nate Tyler Stockham, Brianna Chrisman, Nick Deveau, Mourya Surhabi, Nick Haber, Dennis P. Wall
Computer Vision (CV) classifiers which distinguish and detect nonverbal social human behavior and mental state can aid digital diagnostics and therapeutics for psychiatry and the behavioral sciences.
no code implementations • 23 Aug 2022 • Fan-Yun Sun, Isaac Kauvar, Ruohan Zhang, Jiachen Li, Mykel Kochenderfer, Jiajun Wu, Nick Haber
Modeling multi-agent systems requires understanding how agents interact.
no code implementations • 3 Apr 2023 • Fan-Yun Sun, Jonathan Tremblay, Valts Blukis, Kevin Lin, Danfei Xu, Boris Ivanovic, Peter Karkus, Stan Birchfield, Dieter Fox, Ruohan Zhang, Yunzhu Li, Jiajun Wu, Marco Pavone, Nick Haber
At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds.
no code implementations • 22 May 2023 • Julio Martinez, Felix Binder, Haoliang Wang, Nick Haber, Judith Fan, Daniel L. K. Yamins
Finally, linearly combining the adversarial model with the number of collisions in a scene leads to the greatest improvement in predictivity of human responses, suggesting humans are driven towards scenarios that result in high information gain and physical activity.
no code implementations • 22 May 2023 • Chris Doyle, Sarah Shader, Michelle Lau, Megumi Sano, Daniel L. K. Yamins, Nick Haber
We also find that learning a world model in the presence of an attentive caregiver helps the infant agent learn how to predict scenarios with challenging social and physical dynamics.
no code implementations • 11 Sep 2023 • Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, Noah D. Goodman
Because of the prohibitive cost of generation with state-of-the-art LLMs, we consider a middle step to filter the set of hypotheses that will be implemented into programs: we either ask the LLM to summarize into a smaller set of hypotheses, or ask human annotators to select a subset of the hypotheses.
no code implementations • 10 Oct 2023 • Eric Zelikman, Wanjing Anya Ma, Jasmine E. Tran, Diyi Yang, Jason D. Yeatman, Nick Haber
Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses.
no code implementations • 12 Oct 2023 • Karen D. Wang, Eric Burkholder, Carl Wieman, Shima Salehi, Nick Haber
The study explores the capabilities of OpenAI's ChatGPT in solving different types of physics problems.