1 code implementation • 23 Dec 2023 • Gabe Guo, Judah Goldfeder, Ling Lan, Aniv Ray, Albert Hanming Yang, Boyuan Chen, Simon JL Billinge, Hod Lipson
The revolution in materials in the past century was built on knowledge of atomic arrangements and the structure-property relationship.
no code implementations • NeurIPS 2020 • Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger
We propose an MNIST based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.
no code implementations • 13 Dec 2023 • Oscar Chang, Hod Lipson
The success of gradient-based meta-learning is primarily attributed to its ability to leverage related tasks to learn task-invariant information.
no code implementations • ICLR 2020 • Oscar Chang, Lampros Flokas, Hod Lipson
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.
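The hypernetwork idea in the sentence above can be sketched in a few lines: a small generator maps an embedding to the weight matrix of a main network, so gradients can flow through both. This is a toy stand-in, not the paper's architecture; the generator here is a fixed linear map (`gen_matrix`), whereas in practice it would itself be a learned network.

```python
def generate_weights(embedding, gen_matrix, shape):
    """Map a small embedding to a main layer's weight matrix.

    Each generated entry is a linear function of the embedding, so the whole
    pipeline stays end-to-end differentiable. `gen_matrix` stands in for the
    hypernetwork's own (normally learned) parameters.
    """
    flat = [sum(e * g for e, g in zip(embedding, row)) for row in gen_matrix]
    rows, cols = shape
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

def main_forward(x, weights):
    """Main-network layer: one dot product per output column."""
    return [sum(xi * w for xi, w in zip(x, col)) for col in zip(*weights)]

embedding = [1.0, 2.0]                                   # hypothetical task embedding
gen = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]   # 4 outputs -> a 2x2 weight matrix
W = generate_weights(embedding, gen, shape=(2, 2))
y = main_forward([1.0, 1.0], W)
```

Changing only the embedding changes the generated weights, which is what makes schemes like the interpolation experiments later in this list possible.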
no code implementations • ICLR 2018 • Oscar Chang, Hod Lipson
We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.
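The balanced, deterministic weight-sharing described above can be illustrated with a generic hashed layer: a large virtual weight matrix is backed by a small bucket of real parameters, and a deterministic hash decides which bucket each virtual entry reads. This is a simplified sketch; the Dirichlet and Neighborhood hashes from the paper are replaced by an ordinary cryptographic hash, and the bucket values are arbitrary placeholders rather than trained weights.

```python
import hashlib

def bucket_index(i, j, num_buckets):
    """Deterministically map virtual weight (i, j) to a shared parameter bucket.

    Any deterministic, roughly uniform hash gives the balanced weight-sharing
    the sentence describes; this one is a stand-in for the paper's hashes.
    """
    digest = hashlib.sha256(f"{i},{j}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

class HashedLayer:
    """A virtual in_dim x out_dim weight matrix backed by few real parameters."""

    def __init__(self, in_dim, out_dim, num_buckets):
        self.in_dim, self.out_dim = in_dim, out_dim
        self.num_buckets = num_buckets
        self.params = [0.01 * k for k in range(num_buckets)]  # shared, untrained buckets

    def weight(self, i, j):
        return self.params[bucket_index(i, j, self.num_buckets)]

    def forward(self, x):
        return [sum(x[i] * self.weight(i, j) for i in range(self.in_dim))
                for j in range(self.out_dim)]

layer = HashedLayer(in_dim=4, out_dim=3, num_buckets=5)
y = layer.forward([1.0, 2.0, 3.0, 4.0])
```

Because the hash is deterministic, the same virtual entry always resolves to the same bucket, so the compression costs no extra bookkeeping at inference time.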
no code implementations • 20 Nov 2023 • Yuhang Hu, Jiong Lin, Hod Lipson
Simulation enables robots to plan and estimate the outcomes of prospective actions without the need to physically execute them.
no code implementations • 6 Oct 2023 • Yuhang Hu, Zhizhuo Zhang, Xinyue Zhu, Ruibo Liu, Philippe Wyder, Hod Lipson
Addressing the challenge of organizing scattered items in domestic spaces is complicated by the diversity and subjective nature of tidiness.
no code implementations • 5 Oct 2023 • Lennart Schulze, Hod Lipson
A robot self-model is a task-agnostic representation of the robot's physical morphology that can be used for motion planning tasks in the absence of a classical geometric kinematic model.
1 code implementation • 10 Mar 2023 • Christopher Benka, Carl Gross, Riya Gupta, Hod Lipson
Real-time motion planning requires accurate and efficient construction of configuration spaces.
no code implementations • 5 Sep 2022 • Robert Kwiatkowski, Yuhang Hu, Boyuan Chen, Hod Lipson
Self-modeling is the process by which an agent, such as an animal or machine, learns to create a predictive model of its own dynamics.
1 code implementation • 20 Dec 2021 • Boyuan Chen, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, Hod Lipson
All physical laws are described as relationships between state variables that give a complete and non-redundant description of the relevant system dynamics.
2 code implementations • 14 Nov 2021 • Philippe M. Wyder, Hod Lipson
In this work we aim to mimic the human ability to acquire the intuition to estimate the performance of a design from visual inspection and experience alone.
1 code implementation • 11 Nov 2021 • Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson
Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions.
no code implementations • 26 May 2021 • Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, Hod Lipson
At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.
1 code implementation • 17 May 2021 • Boyuan Chen, Mia Chiquier, Hod Lipson, Carl Vondrick
Due to the many ways that robots use containers, we believe the box will have a number of applications in robotics.
no code implementations • 11 May 2021 • Boyuan Chen, Yuhang Hu, Robert Kwiatkowski, Shuran Song, Hod Lipson
We suggest that visual behavior modeling and perspective taking skills will play a critical role in the ability of physical robots to fully integrate into real-world multi-agent activities.
1 code implementation • ICLR 2021 • Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson
Our experiments reveal that high dimensional, high entropy labels achieve comparable accuracy to text (categorical) labels on the standard image classification task, but features learned through our label representations exhibit more robustness under various adversarial attacks and better effectiveness with a limited amount of training data.
no code implementations • 15 Oct 2019 • Boyuan Chen, Shuran Song, Hod Lipson, Carl Vondrick
We train embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator.
no code implementations • 4 Oct 2019 • Robert Kwiatkowski, Hod Lipson
Task transfer is achieved using a self-model that encapsulates the dynamics of a system and serves as an environment for reinforcement learning.
no code implementations • 1 Oct 2019 • Delia Bullock, Andrew Mangeni, Tyr Wiesner-Hanks, Chad DeChant, Ethan L. Stewart, Nicholas Kaczmar, Judith M. Kolkman, Rebecca J. Nelson, Michael A. Gore, Hod Lipson
In this paper, we demonstrate the ability to discriminate between cultivated maize plant and grass or grass-like weed image segments using the context surrounding the image segments.
no code implementations • 23 May 2019 • Oscar Chang, Yuling Yao, David Williams-King, Hod Lipson
Two main obstacles preventing the widespread adoption of variational Bayesian neural networks are the high parameter overhead that makes them infeasible on large networks, and the difficulty of implementation, which can be thought of as "programming overhead."
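The "parameter overhead" mentioned above can be made concrete with a parameter count: a mean-field variational posterior keeps a mean and a variance for every weight and bias, which already doubles the parameter count of the deterministic network (richer posteriors cost more still). The layer sizes below are a hypothetical MNIST-scale example, not from the paper.

```python
def deterministic_param_count(layer_sizes):
    """Weights plus biases for a plain feed-forward network."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

def mean_field_param_count(layer_sizes):
    """Mean-field variational BNNs store (mean, variance) per parameter,
    so the count is exactly double the deterministic network's."""
    return 2 * deterministic_param_count(layer_sizes)

sizes = [784, 400, 10]  # hypothetical MNIST-scale classifier
plain = deterministic_param_count(sizes)
variational = mean_field_param_count(sizes)
```

On large networks this factor-of-two (or worse) overhead, on top of the extra sampling and KL-term code, is what the sentence calls infeasibility and "programming overhead."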
no code implementations • ICML Workshop Deep_Phenomen 2019 • Chad DeChant, Seungwook Han, Hod Lipson
We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.
no code implementations • 18 Feb 2019 • Oscar Chang, Hod Lipson
We present seven myths commonly believed to be true in machine learning research, circa Feb 2019.
no code implementations • 12 Nov 2018 • Oscar Chang, Robert Kwiatkowski, Siyuan Chen, Hod Lipson
Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation.
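The interpolation step itself is simple; the sketch below shows a plain linear interpolation between two latent embeddings, with the coefficient playing the tuning role described above. The embedding vectors are hypothetical placeholders, and the generator network that would turn an embedding into agent weights is omitted.

```python
def lerp(a, b, t):
    """Linearly interpolate two embedding vectors with coefficient t in [0, 1].

    t = 0 recovers the first embedding, t = 1 the second; intermediate t
    yields the in-between embeddings whose generated agents show
    intermediate performance in the experiment described above.
    """
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

good = [1.0, 0.0, 2.0]   # hypothetical latent embedding of a good agent
bad = [-1.0, 0.5, 0.0]   # hypothetical latent embedding of a bad agent

mid = lerp(good, bad, 0.5)
```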
1 code implementation • 15 Mar 2018 • Oscar Chang, Hod Lipson
We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters.
no code implementations • 9 Mar 2018 • Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, Jason Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them.
no code implementations • 2 Mar 2018 • Boyuan Chen, Harvey Wu, Warren Mo, Ishanu Chattopadhyay, Hod Lipson
We introduce an automatic machine learning (AutoML) modeling architecture called Autostacker, which combines an innovative hierarchical stacking architecture and an Evolutionary Algorithm (EA) to perform efficient parameter search.
no code implementations • ICLR 2018 • Boyuan Chen, Warren Mo, Ishanu Chattopadhyay, Hod Lipson
We significantly reduce the time of AutoML with a naturally inspired algorithm, Parallel Hill Climbing (PHC).
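The core loop of parallel hill climbing can be sketched on a toy objective: several candidates climb independently, each accepting a random mutation only when it improves. This is a minimal generic sketch on a 1-D function, not the AutoML-specific pipeline search from the paper.

```python
import random

def parallel_hill_climb(objective, init_pop, steps, seed=0):
    """Run several independent hill climbers in parallel (logically).

    Each candidate is mutated once per step and keeps the mutation only if
    the objective improves, so every climber moves monotonically uphill.
    """
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(steps):
        for k, x in enumerate(pop):
            candidate = x + rng.uniform(-0.5, 0.5)    # random local mutation
            if objective(candidate) > objective(x):   # accept only improvements
                pop[k] = candidate
    return max(pop, key=objective)

# Toy objective with a single peak at x = 3; two climbers start far from it.
best = parallel_hill_climb(lambda x: -(x - 3.0) ** 2, [0.0, 10.0], steps=200)
```

Running several climbers from different starting points is what gives the method its resistance to poor local starts at near-zero coordination cost.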
no code implementations • 19 Jun 2017 • Nick Cheney, Josh Bongard, Vytas SunSpiral, Hod Lipson
In psychology, the theory of embodied cognition posits that behavior arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioral performance.
1 code implementation • 24 Nov 2015 • Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, John Hopcroft
Recent success in training deep neural networks has prompted active investigation into the features learned on their intermediate layers.
7 code implementations • 22 Jun 2015 • Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson
The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g., a live webcam stream).
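The mechanism behind such a viewer is a forward pass that records every intermediate activation. The sketch below does this for a tiny pure-Python layer stack; it is a stand-in for hooking the layers of a real trained convnet, and the two layers are arbitrary toy functions.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def forward_with_taps(layers, x):
    """Run x through a stack of layer functions, recording each layer's output.

    Recording every intermediate activation during the forward pass is the
    core of a live activation viewer; a real tool would render these arrays
    as images per frame.
    """
    activations = []
    for layer in layers:
        x = layer(x)
        activations.append(list(x))
    return x, activations

# Two toy "layers": an affine shift with ReLU, then a summation.
layers = [
    lambda v: relu([v[0] - 1.0, v[1] + 2.0]),
    lambda v: [v[0] + v[1]],
]
out, acts = forward_with_taps(layers, [0.5, -1.0])
```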
3 code implementations • NeurIPS 2014 • Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson
Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks.
1 code implementation • 3 Jan 2014 • Ishanu Chattopadhyay, Hod Lipson
Here, we propose a universal solution to this problem: we delineate a principle for quantifying similarity between sources of arbitrary data streams, without a priori knowledge, features or training.
no code implementations • 3 Jan 2014 • Ishanu Chattopadhyay, Hod Lipson
Entropy rate of sequential data-streams naturally quantifies the complexity of the generative process.
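A simple way to see entropy rate quantifying generative complexity is the textbook plug-in estimate H(k) - H(k-1): the new information per symbol once length-(k-1) context is known. This is a generic estimator for illustration, not the paper's method; a perfectly periodic stream scores near zero, a fair-coin stream near one bit per symbol.

```python
import random
from collections import Counter
from math import log2

def block_entropy(seq, k):
    """Shannon entropy (bits) of the empirical distribution of length-k blocks."""
    blocks = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in counts.values())

def entropy_rate_estimate(seq, k):
    """Plug-in entropy-rate estimate: extra bits per symbol beyond (k-1)-context."""
    return block_entropy(seq, k) - block_entropy(seq, k - 1)

periodic = [0, 1] * 500                                   # fully predictable stream
rng = random.Random(0)
noisy = [rng.randint(0, 1) for _ in range(2000)]          # maximally unpredictable stream

periodic_rate = entropy_rate_estimate(periodic, k=2)      # near 0 bits/symbol
noisy_rate = entropy_rate_estimate(noisy, k=2)            # near 1 bit/symbol
```

The gap between the two rates is exactly the complexity ordering the sentence describes: low entropy rate means a simple, predictable generative process.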
no code implementations • 17 Apr 2013 • Nick Cheney, Jeff Clune, Jason Yosinski, Hod Lipson
Interactive evolution has shown the potential to create amazing and complex forms in both 2-D and 3-D settings.
no code implementations • 11 Jul 2012 • Jeff Clune, Jean-Baptiste Mouret, Hod Lipson
A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments).