Search Results for author: Hod Lipson

Found 39 papers, 12 papers with code

Towards End-to-End Structure Solutions from Information-Compromised Diffraction Data via Generative Deep Learning

1 code implementation · 23 Dec 2023 · Gabe Guo, Judah Goldfeder, Ling Lan, Aniv Ray, Albert Hanming Yang, Boyuan Chen, Simon JL Billinge, Hod Lipson

The revolution in materials in the past century was built on a knowledge of the atomic arrangements and the structure-property relationship.

Accelerating Meta-Learning by Sharing Gradients

no code implementations · 13 Dec 2023 · Oscar Chang, Hod Lipson

The success of gradient-based meta-learning is primarily attributed to its ability to leverage related tasks to learn task-invariant information.

Meta-Learning · Multi-Task Learning

Principled Weight Initialization for Hypernetworks

no code implementations · ICLR 2020 · Oscar Chang, Lampros Flokas, Hod Lipson

Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.

Multi-Task Learning

Balanced and Deterministic Weight-sharing Helps Network Performance

no code implementations · ICLR 2018 · Oscar Chang, Hod Lipson

We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.

Neural Network Compression
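The weight-sharing idea in this entry can be illustrated with a minimal sketch: a deterministic hash maps each virtual weight position to one of a small set of real parameters, so a large weight matrix is backed by far fewer values. The paper's Dirichlet and Neighborhood hashes are not reproduced here; a generic CRC32 bucket hash stands in as an assumption.

```python
import zlib
import random

def hashed_weight(buckets, i, j):
    """Deterministically map virtual weight position (i, j) to one of
    the shared real parameters in `buckets` (HashedNets-style sharing)."""
    key = f"{i},{j}".encode()
    return buckets[zlib.crc32(key) % len(buckets)]

random.seed(0)
buckets = [random.gauss(0.0, 0.1) for _ in range(8)]  # 8 real parameters

# A virtual 16x16 weight matrix (256 weights) backed by only 8 values.
W = [[hashed_weight(buckets, i, j) for j in range(16)] for i in range(16)]
```

Because the hash is deterministic, the same position always resolves to the same shared parameter; balance would additionally require each bucket to be hit roughly equally often, which is the property the paper's hash functions are designed around.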

Assessing SATNet's Ability to Solve the Symbol Grounding Problem

no code implementations · NeurIPS 2020 · Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger

We propose an MNIST based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.

Logical Reasoning

Teaching Robots to Build Simulations of Themselves

no code implementations · 20 Nov 2023 · Yuhang Hu, Jiong Lin, Hod Lipson

Simulation enables robots to plan and estimate the outcomes of prospective actions without the need to physically execute them.

Motion Planning · Self-Supervised Learning

Knolling Bot: Learning Robotic Object Arrangement from Tidy Demonstrations

no code implementations · 6 Oct 2023 · Yuhang Hu, Zhizhuo Zhang, Xinyue Zhu, Ruibo Liu, Philippe Wyder, Hod Lipson

Addressing the challenge of organizing scattered items in domestic spaces is complicated by the diversity and subjective nature of tidiness.

Position · Self-Supervised Learning

High-Degrees-of-Freedom Dynamic Neural Fields for Robot Self-Modeling and Motion Planning

no code implementations · 5 Oct 2023 · Lennart Schulze, Hod Lipson

A robot self-model is a task-agnostic representation of the robot's physical morphology that can be used for motion planning tasks in absence of classical geometric kinematic models.

Motion Planning

Direct Robot Configuration Space Construction using Convolutional Encoder-Decoders

1 code implementation · 10 Mar 2023 · Christopher Benka, Carl Gross, Riya Gupta, Hod Lipson

Real-time motion planning requires accurate and efficient construction of configuration spaces.

Motion Planning

On the Origins of Self-Modeling

no code implementations · 5 Sep 2022 · Robert Kwiatkowski, Yuhang Hu, Boyuan Chen, Hod Lipson

Self-Modeling is the process by which an agent, such as an animal or machine, learns to create a predictive model of its own dynamics.

Discovering State Variables Hidden in Experimental Data

1 code implementation · 20 Dec 2021 · Boyuan Chen, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, Hod Lipson

All physical laws are described as relationships between state variables that give a complete and non-redundant description of the relevant system dynamics.

Symbolic Regression

Visual design intuition: Predicting dynamic properties of beams from raw cross-section images

2 code implementations · 14 Nov 2021 · Philippe M. Wyder, Hod Lipson

In this work we aim to mimic the human ability to acquire the intuition to estimate the performance of a design from visual inspection and experience alone.

Full-Body Visual Self-Modeling of Robot Morphologies

1 code implementation · 11 Nov 2021 · Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson

Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions.

Motion Planning

Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models

no code implementations · 26 May 2021 · Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, Hod Lipson

At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans.

Camera Calibration · Self-Supervised Learning

The Boombox: Visual Reconstruction from Acoustic Vibrations

1 code implementation · 17 May 2021 · Boyuan Chen, Mia Chiquier, Hod Lipson, Carl Vondrick

Due to the many ways that robots use containers, we believe the box will have a number of applications in robotics.

Visual Perspective Taking for Opponent Behavior Modeling

no code implementations · 11 May 2021 · Boyuan Chen, Yuhang Hu, Robert Kwiatkowski, Shuran Song, Hod Lipson

We suggest that visual behavior modeling and perspective taking skills will play a critical role in the ability of physical robots to fully integrate into real-world multi-agent activities.

Beyond Categorical Label Representations for Image Classification

1 code implementation · ICLR 2021 · Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson

Our experiments reveal that high dimensional, high entropy labels achieve comparable accuracy to text (categorical) labels on the standard image classification task, but features learned through our label representations exhibit more robustness under various adversarial attacks and better effectiveness with a limited amount of training data.

Classification · General Classification · +1
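A minimal sketch of the label idea above: replace one-hot targets with fixed high-dimensional, high-entropy vectors, one per class. The paper's exact label construction is not given in this snippet, so random Gaussian codes are an assumption.

```python
import random

def make_label_codes(num_classes, dim, seed=0):
    """Assign each class a fixed high-dimensional target vector to
    regress onto, instead of a one-hot categorical label."""
    rng = random.Random(seed)
    return {c: [rng.gauss(0.0, 1.0) for _ in range(dim)]
            for c in range(num_classes)}

# e.g. 64-dimensional targets for a 10-class problem
codes = make_label_codes(num_classes=10, dim=64)
```

Seeding makes the codes reproducible across training and evaluation, which any such scheme needs so the same class always maps to the same target.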

Visual Hide and Seek

no code implementations · 15 Oct 2019 · Boyuan Chen, Shuran Song, Hod Lipson, Carl Vondrick

We train embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator.

Navigate

Zero Shot Learning on Simulated Robots

no code implementations · 4 Oct 2019 · Robert Kwiatkowski, Hod Lipson

Task transfer is achieved using a self-model that encapsulates the dynamics of a system and serves as an environment for reinforcement learning.

reinforcement-learning · Reinforcement Learning (RL) · +1

Automated Weed Detection in Aerial Imagery with Context

no code implementations · 1 Oct 2019 · Delia Bullock, Andrew Mangeni, Tyr Wiesner-Hanks, Chad DeChant, Ethan L. Stewart, Nicholas Kaczmar, Judith M. Kolkman, Rebecca J. Nelson, Michael A. Gore, Hod Lipson

In this paper, we demonstrate the ability to discriminate between cultivated maize plant segments and grass or grass-like weed image segments using the context surrounding the image segments.

object-detection · Object Detection

Ensemble Model Patching: A Parameter-Efficient Variational Bayesian Neural Network

no code implementations · 23 May 2019 · Oscar Chang, Yuling Yao, David Williams-King, Hod Lipson

Two main obstacles preventing the widespread adoption of variational Bayesian neural networks are the high parameter overhead that makes them infeasible on large networks, and the difficulty of implementation, which can be thought of as "programming overhead."

Predicting the accuracy of neural networks from final and intermediate layer outputs

no code implementations · ICML Workshop Deep_Phenomen 2019 · Chad DeChant, Seungwook Han, Hod Lipson

We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.

Seven Myths in Machine Learning Research

no code implementations · 18 Feb 2019 · Oscar Chang, Hod Lipson

We present seven myths commonly believed to be true in machine learning research, circa Feb 2019.

BIG-bench Machine Learning

Agent Embeddings: A Latent Representation for Pole-Balancing Networks

no code implementations · 12 Nov 2018 · Oscar Chang, Robert Kwiatkowski, Siyuan Chen, Hod Lipson

Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation.
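The interpolation described above is plain linear mixing in the latent space; a sketch follows, where the embedding dimensionality and the example values are assumptions (the real embeddings come from a learned encoder over network weights).

```python
def interpolate(z_good, z_bad, alpha):
    """Blend two agent embeddings: alpha=1 recovers the good agent,
    alpha=0 the bad one, and values in between should decode to
    networks of intermediate performance."""
    return [alpha * g + (1.0 - alpha) * b for g, b in zip(z_good, z_bad)]

z_good = [0.8, -0.2, 1.1]   # hypothetical embedding of a good agent
z_bad = [-0.5, 0.4, 0.0]    # hypothetical embedding of a bad agent
z_mid = interpolate(z_good, z_bad, 0.5)
```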

Neural Network Quine

1 code implementation · 15 Mar 2018 · Oscar Chang, Hod Lipson

We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters.

General Classification · Image Classification

Autostacker: A Compositional Evolutionary Learning System

no code implementations · 2 Mar 2018 · Boyuan Chen, Harvey Wu, Warren Mo, Ishanu Chattopadhyay, Hod Lipson

We introduce an automatic machine learning (AutoML) modeling architecture called Autostacker, which combines an innovative hierarchical stacking architecture and an Evolutionary Algorithm (EA) to perform efficient parameter search.

BIG-bench Machine Learning · Hyperparameter Optimization

Scalable Co-Optimization of Morphology and Control in Embodied Machines

no code implementations · 19 Jun 2017 · Nick Cheney, Josh Bongard, Vytas SunSpiral, Hod Lipson

In psychology, the theory of embodied cognition posits that behavior arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioral performance.

Convergent Learning: Do different neural networks learn the same representations?

1 code implementation · 24 Nov 2015 · Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, John Hopcroft

Recent success in training deep neural networks has prompted active investigation into the features learned on their intermediate layers.

Clustering

Understanding Neural Networks Through Deep Visualization

7 code implementations · 22 Jun 2015 · Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson

The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g., a live webcam stream).

Interpretable Machine Learning

How transferable are features in deep neural networks?

3 code implementations · NeurIPS 2014 · Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson

Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks.

Specificity

Computing Entropy Rate Of Symbol Sources & A Distribution-free Limit Theorem

no code implementations · 3 Jan 2014 · Ishanu Chattopadhyay, Hod Lipson

Entropy rate of sequential data-streams naturally quantifies the complexity of the generative process.
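For a stationary source, the entropy rate can be estimated from block entropies as h ≈ H_k − H_{k−1}; a plug-in sketch is below. This is only the textbook estimator, not the paper's method, whose distribution-free limit theorem goes well beyond it.

```python
from collections import Counter
from math import log2

def block_entropy(seq, k):
    """Empirical Shannon entropy (in bits) of length-k blocks of seq."""
    counts = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def entropy_rate_estimate(seq, k):
    """Plug-in estimate h ~ H_k - H_{k-1}: the conditional entropy of
    the next symbol given the previous k-1 symbols."""
    return block_entropy(seq, k) - block_entropy(seq, k - 1)

periodic = [0, 1] * 500  # deterministic alternation: rate near 0 bits/symbol
```

On the deterministic alternating stream the estimate is essentially zero, while an i.i.d. fair-coin stream would give close to 1 bit per symbol.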

Data Smashing

1 code implementation · 3 Jan 2014 · Ishanu Chattopadhyay, Hod Lipson

Here, we propose a universal solution to this problem: we delineate a principle for quantifying similarity between sources of arbitrary data streams, without a priori knowledge, features or training.

Hands-free Evolution of 3D-printable Objects via Eye Tracking

no code implementations · 17 Apr 2013 · Nick Cheney, Jeff Clune, Jason Yosinski, Hod Lipson

Interactive evolution has shown the potential to create amazing and complex forms in both 2-D and 3-D settings.

The evolutionary origins of modularity

no code implementations · 11 Jul 2012 · Jeff Clune, Jean-Baptiste Mouret, Hod Lipson

A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments).
