no code implementations • 16 Jun 2023 • Chad DeChant, Iretiayo Akinola, Daniel Bauer
We demonstrate the task of learning to summarize and answer questions about a robot agent's past actions using natural language alone.
no code implementations • 13 Mar 2022 • Chad DeChant, Daniel Bauer
We propose and demonstrate the task of giving natural language summaries of the actions of a robotic agent in a virtual environment.
no code implementations • 1 Oct 2019 • Delia Bullock, Andrew Mangeni, Tyr Wiesner-Hanks, Chad DeChant, Ethan L. Stewart, Nicholas Kaczmar, Judith M. Kolkman, Rebecca J. Nelson, Michael A. Gore, Hod Lipson
In this paper, we demonstrate the ability to discriminate between image segments of cultivated maize plants and of grass or grass-like weeds using the context surrounding those segments.
no code implementations • ICML 2019 Workshop on Deep Learning Phenomena • Chad DeChant, Seungwook Han, Hod Lipson
We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.
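One common way to test whether such information is present in intermediate layers is to train a simple linear "probe" on the layer's activations to predict correctness. The sketch below illustrates that idea on fully synthetic data; the network, activations, and correctness labels are all made up for the example and are not the paper's models or datasets.

```python
# Hypothetical sketch: train a linear probe on synthetic "intermediate-layer
# activations" to predict whether the final output will be correct.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are intermediate-layer activations for 1000 inputs...
acts = rng.normal(size=(1000, 8))
# ...and pretend correctness is (noisily) encoded along one direction of
# that activation space, as the paper's finding would suggest.
will_be_correct = (acts[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(float)

# Plain logistic-regression probe, fit by gradient descent.
w = np.zeros(8)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))
    grad = acts.T @ (p - will_be_correct) / len(acts)
    w -= 0.5 * grad
    b -= 0.5 * float(np.mean(p - will_be_correct))

preds = (acts @ w + b > 0).astype(float)
accuracy = float(np.mean(preds == will_be_correct))
print(f"probe accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

If the probe's accuracy is well above chance, the activations carry usable information about the network's eventual correctness, which is the kind of signal the paper reports.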
no code implementations • 27 Sep 2016 • Jacob Varley, Chad DeChant, Adam Richardson, Joaquín Ruales, Peter Allen
At runtime, a 2.5D point cloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object.
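A typical preprocessing step for this kind of pipeline is to voxelize the single-view point cloud into a fixed-size occupancy grid, which a shape-completion network can then fill in. The sketch below shows only that voxelization step on a toy half-sphere; the grid size, coordinates, and "front half" construction are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: voxelize a single-view ("2.5D") point cloud into an
# occupancy grid of the kind a shape-completion CNN could take as input.
import numpy as np

def voxelize(points, grid=32):
    """Map points in [0, 1)^3 to a grid x grid x grid occupancy array."""
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    idx = np.clip((points * grid).astype(int), 0, grid - 1)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

# A toy object: points on a sphere, of which the camera sees only the near side.
rng = np.random.default_rng(1)
pts = rng.uniform(size=(5000, 3))
sphere = pts[np.linalg.norm(pts - 0.5, axis=1) < 0.4]
front_half = sphere[sphere[:, 2] < 0.5]  # single viewpoint: back half occluded

occupied = voxelize(front_half)
print("occupied voxels:", int(occupied.sum()))
# A completion network would take `occupied` and predict the occluded regions,
# after which grasps could be planned on the completed shape.
```

The fixed-size grid is what lets a convolutional network operate on point clouds of varying density and extent.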
Robotics