Search Results for author: Chad DeChant

Found 5 papers, 0 papers with code

Learning to Summarize and Answer Questions about a Virtual Robot's Past Actions

no code implementations16 Jun 2023 Chad DeChant, Iretiayo Akinola, Daniel Bauer

We therefore demonstrate the task of learning to summarize and answer questions about a robot agent's past actions using natural language alone.

Language Modelling, Large Language Model, +1

Summarizing a virtual robot's past actions in natural language

no code implementations13 Mar 2022 Chad DeChant, Daniel Bauer

We propose and demonstrate the task of giving natural language summaries of the actions of a robotic agent in a virtual environment.

Instruction Following

Automated Weed Detection in Aerial Imagery with Context

no code implementations1 Oct 2019 Delia Bullock, Andrew Mangeni, Tyr Wiesner-Hanks, Chad DeChant, Ethan L. Stewart, Nicholas Kaczmar, Judith M. Kolkman, Rebecca J. Nelson, Michael A. Gore, Hod Lipson

In this paper, we demonstrate the ability to discriminate between cultivated maize plants and grass or grass-like weed image segments using the context surrounding the image segments.

Object Detection

Predicting the accuracy of neural networks from final and intermediate layer outputs

no code implementations ICML 2019 Workshop Deep Phenomena Chad DeChant, Seungwook Han, Hod Lipson

We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers.

Shape Completion Enabled Robotic Grasping

no code implementations27 Sep 2016 Jacob Varley, Chad DeChant, Adam Richardson, Joaquín Ruales, Peter Allen

At runtime, a 2.5D point cloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object.
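A minimal sketch of the first stage of the pipeline described above: mapping a single-view point cloud into a binary occupancy grid of the kind a 3D shape-completion CNN could consume. The `voxelize` helper and the grid resolution are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def voxelize(points, resolution=8):
    """Map an Nx3 point cloud into a binary occupancy grid.

    In a shape-completion setup like the one described, a grid
    like this (resolution here is an arbitrary choice) would be
    fed to a 3D CNN that predicts occupancy for the occluded
    regions of the object.
    """
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    # Scale each axis into [0, resolution - 1]; guard degenerate axes.
    scale = (resolution - 1) / np.maximum(hi - lo, 1e-9)
    idx = ((points - lo) * scale).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# A toy "2.5D" single-view cloud: only the front face of a cube
# is visible, so every point sits at depth z = 0.
front = np.array([[x, y, 0.0]
                  for x in np.linspace(0, 1, 10)
                  for y in np.linspace(0, 1, 10)])
grid = voxelize(front, resolution=8)
```

All occupied voxels land in the z = 0 slice, mirroring how a single viewpoint leaves the back of the object empty until the completion network fills it in.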

Robotics
