no code implementations • 28 Sep 2022 • Zhao Han, Emmanuel Senft, Muneeb I. Ahmad, Shelly Bagchi, Amir Yazdani, Jason R. Wilson, Boyoung Kim, Ruchen Wen, Justin W. Hart, Daniel Hernández García, Matteo Leonetti, Ross Mead, Reuth Mirsky, Ahalya Prabhakar, Megan L. Zimmerman
The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration on AI theory and methods aimed at HRI since 2014.
no code implementations • 5 Jun 2020 • Ian Abraham, Ahalya Prabhakar, Todd D. Murphey
We show that our method is able to maintain Lyapunov attractiveness with respect to the equilibrium task while actively generating data for learning tasks such as Bayesian optimization, model learning, and off-policy reinforcement learning.
Active Learning • Robotics
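The idea of safely generating data for a learning task can be illustrated with a toy active-learning loop. The sketch below is an assumption-laden illustration, not the paper's algorithm: it picks the next sample where a crude uncertainty proxy (distance to the nearest observed point) is largest, loosely in the spirit of a Bayesian-optimization acquisition step. The function `unknown_task` and the 1-D grid setup are hypothetical.

```python
# Illustrative sketch only, not the paper's method: a toy active-learning
# loop that samples where "uncertainty" (distance to the nearest observed
# point) is largest, a crude stand-in for a BO acquisition function.
import math

def unknown_task(x):
    # hypothetical stand-in for an expensive measurement
    return math.sin(3.0 * x) * x

def acquire(candidates, observed):
    # pick the candidate farthest from all previously observed inputs
    return max(candidates, key=lambda c: min(abs(c - x) for x, _ in observed))

def active_learn(n_steps=8):
    candidates = [i / 20.0 for i in range(21)]   # grid on [0, 1]
    observed = [(0.5, unknown_task(0.5))]        # seed sample
    for _ in range(n_steps):
        x = acquire(candidates, observed)
        observed.append((x, unknown_task(x)))
    return observed

data = active_learn()
```

A real system would replace the distance heuristic with a probabilistic surrogate (e.g. a Gaussian process) and fold in the stability constraint the abstract describes.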
no code implementations • 8 Feb 2019 • Ian Abraham, Ahalya Prabhakar, Todd D. Murphey
This paper develops a method that lets robots integrate stability into actively seeking out informative measurements through coverage.
Robotics
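One way to picture combining stability with information seeking is a control loop that blends an exploratory input with a stabilizing feedback term, reverting to pure stabilization when the state leaves a safe region. The sketch below is a minimal assumed illustration (single-integrator dynamics, placeholder exploratory signal), not the controller from the paper.

```python
# Minimal sketch under stated assumptions, not the paper's controller:
# blend an exploratory action with stabilizing feedback, and fall back
# to pure stabilization whenever the state leaves a safe region.
def stabilizing_action(x, gain=1.0):
    return -gain * x            # proportional pull toward the equilibrium at 0

def exploratory_action(t):
    return 0.5 if t % 2 == 0 else -0.5   # placeholder "informative" input

def safe_explore(x0=0.2, steps=20, bound=1.0, dt=0.1):
    x, traj = x0, []
    for t in range(steps):
        u = stabilizing_action(x)
        if abs(x) < bound:       # only explore while inside the safe set
            u += exploratory_action(t)
        x = x + dt * u           # single-integrator dynamics, assumed
        traj.append(x)
    return traj

traj = safe_explore()
```

The stabilizing term keeps the closed loop attracted to the equilibrium, so the exploratory perturbations stay bounded.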
no code implementations • 8 Sep 2017 • Ahalya Prabhakar, Anastasia Mavrommati, Jarvis Schultz, Todd Murphey
This paper addresses the problem of enabling a robot to represent and recreate visual information through physical motion, focusing on drawing using pens, brushes, or other tools.
Robotics
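A small piece of any drawing pipeline like the one described is converting visual input into an ordered motion plan. The sketch below is a hypothetical illustration of that sub-step only (nearest-neighbor chaining of dark pixels into pen waypoints); the paper's actual approach to representing visual information through motion is not reproduced here.

```python
# Hypothetical illustration, not the paper's method: turn a binary image
# into an ordered list of pen waypoints by nearest-neighbor chaining.
def image_to_waypoints(image):
    # collect dark pixels as (row, col) points
    points = [(r, c) for r, row in enumerate(image)
                     for c, v in enumerate(row) if v]
    if not points:
        return []
    path, rest = [points[0]], set(points[1:])
    while rest:
        last = path[-1]
        nxt = min(rest, key=lambda p: (p[0] - last[0])**2 + (p[1] - last[1])**2)
        rest.remove(nxt)
        path.append(nxt)
    return path

glyph = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 1]]
path = image_to_waypoints(glyph)
```

The resulting waypoint sequence could then be tracked by a robot arm holding a pen or brush.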
no code implementations • 5 Sep 2017 • Ian Abraham, Ahalya Prabhakar, Mitra J. Z. Hartmann, Todd D. Murphey
Current methods to estimate object shape, using either vision or touch, generally depend on high-resolution sensing.
Robotics