2 code implementations • 5 Apr 2024 • Zifu Wan, Yuhao Wang, Silong Yong, Pingping Zhang, Simon Stepputtis, Katia Sycara, Yaqi Xie
In this work, we introduce Sigma, a Siamese Mamba network for multi-modal semantic segmentation, utilizing the Selective Structured State Space Model, Mamba.
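The selective structured state-space model underlying this entry can be sketched as a simple discretized recurrence in which the input projection, output projection, and step size all depend on the current input (the "selective" part). The sketch below is a minimal single-channel, diagonal-state illustration of that idea, not the paper's implementation; all names (`W_B`, `W_C`, `W_dt`) are illustrative.

```python
import numpy as np

def selective_ssm_scan(x, A, W_B, W_C, W_dt):
    """Minimal selective state-space recurrence for one channel.

    x    : (T,) input sequence
    A    : (N,) diagonal state matrix (negative entries for stability)
    W_B, W_C : (N,) projections making B and C input-dependent
    W_dt : scalar projection making the step size input-dependent
    """
    T, N = x.shape[0], A.shape[0]
    h = np.zeros(N)
    y = np.empty(T)
    for t in range(T):
        dt = np.log1p(np.exp(W_dt * x[t]))   # softplus keeps the step positive
        B = W_B * x[t]                       # input-dependent input projection
        C = W_C * x[t]                       # input-dependent output projection
        A_bar = np.exp(dt * A)               # zero-order-hold discretization
        B_bar = (A_bar - 1.0) / A * B        # matching discretized input matrix
        h = A_bar * h + B_bar * x[t]         # state update (elementwise: A is diagonal)
        y[t] = C @ h                         # readout
    return y
```

In practice such scans run over many channels in parallel with a hardware-aware kernel; the loop above only shows the per-step arithmetic.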
no code implementations • 26 Mar 2024 • Samuel Li, Sarthak Bhagat, Joseph Campbell, Yaqi Xie, Woojun Kim, Katia Sycara, Simon Stepputtis
Task-oriented grasping of unfamiliar objects is a necessary skill for robots in dynamic in-home environments.
1 code implementation • 19 Mar 2024 • Ce Zhang, Simon Stepputtis, Katia Sycara, Yaqi Xie
Recently, large-scale pre-trained Vision-Language Models (VLMs) have demonstrated great potential in learning open-world visual representations, and exhibit remarkable performance across a wide range of downstream tasks through efficient fine-tuning.
1 code implementation • 18 Mar 2024 • Ce Zhang, Simon Stepputtis, Joseph Campbell, Katia Sycara, Yaqi Xie
Being able to understand visual scenes is a precursor for many downstream tasks, including autonomous driving, robotics, and other vision-based approaches.
no code implementations • 30 Nov 2023 • Renos Zabounidis, Ini Oguntola, Konghao Zhao, Joseph Campbell, Simon Stepputtis, Katia Sycara
Concept bottleneck models (CBMs) are interpretable models that first predict a set of semantically meaningful features, i.e., concepts, from observations; these concepts are then used to condition a downstream task.
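The two-stage structure described above can be sketched in a few lines: one model maps observations to interpretable concept scores, and a second model sees only those scores. This is a toy sketch of the general CBM pattern with randomly initialized linear stages; the shapes and names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_concepts(obs, W_c):
    # Stage 1: observation -> concept scores, each squashed into [0, 1]
    return 1.0 / (1.0 + np.exp(-obs @ W_c))

def predict_label(concepts, W_y):
    # Stage 2: the downstream task sees ONLY the concept bottleneck
    return int(np.argmax(concepts @ W_y))

obs = rng.normal(size=4)          # one 4-dimensional observation
W_c = rng.normal(size=(4, 3))     # 3 interpretable concepts
W_y = rng.normal(size=(3, 2))     # 2 output classes
concepts = predict_concepts(obs, W_c)
label = predict_label(concepts, W_y)
```

Because the label depends on the observation only through `concepts`, a human can inspect (or intervene on) the bottleneck to understand or correct a prediction.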
no code implementations • 29 Nov 2023 • Xijia Zhang, Yue Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
Intelligent agents such as robots are increasingly deployed in real-world, safety-critical settings.
no code implementations • 9 Nov 2023 • Simon Stepputtis, Joseph Campbell, Yaqi Xie, Zhengyang Qi, Wenxin Sharon Zhang, Ruiyi Wang, Sanketh Rangreji, Michael Lewis, Katia Sycara
We discuss the ability of LLMs to leverage deceptive, long-horizon conversations between six human players in order to determine each player's goal and motivation.
no code implementations • 16 Oct 2023 • Huao Li, Yu Quan Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Michael Lewis, Katia Sycara
While Large Language Models (LLMs) have demonstrated impressive accomplishments in both reasoning and planning, their abilities in multi-agent collaboration remain largely unexplored.
no code implementations • 19 Sep 2023 • Xijia Zhang, Yue Guo, Simon Stepputtis, Katia Sycara, Joseph Campbell
Intelligent agents such as robots are increasingly deployed in real-world, safety-critical settings.
no code implementations • 12 Sep 2023 • Sarthak Bhagat, Simon Stepputtis, Joseph Campbell, Katia Sycara
This work focuses on anticipating long-term human actions from short video segments, which can speed up editing workflows through improved suggestions while fostering creativity by proposing narratives.
no code implementations • 3 Jul 2023 • Ini Oguntola, Joseph Campbell, Simon Stepputtis, Katia Sycara
The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings.
no code implementations • 21 Jun 2023 • Joseph Campbell, Yue Guo, Fiona Xie, Simon Stepputtis, Katia Sycara
Transfer learning can be applied in deep reinforcement learning to accelerate the training of a policy in a target task by transferring knowledge from a policy learned in a related source task.
1 code implementation • 15 Jun 2023 • Sarthak Bhagat, Simon Stepputtis, Joseph Campbell, Katia Sycara
Despite the advances made in visual object recognition, state-of-the-art deep learning models struggle to effectively recognize novel objects in a few-shot setting where only a limited number of examples are provided.
no code implementations • 23 Feb 2023 • Renos Zabounidis, Joseph Campbell, Simon Stepputtis, Dana Hughes, Katia Sycara
Multi-agent robotic systems are increasingly operating in real-world environments in close proximity to humans, yet are largely controlled by policy models with inscrutable deep neural network representations.
1 code implementation • 15 Nov 2022 • Yue Guo, Joseph Campbell, Simon Stepputtis, Ruiyu Li, Dana Hughes, Fei Fang, Katia Sycara
This allows the student to self-reflect on what it has learned, enabling advice generalization and leading to improved sample efficiency and learning performance - even in environments where the teacher is sub-optimal.
1 code implementation • NeurIPS 2020 • Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, Heni Ben Amor
Imitation learning is a popular approach for teaching motor skills to robots.
no code implementations • 26 Nov 2019 • Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn is used to synthesize specific motion controllers at run-time.
no code implementations • 15 Nov 2019 • Kevin Sebastian Luck, Mel Vecerik, Simon Stepputtis, Heni Ben Amor, Jonathan Scholz
This work evaluates the use of model-based trajectory optimization methods used for exploration in Deep Deterministic Policy Gradient when trained on a latent image embedding.
no code implementations • 25 Sep 2019 • Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Chitta Baral, Heni Ben Amor
In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specific motion controllers at run-time.
no code implementations • 15 Aug 2019 • Joseph Campbell, Arne Hitzmann, Simon Stepputtis, Shuhei Ikemoto, Koh Hosoda, Heni Ben Amor
Musculoskeletal robots that are based on pneumatic actuation have a variety of properties, such as compliance and back-drivability, that render them particularly appealing for human-robot collaboration.
2 code implementations • 14 Aug 2019 • Joseph Campbell, Simon Stepputtis, Heni Ben Amor
Human-robot interaction benefits greatly from multimodal sensor inputs as they enable increased robustness and generalization accuracy.