Search Results for author: Ali Shafti

Found 13 papers, 0 papers with code

Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES

no code implementations16 Sep 2022 Nat Wannawas, Ali Shafti, A. Aldo Faisal

Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals.

Reinforcement Learning (RL)
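
No code implementation is listed for this paper. As a rough illustration of the control problem only (not the authors' method), the sketch below treats the FES stimulation intensity as the action of an RL agent acting on a toy one-muscle joint. The environment name, dynamics, and constants are all invented for clarity.

```python
# Illustrative sketch only: a toy one-muscle environment where an RL agent's
# action is the FES stimulation intensity. Dynamics are invented for clarity
# and do not reflect the paper's musculoskeletal models.
import numpy as np

class ToyMuscleEnv:
    def __init__(self, target_angle=0.8, dt=0.01):
        self.target = target_angle
        self.dt = dt
        self.reset()

    def reset(self):
        self.activation = 0.0   # muscle activation level in [0, 1]
        self.angle = 0.0        # joint angle (rad)
        self.velocity = 0.0
        return np.array([self.activation, self.angle, self.velocity])

    def step(self, stimulation):
        # First-order activation dynamics driven by the stimulation command.
        stimulation = float(np.clip(stimulation, 0.0, 1.0))
        self.activation += self.dt * (stimulation - self.activation) / 0.05
        # Simple joint: muscle torque vs. passive elasticity and damping.
        torque = 5.0 * self.activation - 2.0 * self.angle - 0.5 * self.velocity
        self.velocity += self.dt * torque
        self.angle += self.dt * self.velocity
        reward = -abs(self.angle - self.target)   # track the target joint angle
        state = np.array([self.activation, self.angle, self.velocity])
        return state, reward

env = ToyMuscleEnv()
state = env.reset()
for t in range(500):                       # random policy as a placeholder
    action = np.random.uniform(0.0, 1.0)   # an RL agent would choose this
    state, reward = env.step(action)
print("final joint angle:", round(state[1], 3))
```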

The role of haptic communication in dyadic collaborative object manipulation tasks

no code implementations2 Mar 2022 Yiming Liu, Raz Leib, William Dudley, Ali Shafti, A. Aldo Faisal, David W. Franklin

The task requires the two sides to coordinate with each other in real time to balance the ball at the target.
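
No code implementation is listed. The toy model below (my own simplification, not the study's apparatus) illustrates the kind of dyadic task described: two partners jointly drive a shared ball toward a target, with a virtual spring between their handles standing in for the haptic communication channel.

```python
# Toy sketch of a dyadic balancing task (not the study's apparatus): two
# partners each move a handle that contributes to the ball's motion, and a
# virtual spring between the handles plays the role of haptic communication.
import numpy as np

dt, target = 0.01, 0.5
ball_pos, ball_vel = 0.0, 0.0
handle = np.array([0.0, 0.0])   # each partner's handle position

for t in range(2000):
    # Each partner nudges their handle toward where they think the ball should go.
    intent = target - ball_pos
    handle += dt * (intent + 0.1 * np.random.randn(2))
    # Haptic coupling: a spring pulls the two handles together.
    coupling = 5.0 * (handle[::-1] - handle)
    handle += dt * coupling
    # The ball is driven by the partners' combined handle positions.
    force = handle.sum() - 1.0 * ball_pos - 0.8 * ball_vel
    ball_vel += dt * force
    ball_pos += dt * ball_vel

print("ball position vs. target:", round(ball_pos, 3), "vs.", target)
```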

The Response Shift Paradigm to Quantify Human Trust in AI Recommendations

no code implementations16 Feb 2022 Ali Shafti, Victoria Derks, Hannah Kay, A. Aldo Faisal

Explainability, interpretability and how much they affect human trust in AI systems are ultimately problems of human cognition as much as machine learning, yet the effectiveness of AI recommendations and the trust afforded by end-users are typically not evaluated quantitatively.

Explainable Artificial Intelligence (XAI)
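
No code implementation is listed. One simple, hypothetical way to quantify a "response shift" (not necessarily the paper's exact measure) is the fraction of trials in which a participant's answer moves toward the AI recommendation after seeing it; the sketch below is my own simplification of that idea.

```python
# Hypothetical illustration of a "response shift" style measure: the share of
# trials where a participant's answer moves toward the AI recommendation after
# it is shown. This is my own simplification, not the paper's exact protocol.
def response_shift_rate(before, after, recommendation):
    shifts = 0
    for b, a, r in zip(before, after, recommendation):
        if abs(a - r) < abs(b - r):   # answer moved closer to the AI's suggestion
            shifts += 1
    return shifts / len(before)

# Toy data: ratings on a 1-10 scale before/after seeing the AI recommendation.
before = [3, 7, 5, 2, 8]
after  = [5, 7, 6, 2, 6]
rec    = [6, 9, 8, 1, 5]
print("shift rate:", response_shift_rate(before, after, rec))  # 0.6
```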

MIDAS: Deep learning human action intention prediction from natural eye movement patterns

no code implementations22 Jan 2022 Paul Festor, Ali Shafti, Alex Harston, Michey Li, Pavel Orlov, A. Aldo Faisal

Our evaluation shows that intention prediction is not a naive result of the data, but rather relies on non-linear temporal processing of gaze cues.

Time Series Analysis, Time Series Classification
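
No code implementation is listed. As a minimal sketch of non-linear temporal processing of gaze cues (architecture and features are my assumptions, not the MIDAS model), an LSTM can classify short gaze sequences into action intentions:

```python
# Minimal sketch (assumptions mine, not the MIDAS architecture): an LSTM that
# classifies a short sequence of gaze samples (x, y, pupil size) into one of a
# few action intentions, illustrating non-linear temporal processing of gaze.
import torch
import torch.nn as nn

class GazeIntentionNet(nn.Module):
    def __init__(self, n_features=3, hidden=32, n_intentions=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_intentions)

    def forward(self, gaze_seq):           # gaze_seq: (batch, time, features)
        _, (h_n, _) = self.lstm(gaze_seq)  # keep the final hidden state
        return self.head(h_n[-1])          # logits over intention classes

model = GazeIntentionNet()
dummy_gaze = torch.randn(8, 120, 3)        # 8 sequences of 120 gaze samples
logits = model(dummy_gaze)
print(logits.shape)                        # torch.Size([8, 4])
```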

I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs through Functional Electrical Stimulation

no code implementations9 Mar 2021 Nat Wannawas, Ali Shafti, A. Aldo Faisal

However, an open challenge remains in restoring motor abilities to human limbs through FES, as it is still unclear how to control the stimulation.

Reinforcement Learning (RL)

Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform

no code implementations4 Mar 2021 Mahendran Subramanian, Suhyung Park, Pavel Orlov, Ali Shafti, A. Aldo Faisal

We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms by decoding how the user looks at the environment to understand where they want to navigate their mobility device.

Motor Imagery, Navigate +1
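
No code implementation is listed. The toy function below is my own simplification of the "where you look is where you go" idea, not the wheelchair platform's decoder: it maps a gaze fixation in the camera image to a forward speed and turn rate.

```python
# Toy illustration of "where you look is where you go" (my simplification, not
# the wheelchair platform's decoder): map a gaze fixation in the camera image
# to forward speed and turn rate for a mobility platform.
def gaze_to_command(fix_x, fix_y, img_w=640, img_h=480,
                    max_speed=0.8, max_turn=1.0):
    # Horizontal offset from the image centre steers the platform.
    turn = max_turn * (fix_x - img_w / 2) / (img_w / 2)
    # Fixations higher in the image (further ahead) drive forward faster.
    speed = max_speed * max(0.0, 1.0 - fix_y / img_h)
    return speed, turn

print(gaze_to_command(480, 120))  # look right and far ahead -> (0.6, 0.5)
```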

Non-invasive Cognitive-level Human Interfacing for the Robotic Restoration of Reaching & Grasping

no code implementations25 Feb 2021 Ali Shafti, A. Aldo Faisal

We combine wearable eye tracking, the visual context of the environment and the structural grammar of human actions to create a cognitive-level assistive robotic setup that enables users to carry out activities of daily living while preserving interpretability and the user's agency.
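
No code implementation is listed. As a toy illustration of a structural action grammar (object names and rules are hypothetical, not the paper's grammar), the gazed-at object plus the current state can constrain the robot's next sub-action:

```python
# Toy sketch of a structural action grammar for reach-and-grasp assistance
# (object names and rules are hypothetical, not the paper's grammar): the
# gazed-at object plus the current state constrains the next robot sub-action.
ACTION_GRAMMAR = {
    ("idle", "cup"):   "reach",
    ("reach", "cup"):  "grasp",
    ("grasp", "cup"):  "lift",
    ("lift", "mouth"): "bring_to_mouth",
}

def next_action(state, gazed_object):
    # Unknown combinations fall back to waiting, keeping the user in charge.
    return ACTION_GRAMMAR.get((state, gazed_object), "wait")

state = "idle"
for gaze in ["cup", "cup", "cup", "mouth"]:
    action = next_action(state, gaze)
    print(state, "+ gaze on", gaze, "->", action)
    if action != "wait":
        state = action
```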

Real-World Human-Robot Collaborative Reinforcement Learning

no code implementations2 Mar 2020 Ali Shafti, Jonas Tjomsland, William Dudley, A. Aldo Faisal

We then use this setup to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game.

Reinforcement Learning (RL)
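
No code implementation is listed. The sketch below is a minimal stand-in for human-agent co-learning on a toy task, not the paper's setup: a scripted "human" and a tabular Q-learning agent jointly push a shared point toward a goal.

```python
# Minimal sketch of human-agent collaborative RL on a toy task (not the paper's
# setup): both partners push a shared point toward a goal; the "human" here is
# a scripted stand-in, and the agent learns a tiny tabular Q function.
import random

GOAL, ACTIONS = 5, [-1, 0, 1]
Q = {}                                   # (state, action) -> value

def agent_action(state, eps=0.2):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

for episode in range(200):
    pos = 0
    for step in range(20):
        human = 1 if pos < GOAL else 0        # scripted human partner
        agent = agent_action(pos)
        new_pos = max(0, min(10, pos + human + agent))
        reward = 1.0 if new_pos == GOAL else -0.1
        best_next = max(Q.get((new_pos, a), 0.0) for a in ACTIONS)
        Q[(pos, agent)] = Q.get((pos, agent), 0.0) + \
            0.1 * (reward + 0.9 * best_next - Q.get((pos, agent), 0.0))
        pos = new_pos

print("learned action at position 3:", agent_action(3, eps=0.0))
```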

Human-Robot Collaboration via Deep Reinforcement Learning of Real-World Interactions

no code implementations2 Dec 2019 Jonas Tjomsland, Ali Shafti, A. Aldo Faisal

We present a robotic setup for real-world testing and evaluation of human-robot and human-human collaborative learning.

Reinforcement Learning (RL)

FastOrient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping

no code implementations22 Jul 2018 Mireia Ruiz Maymo, Ali Shafti, A. Aldo Faisal

Here we demonstrate the off-loading of low-level control in assistive robotics and active orthotics through automatic end-effector orientation control for grasping.

Robotic Grasping
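
No code implementation is listed. One common lightweight way to obtain an object's in-plane orientation from a binary mask is PCA on the foreground pixel coordinates; the sketch below uses that generic approach (not necessarily the FastOrient algorithm), with the principal-axis angle available to drive a wrist rotation.

```python
# Lightweight orientation estimate from a binary object mask using PCA on the
# foreground pixel coordinates (a generic approach, not necessarily the
# FastOrient algorithm): the principal-axis angle can drive a wrist rotation.
import numpy as np

def object_orientation(mask):
    ys, xs = np.nonzero(mask)                    # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # centre the point cloud
    cov = np.cov(pts.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]       # principal (longest) axis
    if major[0] < 0:                             # fix PCA's sign ambiguity
        major = -major
    return np.degrees(np.arctan2(major[1], major[0]))

# Synthetic elongated object tilted in the image plane.
mask = np.zeros((100, 100), dtype=np.uint8)
for i in range(60):
    mask[20 + i // 2, 20 + i] = 1                # roughly a 26-degree line
print("estimated orientation (deg):", round(object_orientation(mask), 1))
```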
