Search Results for author: Tingguang Li

Found 8 papers, 4 papers with code

VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI

1 code implementation • 15 Oct 2024 • Sijie Cheng, Kechen Fang, Yangyang Yu, Sicheng Zhou, Bohao Li, Ye Tian, Tingguang Li, Lei Han, Yang Liu

In conclusion, VidEgoThink reflects a research trend towards employing MLLMs for egocentric vision, akin to human capabilities, enabling active observation and interaction in complex real-world environments.

Question Answering · Video Question Answering +2

NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion Synthesis System

no code implementations • 27 Sep 2022 • Weiqiang Wang, Xuefei Zhe, Qiuhong Ke, Di Kang, Tingguang Li, Ruizhi Chen, Linchao Bao

Along with the novel system, we also present a new dataset dedicated to the multi-action motion synthesis task, which contains both action tags and their contextual information.

Motion Generation · Motion Synthesis +2

Learning to Solve a Rubik's Cube with a Dexterous Hand

1 code implementation • 26 Jul 2019 • Tingguang Li, Weitao Xi, Meng Fang, Jia Xu, Max Qing-Hu Meng

We present a learning-based approach to solving a Rubik's cube with a multi-fingered dexterous hand.

Robotics

Learning to Interrupt: A Hierarchical Deep Reinforcement Learning Framework for Efficient Exploration

no code implementations • 30 Jul 2018 • Tingguang Li, Jin Pan, Delong Zhu, Max Q.-H. Meng

Our architecture has two key components: options, represented by existing human-designed methods, which significantly speed up the training process; and an interruption mechanism, based on learnable termination functions, which enables our system to respond quickly to the external environment.
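The options-with-interruption pattern described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `Option`, `should_terminate`, and `run_hierarchy` names are invented here, and the learnable termination function is stubbed with a fixed probability.

```python
import random


class Option:
    """A hypothetical option: a fixed sub-policy paired with a termination
    function (stubbed here; in the paper it would be learnable)."""

    def __init__(self, name, policy, termination_prob):
        self.name = name
        self.policy = policy                      # state -> action
        self.termination_prob = termination_prob  # stand-in for learned beta(s)

    def should_terminate(self, state, rng):
        # Interruption point: a learned function would decide this from state.
        return rng.random() < self.termination_prob


def run_hierarchy(meta_policy, env_step, init_state, steps, seed=0):
    """The meta-policy picks an option; the option acts until its
    termination function fires, then control returns to the meta-policy."""
    rng = random.Random(seed)
    state = init_state
    current = meta_policy(state, rng)
    trace = []
    for _ in range(steps):
        action = current.policy(state)
        trace.append((current.name, action))
        state = env_step(state, action)
        if current.should_terminate(state, rng):  # interrupt this option
            current = meta_policy(state, rng)     # re-select at the top level
    return trace


# Toy usage: two hand-designed options over an integer state.
forward = Option("forward", lambda s: +1, 0.3)
back = Option("back", lambda s: -1, 0.3)
meta = lambda s, rng: rng.choice([forward, back])
trace = run_hierarchy(meta, lambda s, a: s + a, init_state=0, steps=10)
```

The design point is that termination is evaluated every step, so a poorly matched option can be interrupted immediately rather than running to completion.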

Deep Reinforcement Learning · Efficient Exploration +4
