Search Results for author: Tao Gao

Found 15 papers, 6 papers with code

LRRU: Long-short Range Recurrent Updating Networks for Depth Completion

no code implementations ICCV 2023 YuFei Wang, Bo Li, Ge Zhang, Qi Liu, Tao Gao, Yuchao Dai

Existing deep learning-based depth completion methods generally employ massive stacked layers to predict the dense depth map from sparse input data.

Depth Completion

Multi-dimension Queried and Interacting Network for Stereo Image Deraining

1 code implementation 19 Sep 2023 Yuanbo Wen, Tao Gao, ZiQi Li, Jing Zhang, Ting Chen

This module leverages dimension-wise queries that are independent of the input features and employs global context-aware attention (GCA) to capture essential features while avoiding the entanglement of redundant or irrelevant information.
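The snippet above describes cross-attention whose queries are learned parameters rather than projections of the input. A minimal NumPy sketch of that idea (the function name, shapes, and toy data are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_attention(features, queries):
    """Cross-attention with input-independent queries (hypothetical sketch).

    features: (n, d) flattened spatial features
    queries:  (k, d) k learnable queries, not derived from the input
    returns:  (k, d) k global-context descriptors
    """
    d = features.shape[-1]
    scores = queries @ features.T / np.sqrt(d)  # (k, n) similarity to every position
    weights = softmax(scores, axis=-1)          # attend over all spatial positions
    return weights @ features                   # (k, d) weighted aggregation

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 16))  # e.g. an 8x8 feature map with 16 channels
qs = rng.standard_normal((4, 16))      # 4 queries, fixed regardless of the input
out = global_context_attention(feats, qs)
print(out.shape)  # (4, 16)
```

Because the queries do not depend on the input, each one can specialize in a fixed kind of global context, which is one way to avoid entangling redundant input features.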

Rain Removal

Towards an Effective and Efficient Transformer for Rain-by-snow Weather Removal

1 code implementation 6 Apr 2023 Tao Gao, Yuanbo Wen, Kaihao Zhang, Peng Cheng, Ting Chen

Rain-by-snow weather removal is a specialized task in weather-degraded image restoration aiming to eliminate coexisting rain streaks and snow particles.

Image Restoration

From heavy rain removal to detail restoration: A faster and better network

1 code implementation 7 May 2022 Yuanbo Wen, Tao Gao, Jing Zhang, Kaihao Zhang, Ting Chen

This approach comprises two key modules, a rain streaks removal network (R$^2$Net) focusing on accurate rain removal, and a details reconstruction network (DRNet) designed to recover the textural details of rain-free images.
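The two-module decomposition above can be sketched as a simple sequential pipeline. The stage functions below are toy stand-ins (their logic is invented for illustration; only the coarse-removal-then-detail-restoration structure comes from the abstract):

```python
import numpy as np

def r2net(rainy):
    """Stand-in for the rain streaks removal network (hypothetical):
    suppresses bright streak-like outliers via a median clip."""
    return np.minimum(rainy, np.median(rainy))

def drnet(coarse):
    """Stand-in for the details reconstruction network (hypothetical):
    adds back a small contrast-enhancing residual."""
    return coarse + 0.1 * (coarse - coarse.mean())

def derain(rainy):
    coarse = r2net(rainy)  # stage 1: accurate rain removal
    return drnet(coarse)   # stage 2: recover textural details

rng = np.random.default_rng(1)
rainy = np.clip(rng.standard_normal((8, 8)) + 1.0, 0, None)  # toy "rainy" image
restored = derain(rainy)
```

Splitting the task this way lets each stage be trained and kept small for its own objective, which is consistent with the paper's "faster and better" framing.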

Rain Removal

Modeling human intention inference in continuous 3D domains by inverse planning and body kinematics

no code implementations 2 Dec 2021 Yingdong Qian, Marta Kryven, Tao Gao, Hanbyul Joo, Josh Tenenbaum

We describe the Generative Body Kinematics model, which predicts human intention inference in this domain using Bayesian inverse planning and inverse body kinematics.

YouRefIt: Embodied Reference Understanding with Language and Gesture

no code implementations ICCV 2021 Yixin Chen, Qing Li, Deqian Kong, Yik Lun Kei, Song-Chun Zhu, Tao Gao, Yixin Zhu, Siyuan Huang

To the best of our knowledge, this is the first embodied reference dataset that allows us to study referring expressions in daily physical scenes to understand referential behavior, human communication, and human-robot interaction.

Individual vs. Joint Perception: a Pragmatic Model of Pointing as Communicative Smithian Helping

no code implementations 3 Jun 2021 Kaiwen Jiang, Stephanie Stacy, Chuyu Wei, Adelpha Chan, Federico Rossano, Yixin Zhu, Tao Gao

We add another agent as a guide who can help only by pointing (or not pointing) at an observation already perceived by the hunter, without providing new observations or offering any instrumental help.

Modeling Communication to Coordinate Perspectives in Cooperation

no code implementations 3 Jun 2021 Stephanie Stacy, Chenfei Li, Minglu Zhao, Yiling Yun, Qingyi Zhao, Max Kleiman-Weiner, Tao Gao

We propose a computational account of overloaded signaling from a shared agency perspective which we call the Imagined We for Communication.

Learning Triadic Belief Dynamics in Nonverbal Communication from Videos

1 code implementation CVPR 2021 Lifeng Fan, Shuwen Qiu, Zilong Zheng, Tao Gao, Song-Chun Zhu, Yixin Zhu

By aggregating different beliefs and true world states, our model essentially forms "five minds" during the interactions between two agents.

Scene Understanding

Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs

no code implementations 25 Apr 2020 Tao Yuan, Hangxin Liu, Lifeng Fan, Zilong Zheng, Tao Gao, Yixin Zhu, Song-Chun Zhu

Aiming to understand how human (false-)belief--a core socio-cognitive ability--would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.

Object Tracking

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

no code implementations 20 Apr 2020 Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, Joshua B. Tenenbaum, Song-Chun Zhu

We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning.

Common Sense Reasoning
Small Data Image Classification

Measuring and modeling the perception of natural and unconstrained gaze in humans and machines

no code implementations 29 Nov 2016 Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman

How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?

When Computer Vision Gazes at Cognition

1 code implementation 8 Dec 2014 Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman

(1) Human accuracy in discriminating targets 8°-10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans significantly outperform the current model.

