Search Results for author: Tao Gao

Found 11 papers, 3 papers with code

From Heavy Rain Removal to Detail Restoration: A Faster and Better Network

1 code implementation • 7 May 2022 • Tao Gao, Yuanbo Wen, Jing Zhang, Kaihao Zhang, Ting Chen

First, a dilated dense residual block (DDRB) within the rain-streak removal network is presented to aggregate high- and low-level features of heavy rain.
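As a rough illustration of the component named in the abstract, the following is a minimal PyTorch sketch of a generic dilated dense residual block: dense connectivity between convolutions, increasing dilation rates, and a residual skip. The layer count, dilation rates, and channel widths are illustrative assumptions, not the authors' released implementation (the paper's code is linked from this entry).

```python
# Hypothetical sketch of a dilated dense residual block (DDRB); layer counts,
# dilation rates, and channel sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class DDRB(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            # Each conv sees the concatenation of all previous feature maps
            # (dense connectivity) and uses a growing dilation rate to enlarge
            # the receptive field while keeping the 3x3 kernel size.
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            ))
            in_ch += channels
        # 1x1 conv fuses the densely aggregated features back to `channels`.
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        # Residual connection: add the fused features back onto the input.
        return x + self.fuse(torch.cat(feats, dim=1))

# Example: a 64-channel feature map keeps its spatial shape through the block.
if __name__ == "__main__":
    block = DDRB(channels=64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```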

Rain Removal

Modeling human intention inference in continuous 3D domains by inverse planning and body kinematics

no code implementations • 2 Dec 2021 • Yingdong Qian, Marta Kryven, Tao Gao, Hanbyul Joo, Josh Tenenbaum

We describe the Generative Body Kinematics model, which predicts human intention inference in this domain using Bayesian inverse planning and inverse body kinematics.

Emergent Graphical Conventions in a Visual Communication Game

no code implementations • 28 Nov 2021 • Shuwen Qiu, Sirui Xie, Lifeng Fan, Tao Gao, Song-Chun Zhu, Yixin Zhu

While recent studies of emergent communication primarily focus on symbolic languages, their settings overlook the graphical sketches present in human communication and do not account for the evolutionary process through which symbolic sign systems emerge from the trade-off between iconicity and symbolicity.

YouRefIt: Embodied Reference Understanding with Language and Gesture

no code implementations • ICCV 2021 • Yixin Chen, Qing Li, Deqian Kong, Yik Lun Kei, Song-Chun Zhu, Tao Gao, Yixin Zhu, Siyuan Huang

To the best of our knowledge, this is the first embodied reference dataset that allows us to study referring expressions in daily physical scenes to understand referential behavior, human communication, and human-robot interaction.

Individual vs. Joint Perception: a Pragmatic Model of Pointing as Communicative Smithian Helping

no code implementations • 3 Jun 2021 • Kaiwen Jiang, Stephanie Stacy, Chuyu Wei, Adelpha Chan, Federico Rossano, Yixin Zhu, Tao Gao

We add another agent as a guide who can help only by deciding whether to mark, with a point, an observation already perceived by the hunter, without providing new observations or offering any instrumental help.

Modeling Communication to Coordinate Perspectives in Cooperation

no code implementations • 3 Jun 2021 • Stephanie Stacy, Chenfei Li, Minglu Zhao, Yiling Yun, Qingyi Zhao, Max Kleiman-Weiner, Tao Gao

We propose a computational account of overloaded signaling from a shared-agency perspective, which we call the Imagined We for Communication.

Learning Triadic Belief Dynamics in Nonverbal Communication from Videos

1 code implementation • CVPR 2021 • Lifeng Fan, Shuwen Qiu, Zilong Zheng, Tao Gao, Song-Chun Zhu, Yixin Zhu

By aggregating different beliefs and true world states, our model essentially forms "five minds" during the interactions between two agents.

Scene Understanding

Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs

no code implementations • 25 Apr 2020 • Tao Yuan, Hangxin Liu, Lifeng Fan, Zilong Zheng, Tao Gao, Yixin Zhu, Song-Chun Zhu

Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.

Object Tracking

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

no code implementations • 20 Apr 2020 • Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, Joshua B. Tenenbaum, Song-Chun Zhu

We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning.

Common Sense Reasoning • Small Data Image Classification

Measuring and modeling the perception of natural and unconstrained gaze in humans and machines

no code implementations • 29 Nov 2016 • Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman

How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes?

When Computer Vision Gazes at Cognition

1 code implementation • 8 Dec 2014 • Tao Gao, Daniel Harari, Joshua Tenenbaum, Shimon Ullman

(1) Human accuracy in discriminating targets 8°-10° of visual angle apart is around 40% in a free-looking gaze task; (2) the ability to interpret the gaze of different lookers varies dramatically; (3) this variance can be captured by the computational model; (4) humans significantly outperform the current model.
