Search Results for author: Hongjing Lu

Found 14 papers, 3 papers with code

Zero-shot visual reasoning through probabilistic analogical mapping

no code implementations 29 Sep 2022 Taylor W. Webb, Shuhao Fu, Trevor Bihl, Keith J. Holyoak, Hongjing Lu

Human reasoning is grounded in an ability to identify highly abstract commonalities governing superficially dissimilar visual inputs.

Visual Reasoning

CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models

1 code implementation 3 Sep 2021 Arjun R. Akula, Keze Wang, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, Sinisa Todorovic, Joyce Chai, Song-Chun Zhu

More concretely, our CX-ToM framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.

Explainable Artificial Intelligence (XAI)

Visual analogy: Deep learning versus compositional models

no code implementations 14 May 2021 Nicholas Ichien, Qing Liu, Shuhao Fu, Keith J. Holyoak, Alan Yuille, Hongjing Lu

We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) directly trained to solve these analogy problems, as well as to that of a compositional model that assesses relational similarity between part-based representations.

Relation Network Visual Analogies

Probabilistic Analogical Mapping with Semantic Relation Networks

no code implementations 30 Mar 2021 Hongjing Lu, Nicholas Ichien, Keith J. Holyoak

The human ability to flexibly reason using analogies with domain-general content depends on mechanisms for identifying relations between concepts, and for mapping concepts and their relations across analogs.

Graph Matching Retrieval +1

Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration

no code implementations 6 Mar 2021 Xiaofeng Gao, Luyao Yuan, Tianmin Shu, Hongjing Lu, Song-Chun Zhu

Our experiments with human participants demonstrate that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.

Motion Planning

Learning Perceptual Inference by Contrasting

1 code implementation NeurIPS 2019 Chi Zhang, Baoxiong Jia, Feng Gao, Yixin Zhu, Hongjing Lu, Song-Chun Zhu

"Thinking in pictures" [1], i.e., spatial-temporal reasoning, is effortless and instantaneous for humans, and is believed to be a significant ability for performing logical induction and a crucial factor in the intellectual history of technology development.

Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning

no code implementations 25 Nov 2019 Mark Edmonds, Xiaojian Ma, Siyuan Qi, Yixin Zhu, Hongjing Lu, Song-Chun Zhu

Given these general theories, the goal is to train an agent by interactively exploring the problem space to (i) discover, form, and transfer useful abstract and structural knowledge, and (ii) induce useful knowledge from the instance-level attributes observed in the environment.

Reinforcement Learning (RL) Transfer Learning

Functional form of motion priors in human motion perception

no code implementations NeurIPS 2010 Hongjing Lu, Tungyou Lin, Alan Lee, Luminita Vese, Alan L. Yuille

We then measured human performance on motion tasks and found a better fit for the L1-norm (Laplace) prior than for the L2-norm (Gaussian).

Motion Estimation
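The L1-versus-L2 comparison in the abstract amounts to asking which distribution, Laplace or Gaussian, assigns higher likelihood to the data under its maximum-likelihood parameters. The sketch below is a generic illustration of that model-comparison idea using closed-form MLEs; it is not the authors' actual fitting procedure, which concerns priors over velocity fields.

```python
import math

def laplace_loglik(xs):
    """Max log-likelihood of a Laplace (L1-norm) fit to the sample xs."""
    n = len(xs)
    loc = sorted(xs)[n // 2]                      # MLE location: the median
    b = sum(abs(x - loc) for x in xs) / n         # MLE scale
    return -n * math.log(2 * b) - sum(abs(x - loc) for x in xs) / b

def gauss_loglik(xs):
    """Max log-likelihood of a Gaussian (L2-norm) fit to the sample xs."""
    n = len(xs)
    mu = sum(xs) / n                              # MLE mean
    var = sum((x - mu) ** 2 for x in xs) / n      # MLE variance
    return -0.5 * n * math.log(2 * math.pi * var) - 0.5 * n
```

On a heavy-tailed sample (many near-zero values plus a few outliers), the Laplace fit yields the higher log-likelihood, mirroring the qualitative finding reported for human motion priors.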

A unified model of short-range and long-range motion perception

no code implementations NeurIPS 2010 Shuang Wu, Xuming He, Hongjing Lu, Alan L. Yuille

The human vision system is able to effortlessly perceive both short-range and long-range motion patterns in complex dynamic scenes.

The Noisy-Logical Distribution and its Application to Causal Inference

no code implementations NeurIPS 2007 Hongjing Lu, Alan L. Yuille

We describe a novel noisy-logical distribution for representing the distribution of a binary output variable conditioned on multiple binary input variables.

Causal Inference
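For context, the noisy-logical distribution generalizes the classic noisy-OR model, in which each active binary cause independently fails to produce the effect with some probability. A minimal noisy-OR sketch (an illustrative special case, not the paper's full construction; names are hypothetical):

```python
def noisy_or(weights, inputs):
    """P(Y=1 | inputs) under a noisy-OR model.

    weights[i]: probability that active cause i alone produces the effect.
    inputs[i]:  1 if cause i is present, else 0.
    """
    p_effect_absent = 1.0
    for w, x in zip(weights, inputs):
        if x:
            p_effect_absent *= (1.0 - w)   # cause i independently fails
    return 1.0 - p_effect_absent
```

For example, two present causes with strengths 0.5 each give P(Y=1) = 1 - 0.5 * 0.5 = 0.75.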
