Search Results for author: Kexin Yi

Found 6 papers, 3 papers with code

ComPhy: Compositional Physical Reasoning of Objects and Events from Videos

no code implementations · ICLR 2022 · Zhenfang Chen, Kexin Yi, Yunzhu Li, Mingyu Ding, Antonio Torralba, Joshua B. Tenenbaum, Chuang Gan

In this paper, we take an initial step to highlight the importance of inferring the hidden physical properties not directly observable from visual appearances, by introducing the Compositional Physical Reasoning (ComPhy) dataset.

Immersive Text Game and Personality Classification

no code implementations · 20 Mar 2022 · Wanshui Li, Yifan Bai, Jiaxuan Lu, Kexin Yi

We designed and built a game called "Immersive Text Game", which allows the player to choose a story and a character, and interact with other characters in the story through immersive dialogue.

Classification · Language Modelling +1

Visual Grounding of Learned Physical Models

1 code implementation · ICML 2020 · Yunzhu Li, Toru Lin, Kexin Yi, Daniel M. Bear, Daniel L. K. Yamins, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba

The abilities to perform physical reasoning and to adapt to new environments, while intrinsic to humans, remain challenging to state-of-the-art computational models.

Visual Grounding

CLEVRER: CoLlision Events for Video REpresentation and Reasoning

3 code implementations · ICLR 2020 · Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum

While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations.

Counterfactual · Descriptive +1

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding

2 code implementations · NeurIPS 2018 · Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, Joshua B. Tenenbaum

Second, the model is more data- and memory-efficient: it performs well after learning on a small amount of training data; it can also encode an image into a compact representation, requiring less storage than existing methods for offline question answering.

Question Answering · Representation Learning +1

Roll-back Hamiltonian Monte Carlo

no code implementations · 8 Sep 2017 · Kexin Yi, Finale Doshi-Velez

We propose a new framework for Hamiltonian Monte Carlo (HMC) on truncated probability distributions with smooth underlying density functions.

Bayesian Inference
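For context on the setting this paper addresses, here is a minimal, generic HMC sketch on a standard normal truncated to x >= 0, where out-of-support proposals are simply rejected via a -inf log density. This is the standard baseline behavior that the paper improves upon; it does not implement the paper's roll-back mechanism, and all names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Generic HMC on N(0, 1) truncated to x >= 0. Proposals that land
# outside the support get log density -inf and are rejected in the
# Metropolis step -- the baseline treatment, NOT the roll-back method.

def log_density(x):
    # Unnormalized log density of N(0, 1) restricted to x >= 0.
    return -0.5 * x ** 2 if x >= 0 else -np.inf

def grad_log_density(x):
    # Gradient of the smooth underlying (untruncated) log density.
    return -x

def hmc_step(x, rng, step_size=0.1, n_leapfrog=20):
    p = rng.normal()  # resample momentum
    x_new, p_new = x, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_density(x_new)
    for i in range(n_leapfrog):
        x_new += step_size * p_new
        if i < n_leapfrog - 1:
            p_new += step_size * grad_log_density(x_new)
    p_new += 0.5 * step_size * grad_log_density(x_new)
    # Metropolis correction; out-of-support proposals are rejected.
    log_accept = (log_density(x_new) - 0.5 * p_new ** 2) \
               - (log_density(x) - 0.5 * p ** 2)
    return x_new if np.log(rng.uniform()) < log_accept else x

rng = np.random.default_rng(0)
x, samples = 1.0, []
for _ in range(2000):
    x = hmc_step(x, rng)
    samples.append(x)
# samples approximate the half-normal; every draw stays >= 0
```

Rejecting out-of-support proposals wastes the whole trajectory whenever the integrator crosses the truncation boundary, which is the inefficiency a boundary-aware scheme targets.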
