Search Results for author: Tomer Ullman

Found 15 papers, 4 papers with code

Forking Paths in Neural Text Generation

no code implementations10 Dec 2024 Eric Bigelow, Ari Holtzman, Hidenori Tanaka, Tomer Ullman

Estimating uncertainty in Large Language Models (LLMs) is important for properly evaluating LLMs and ensuring safety for users.
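For context, a common baseline for estimating an LLM's uncertainty is to resample the same prompt and measure how spread out the answers are. The sketch below illustrates only that generic idea, not the forking-paths analysis proposed in the paper; `sample_fn` is a hypothetical wrapper around whatever LLM API you use.

```python
from collections import Counter
from math import log2

def answer_entropy(sample_fn, prompt, n_samples=20):
    """Entropy of the empirical distribution over sampled answers.

    sample_fn(prompt) -> str is a hypothetical LLM wrapper; higher entropy
    means the model's answers to the same prompt disagree more often.
    """
    counts = Counter(sample_fn(prompt) for _ in range(n_samples))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())
```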

Text Generation

The Illusion-Illusion: Vision Language Models See Illusions Where There are None

no code implementations7 Dec 2024 Tomer Ullman

Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models.

Philosophy

Relations, Negations, and Numbers: Looking for Logic in Generative Text-to-Image Models

1 code implementation26 Nov 2024 Colin Conwell, Rupert Tawiah-Quashie, Tomer Ullman

We asked human respondents (N=178 in total) to evaluate images generated by a state-of-the-art image-generating AI (DALL-E 3) prompted with these 'logical probes', and found that none reliably produced human agreement scores greater than 50%.

Text-to-Image Generation

One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity

no code implementations7 Nov 2024 Sonia K. Murthy, Tomer Ullman, Jennifer Hu

Researchers in social science and psychology have recently proposed using large language models (LLMs) as replacements for humans in behavioral research.

Diversity

MMToM-QA: Multimodal Theory of Mind Question Answering

1 code implementation16 Jan 2024 Chuanyang Jin, Yutong Wu, Jing Cao, Jiannan Xiang, Yen-Ling Kuo, Zhiting Hu, Tomer Ullman, Antonio Torralba, Joshua B. Tenenbaum, Tianmin Shu

To engineer multimodal ToM capacity, we propose a novel method, BIP-ALM (Bayesian Inverse Planning Accelerated by Language Models).
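The snippet below is only a toy illustration of the general idea that the name refers to, Bayesian inverse planning (inferring a goal from observed actions via Bayes' rule); it is not BIP-ALM itself, and `prior`, `likelihood`, and the other names are hypothetical placeholders.

```python
def goal_posterior(goals, prior, likelihood, observed_actions):
    """Toy Bayesian inverse planning: P(goal | actions) ∝ P(goal) * Π P(action | goal).

    prior[g] is P(goal = g); likelihood(a, g) is P(action = a | goal = g).
    """
    unnormalized = {}
    for g in goals:
        p = prior[g]
        for a in observed_actions:
            p *= likelihood(a, g)
        unnormalized[g] = p
    z = sum(unnormalized.values())
    return {g: p / z for g, p in unnormalized.items()}
```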

Question Answering Theory of Mind Modeling

Type theory in human-like learning and inference

no code implementations4 Oct 2022 Felix A. Sosa, Tomer Ullman

Humans can generate reasonable answers to novel queries (Schulz, 2012): if I asked you what kind of food you want to eat for lunch, you would respond with a food, not a time.
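As a toy rendering of that example, the expected type of the query (a food, not a time) can be read as a filter on which concepts count as reasonable answers. The lexicon and names below are made up for illustration and are not from the paper.

```python
# Toy illustration: the query's expected type filters candidate answers.
FOOD, TIME = "Food", "Time"
lexicon = {"pizza": FOOD, "salad": FOOD, "noon": TIME, "3 pm": TIME}

def reasonable_answers(expected_type):
    return [word for word, t in lexicon.items() if t == expected_type]

print(reasonable_answers(FOOD))  # ['pizza', 'salad'] -- a food, not a time
```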

Vocal Bursts Type Prediction

A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics

no code implementations NeurIPS 2021 Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Josh Tenenbaum, Charles Sutton

In this paper, we propose a Bayesian-symbolic framework (BSP) for physical reasoning and learning that approaches human-level sample efficiency and accuracy.

Bayesian Inference Bilevel Optimization +3

A Bayesian-Symbolic Approach to Learning and Reasoning for Intuitive Physics

no code implementations1 Jan 2021 Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Joshua B. Tenenbaum, Charles Sutton

As such, learning the laws reduces to symbolic regression, and Bayesian inference methods are used to obtain the distribution of unobserved properties.
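The sketch below conveys the flavor of that reduction with a tiny, made-up grammar of candidate force laws scored against observed data; it is not the BSP framework or its actual law grammar, and the candidate expressions are assumptions for illustration.

```python
import numpy as np

# Hypothetical candidate force laws; scoring them against observed
# (position, velocity, force) data stands in for the symbolic-regression step.
CANDIDATE_LAWS = {
    "spring":  lambda x, v: -2.0 * x,
    "drag":    lambda x, v: -0.5 * v,
    "gravity": lambda x, v: np.full_like(x, -9.8),
}

def best_law(x, v, f_observed):
    """Return the candidate law with the lowest mean squared error on the data."""
    errors = {name: float(np.mean((law(x, v) - f_observed) ** 2))
              for name, law in CANDIDATE_LAWS.items()}
    return min(errors, key=errors.get)
```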

Bayesian Inference Common Sense Reasoning +2

Temporal and Object Quantification Nets

no code implementations1 Jan 2021 Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer Ullman

We aim to learn generalizable representations for complex activities by quantifying over both entities and time, as in “the kicker is behind all the other players,” or “the player controls the ball until it moves toward the goal.” Such a structural inductive bias of object relations, object quantification, and temporal orders will enable the learned representation to generalize to situations with varying numbers of agents, objects, and time courses.
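The snippet below spells out only the logical reading of such a quantified statement over object trajectories ("in every frame, the kicker is behind every other player"); it is not the TOQ-Net architecture, and the frame format is a hypothetical placeholder.

```python
def behind(a, b):
    """Toy relation: object a is behind object b if it has a smaller x-coordinate."""
    return a[0] < b[0]

def kicker_behind_all(frames):
    """True if, in every frame, the kicker is behind every other player.

    frames: list of dicts like {"kicker": (x, y), "players": [(x, y), ...]}.
    """
    return all(
        all(behind(frame["kicker"], p) for p in frame["players"])
        for frame in frames
    )
```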

Event Detection Inductive Bias +1

Modeling Expectation Violation in Intuitive Physics with Coarse Probabilistic Object Representations

1 code implementation NeurIPS 2019 Kevin Smith, Lingjie Mei, Shunyu Yao, Jiajun Wu, Elizabeth Spelke, Josh Tenenbaum, Tomer Ullman

We also present a new test set for measuring violations of physical expectations, using a range of scenarios derived from developmental psychology.

Scene Understanding

A Compositional Object-Based Approach to Learning Physical Dynamics

1 code implementation1 Dec 2016 Michael B. Chang, Tomer Ullman, Antonio Torralba, Joshua B. Tenenbaum

By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.
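To make the "compositional, pairwise" structure concrete, here is a minimal PyTorch sketch in which each object's next state is predicted from the sum of learned pairwise effects with the other objects. It is a schematic of that design choice only, not the published NPE architecture or training code, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class PairwisePredictor(nn.Module):
    """Predict each object's next state from summed pairwise interaction effects."""

    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.pair_net = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.out_net = nn.Sequential(nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))

    def forward(self, states):  # states: (num_objects, state_dim), num_objects >= 2
        n = states.shape[0]
        effects = []
        for i in range(n):
            others = torch.cat([states[:i], states[i + 1:]])           # (n-1, state_dim)
            pairs = torch.cat([states[i].expand(n - 1, -1), others], dim=-1)
            effects.append(self.pair_net(pairs).sum(dim=0))            # aggregate pairwise effects
        return self.out_net(torch.cat([states, torch.stack(effects)], dim=-1))
```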

Object
