no code implementations • 17 Mar 2024 • Lance Ying, Kunal Jha, Shivam Aarya, Joshua B. Tenenbaum, Antonio Torralba, Tianmin Shu
GOMA formulates verbal communication as a planning problem that minimizes misalignment between the goal-relevant parts of the agents' mental states.
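A minimal sketch of this objective as we read it from the abstract: among candidate utterances, pick the one that most reduces disagreement on the goal-relevant belief variables. The toy world, the belief dictionaries, and every function name below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of goal-relevant mental alignment (not GOMA's code).

def misalignment(speaker_beliefs, listener_beliefs, relevant_vars):
    """Count goal-relevant variables on which the agents' beliefs differ."""
    return sum(speaker_beliefs[v] != listener_beliefs[v] for v in relevant_vars)

def listener_after(utterance, listener_beliefs):
    """Assume the listener simply adopts whatever the utterance asserts."""
    updated = dict(listener_beliefs)
    updated.update(utterance)
    return updated

def best_utterance(candidates, speaker_beliefs, listener_beliefs, relevant_vars):
    """Pick the utterance minimizing post-update misalignment,
    breaking ties toward shorter (cheaper) utterances."""
    return min(
        candidates,
        key=lambda u: (
            misalignment(speaker_beliefs,
                         listener_after(u, listener_beliefs),
                         relevant_vars),
            len(u),
        ),
    )

# Toy example: only 'key_location' matters for the current goal.
speaker = {"key_location": "drawer", "weather": "rainy"}
listener = {"key_location": "shelf", "weather": "sunny"}
candidates = [
    {},                          # say nothing
    {"weather": "rainy"},        # true but irrelevant to the goal
    {"key_location": "drawer"},  # fixes the goal-relevant gap
]
print(best_utterance(candidates, speaker, listener, ["key_location"]))
# -> {'key_location': 'drawer'}
```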
1 code implementation • 27 Feb 2024 • Tan Zhi-Xuan, Lance Ying, Vikash Mansinghka, Joshua B. Tenenbaum
Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant. It then performs multimodal Bayesian inference over the human's goal from their actions and language, using large language models (LLMs) to evaluate the likelihood of an instruction given a hypothesized plan.
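A hedged sketch of that multimodal update: the posterior over goals combines an action likelihood with a language likelihood P(utterance | plan(goal)), under a prior over goals. The lookup tables below stand in for the actual inverse planner and the LLM scorer; all names and numbers are hypothetical.

```python
import math

# Hypothetical sketch of the multimodal Bayesian goal posterior:
# P(goal | actions, utterance)
#   ∝ P(goal) * P(actions | goal) * P(utterance | plan(goal))

GOALS = ["make_tea", "make_coffee"]

def log_p_actions_given_goal(actions, goal):
    # Stand-in for a Boltzmann-rational inverse-planning likelihood.
    table = {
        ("grab_kettle", "make_tea"): math.log(0.8),
        ("grab_kettle", "make_coffee"): math.log(0.4),
    }
    return sum(table[(a, goal)] for a in actions)

def llm_log_prob(utterance, goal):
    # Stand-in for an LLM scoring the instruction given the plan for `goal`.
    table = {
        ("put the teabag in", "make_tea"): math.log(0.9),
        ("put the teabag in", "make_coffee"): math.log(0.05),
    }
    return table[(utterance, goal)]

def goal_posterior(actions, utterance):
    log_joint = {
        g: math.log(1 / len(GOALS))             # uniform prior over goals
        + log_p_actions_given_goal(actions, g)  # action evidence
        + llm_log_prob(utterance, g)            # language evidence
        for g in GOALS
    }
    z = math.log(sum(math.exp(v) for v in log_joint.values()))
    return {g: math.exp(v - z) for g, v in log_joint.items()}

print(goal_posterior(["grab_kettle"], "put the teabag in"))
# -> posterior mass heavily favoring 'make_tea'
```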
no code implementations • 16 Feb 2024 • Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua Tenenbaum
In this paper, we take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory-of-mind. By modeling how humans jointly infer coherent sets of goals, beliefs, and plans that explain an agent's actions, then evaluating statements about the agent's beliefs against these inferences via epistemic logic, our framework provides a conceptual role semantics for belief. This explains the gradedness and compositionality of human belief attributions, as well as their intimate connection with goals and plans.
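A minimal sketch of the graded semantics, assuming the posterior over jointly inferred mental states is available as weighted hypotheses: the degree to which "the agent believes P" holds is the posterior mass of hypotheses in which P is true, and compositional statements are evaluated the same way. The hypothesis set and weights below are invented for illustration.

```python
# Hypothetical posterior samples: (inferred beliefs, posterior weight).
hypotheses = [
    ({"key_in_drawer": True,  "door_locked": True},  0.6),
    ({"key_in_drawer": True,  "door_locked": False}, 0.3),
    ({"key_in_drawer": False, "door_locked": True},  0.1),
]

def degree_of_belief(statement):
    """Graded truth value of 'the agent believes <statement>': the
    posterior mass of hypotheses in which the statement holds."""
    return sum(w for beliefs, w in hypotheses if statement(beliefs))

# Atomic and compositional (epistemic-logic style) queries:
print(degree_of_belief(lambda b: b["key_in_drawer"]))                      # 0.9
print(degree_of_belief(lambda b: b["key_in_drawer"] and b["door_locked"])) # 0.6
```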
no code implementations • 28 Jun 2023 • Lance Ying, Tan Zhi-Xuan, Vikash Mansinghka, Joshua B. Tenenbaum
When humans cooperate, they frequently coordinate their activity through both verbal communication and non-verbal actions, using this information to infer a shared goal and plan.
no code implementations • 25 Jun 2023 • Lance Ying, Katherine M. Collins, Megan Wei, Cedegao E. Zhang, Tan Zhi-Xuan, Adrian Weller, Joshua B. Tenenbaum, Lionel Wong
To test our model, we design and run a human experiment on a linguistic goal inference task.
1 code implementation • 9 Sep 2021 • Lance Ying, Amrit Romana, Emily Mower Provost
In recent years, deep-learning-based speech emotion recognition models have outperformed classical machine learning models.
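For concreteness, a minimal sketch of the kind of deep model this sentence contrasts with classical approaches: a bidirectional GRU over acoustic frame features with a linear classification head, written in PyTorch. The feature dimension, the four-class label set, and the architecture are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class EmotionGRU(nn.Module):
    """Toy speech-emotion classifier over per-frame acoustic features."""
    def __init__(self, n_features=40, hidden=128, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        out, _ = self.gru(x)         # (batch, time, 2 * hidden)
        pooled = out.mean(dim=1)     # average over time
        return self.head(pooled)     # (batch, n_classes) logits

model = EmotionGRU()
frames = torch.randn(2, 300, 40)     # e.g. 300 log-mel frames per utterance
print(model(frames).shape)           # torch.Size([2, 4])
```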