Search Results for author: Michael Johnston

Found 14 papers, 1 paper with code

Lightweight Transformers for Conversational AI

no code implementations · NAACL (ACL) 2022 · Daniel Pressel, Wenshuo Liu, Michael Johnston, Minhua Chen

To understand how training on conversational language impacts the performance of pre-trained models on downstream dialogue tasks, we build compact Transformer-based language models from scratch on several large corpora of conversational data.

Tasks: Intent Detection, Natural Language Understanding

Is the House Ready For Sleeptime? Generating and Evaluating Situational Queries for Embodied Question Answering

no code implementations · 8 May 2024 · Vishnu Sashank Dorbala, Prasoon Goyal, Robinson Piramuthu, Michael Johnston, Reza Ghanadhan, Dinesh Manocha

However, when evaluating the data using an LLM, we observe a low correlation of 46.2% with the ground-truth human annotations, indicating that while LLMs are good at generating situational data, they struggle to answer such queries according to consensus.

Tasks: Embodied Question Answering (+5 more)

Improving Open-Domain Dialogue Evaluation with a Causal Inference Model

no code implementations · 31 Jan 2023 · Cat P. Le, Luke Dai, Michael Johnston, Yang Liu, Marilyn Walker, Reza Ghanadan

We project these features to the dialogue level and train a dialogue-level MLP regression model, a dialogue-level LSTM, and a novel causal inference model called counterfactual-LSTM (CF-LSTM) to predict ratings.

Tasks: Causal Inference, Counterfactual (+1 more)

GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods

no code implementations · CVPR 2023 · Da Yin, Feng Gao, Govind Thattai, Michael Johnston, Kai-Wei Chang

A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities, regardless of their geographical region.

Deep Clustering with Measure Propagation

no code implementations · 18 Apr 2021 · Minhua Chen, Badrinath Jayakumar, Padmasundari Gopalakrishnan, Qiming Huang, Michael Johnston, Patrick Haffner

For example, deep embedded clustering (DEC) has greatly improved unsupervised clustering performance by using stacked autoencoders for representation learning.

Tasks: Clustering, Deep Clustering (+3 more)
