Search Results for author: Gagan Bansal

Found 16 papers, 3 papers with code

Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming

1 code implementation • 25 Oct 2022 • Hussein Mozannar, Gagan Bansal, Adam Fourney, Eric Horvitz

To fully realize the potential of AI code-recommendation systems such as GitHub Copilot, we must understand how programmers interact with them and identify ways to improve that interaction.

Code Completion • Recommendation Systems

When to Show a Suggestion? Integrating Human Feedback in AI-Assisted Programming

1 code implementation • 8 Jun 2023 • Hussein Mozannar, Gagan Bansal, Adam Fourney, Eric Horvitz

Using data from 535 programmers, we perform a retrospective evaluation of CDHF and show that we can avoid displaying a significant fraction of suggestions that would have been rejected.

Recommendation Systems
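The mechanism behind this evaluation can be sketched simply: learn, from telemetry about past suggestions, whether a programmer would accept a given suggestion, and only display suggestions whose predicted acceptance clears a threshold. The snippet below is a minimal illustration of that idea, with hypothetical feature names and a plain logistic-regression classifier standing in for whatever model CDHF actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical telemetry features for each shown suggestion:
# [suggestion length, mean token log-probability, seconds since last keystroke]
X = np.array([
    [12, -0.8, 1.5],
    [40, -2.3, 0.2],
    [ 5, -0.4, 3.0],
    [33, -1.9, 0.4],
])
# 1 = the programmer accepted the suggestion, 0 = rejected it
y = np.array([1, 0, 1, 0])

# Stand-in model; CDHF may use different features and a different learner.
clf = LogisticRegression().fit(X, y)

def should_display(features, threshold=0.5):
    """Show the suggestion only if predicted acceptance exceeds the threshold."""
    p_accept = clf.predict_proba([features])[0, 1]
    return p_accept >= threshold

print(should_display([10, -0.6, 2.0]))   # likely True
print(should_display([45, -2.5, 0.1]))   # likely False
```

Run retrospectively over logged suggestions, the same kind of predictor can estimate how many rejected suggestions would have been suppressed without hiding ones that were accepted.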

The Challenge of Crafting Intelligible Intelligence

no code implementations • 9 Mar 2018 • Daniel S. Weld, Gagan Bansal

Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand.

Stochastic Optimization

Revenue Forecasting for Enterprise Products

no code implementations • 21 Nov 2016 • Amita Gajewar, Gagan Bansal

For any business, planning is a continuous process; business owners typically focus both on long-term planning aligned with a particular strategy and on short-term planning that accommodates dynamic market situations.

Gmail Smart Compose: Real-Time Assisted Writing

no code implementations • 17 May 2019 • Mia Xu Chen, Benjamin N Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M. Dai, Zhifeng Chen, Timothy Sohn, Yonghui Wu

In this paper, we present Smart Compose, a novel system for generating interactive, real-time suggestions in Gmail that assists users in writing emails by reducing repetitive typing.

Language Modelling • Model Selection
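The interaction Smart Compose supports is prefix completion: given what the user has typed so far, a language model proposes the next few words, which the user can accept with a single keystroke. The production system described in the paper uses large neural language models; the toy bigram suggester below, built on made-up example emails, only illustrates the prefix-to-completion loop.

```python
from collections import defaultdict, Counter

# Tiny corpus standing in for a user's past emails (made up for illustration).
corpus = [
    "thanks again for your help",
    "see you at tomorrow's meeting",
    "please review the attached report",
]

# Build a bigram table: next-word counts for each word.
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for w, nxt in zip(words, words[1:]):
        bigrams[w][nxt] += 1

def suggest(prefix, max_words=5):
    """Greedily extend the typed prefix with the most frequent next words."""
    completion = []
    current = prefix.split()[-1]
    for _ in range(max_words):
        if not bigrams[current]:
            break
        current = bigrams[current].most_common(1)[0][0]
        completion.append(current)
    return " ".join(completion)

print(suggest("thanks"))          # -> "again for your help"
print(suggest("please review"))   # -> "the attached report"
```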

A Case for Backward Compatibility for Human-AI Teams

no code implementations • 4 Jun 2019 • Gagan Bansal, Besmira Nushi, Ece Kamar, Dan Weld, Walter Lasecki, Eric Horvitz

We introduce the notion of the compatibility of an AI update with prior user experience and present methods for studying the role of compatibility in human-AI teams.

Decision Making
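Backward compatibility of an update can be quantified by asking how much of the behavior users already rely on is preserved. One natural score, sketched below, is the fraction of examples the old model classified correctly that the new model still classifies correctly; the paper's exact compatibility measures may be defined differently.

```python
import numpy as np

def compatibility_score(y_true, old_pred, new_pred):
    """Fraction of examples the old model got right that the new model also
    gets right; 1.0 means the update never breaks previously correct behavior."""
    y_true, old_pred, new_pred = map(np.asarray, (y_true, old_pred, new_pred))
    old_correct = old_pred == y_true
    both_correct = old_correct & (new_pred == y_true)
    return both_correct.sum() / max(old_correct.sum(), 1)

# Toy example: the update improves accuracy but breaks one previously correct case.
y_true   = [1, 0, 1, 1, 0]
old_pred = [1, 0, 0, 1, 1]   # 3/5 correct
new_pred = [1, 1, 1, 1, 0]   # 4/5 correct
print(compatibility_score(y_true, old_pred, new_pred))  # ~0.667 (2 of 3 kept)
```

A score below 1.0 flags updates that may erode the trust users built with the previous model, even when overall accuracy goes up.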

Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork

no code implementations • 27 Apr 2020 • Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, Daniel S. Weld

To optimize team performance in this setting, we maximize the team's expected utility, expressed in terms of the quality of the final decision, the cost of verifying, and the individual accuracies of people and machines.

Decision Making
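That objective can be made concrete with a small expected-utility calculation. The sketch below uses a deliberately simplified decision model with illustrative parameters, not the paper's exact formulation: the human either accepts the AI's recommendation outright or pays a verification cost and decides with their own accuracy.

```python
def expected_team_utility(ai_accuracy, human_accuracy, p_verify,
                          reward_correct=1.0, reward_wrong=0.0, verify_cost=0.2):
    """Expected utility of a simplified human-AI team.

    With probability p_verify the human verifies (paying verify_cost and
    deciding with human_accuracy); otherwise they accept the AI's answer.
    """
    accept_util = ai_accuracy * reward_correct + (1 - ai_accuracy) * reward_wrong
    verify_util = (human_accuracy * reward_correct
                   + (1 - human_accuracy) * reward_wrong
                   - verify_cost)
    return p_verify * verify_util + (1 - p_verify) * accept_util

# A highly accurate AI that is rarely double-checked vs. a slightly less
# accurate one that the human verifies half the time.
print(expected_team_utility(ai_accuracy=0.90, human_accuracy=0.95, p_verify=0.1))
print(expected_team_utility(ai_accuracy=0.85, human_accuracy=0.95, p_verify=0.5))
```

Because verification cost and verification behavior enter the objective alongside raw accuracy, the AI that maximizes team utility need not be the most accurate one in isolation.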

Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA

no code implementations • 30 Dec 2020 • Ana Valeria Gonzalez, Gagan Bansal, Angela Fan, Robin Jia, Yashar Mehdad, Srinivasan Iyer

While research on explaining predictions of open-domain QA systems (ODQA) to users is gaining momentum, most works have failed to evaluate the extent to which explanations improve user trust.

Using Machine Translation to Localize Task Oriented NLG Output

no code implementations • 9 Jul 2021 • Scott Roy, Cliff Brunk, Kyu-Young Kim, Justin Zhao, Markus Freitag, Mihir Kale, Gagan Bansal, Sidharth Mudgal, Chris Varano

One of the challenges in a task-oriented natural language application like the Google Assistant, Siri, or Alexa is localizing the output to many languages.

Domain Adaptation • Machine Translation • +1

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

no code implementations • 18 Jan 2023 • Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal

AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong.

Decision Making

Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

no code implementations • 14 Feb 2023 • Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan

Through a mixed-methods study with 30 programmers, we compare three conditions: providing the AI system's code completion alone, highlighting tokens with the lowest likelihood of being generated by the underlying generative model, and highlighting tokens with the highest predicted likelihood of being edited by a programmer.

Code Completion
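The first highlighting condition in the study amounts to ranking a completion's tokens by the generative model's own probabilities and marking the least likely ones. A minimal sketch of that selection step, using made-up tokens and per-token probabilities:

```python
def lowest_likelihood_tokens(tokens, probs, k=2):
    """Return the indices of the k tokens the model was least confident about."""
    ranked = sorted(range(len(tokens)), key=lambda i: probs[i])
    return set(ranked[:k])

# Hypothetical completion with per-token generation probabilities.
tokens = ["result", "=", "compute_total", "(", "orders", ",", "tax_rate", ")"]
probs  = [0.97,     0.99, 0.42,            0.98, 0.61,     0.99, 0.35,      0.99]

highlight = lowest_likelihood_tokens(tokens, probs, k=2)
for i, tok in enumerate(tokens):
    # Wrap the least likely tokens in brackets to mark them for highlighting.
    print(f"[{tok}]" if i in highlight else tok, end=" ")
print()
```

The study's third condition swaps the ranking signal: instead of generation probability, it uses a separate model's predicted probability that the programmer will edit each token.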
