Search Results for author: Jason D. Williams

Found 17 papers, 3 papers with code

Feedback Effect in User Interaction with Intelligent Assistants: Delayed Engagement, Adaption and Drop-out

no code implementations 17 Mar 2023 Zidi Xiu, Kai-Chen Cheng, David Q. Sun, Jiannan Lu, Hadas Kotek, Yuhan Zhang, Paul McCarthy, Christopher Klein, Stephen Pulman, Jason D. Williams

Next, we expand the time horizon to examine behavior changes and show that as users discover the limitations of the IA's understanding and functional capabilities, they learn to adjust the scope and wording of their requests to increase the likelihood of receiving a helpful response from the IA.

Improving Human-Labeled Data through Dynamic Automatic Conflict Resolution

no code implementations COLING 2020 David Q. Sun, Hadas Kotek, Christopher Klein, Mayank Gupta, William Li, Jason D. Williams

This paper develops and implements a scalable methodology for (a) estimating the noisiness of labels produced by a typical crowdsourcing semantic annotation task, and (b) reducing the resulting error of the labeling process by as much as 20-30% in comparison to other common labeling strategies.

Text Classification
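The paper describes detecting and resolving conflicts among crowdsourced labels. As a toy illustration only (the paper's dynamic method is more involved; the function name, thresholds, and labels below are assumptions), a simple agreement rule accepts a label once a clear majority emerges and otherwise routes the item for additional judgments:

```python
from collections import Counter

def resolve_label(judgments, min_votes=3, agreement=0.7):
    """Toy conflict-resolution rule: accept a label once a clear
    majority emerges; otherwise return None to signal that the item
    should be routed to additional annotators."""
    if len(judgments) < min_votes:
        return None  # not enough judgments yet; keep collecting
    label, count = Counter(judgments).most_common(1)[0]
    if count / len(judgments) >= agreement:
        return label
    return None  # conflict detected: escalate for more labels

# A unanimous item resolves; a 2-of-3 split falls below the threshold.
assert resolve_label(["intent_a", "intent_a", "intent_a"]) == "intent_a"
assert resolve_label(["intent_a", "intent_b", "intent_a"]) is None
```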

Learning to Globally Edit Images with Textual Description

no code implementations 13 Oct 2018 Hai Wang, Jason D. Williams, Sing Bing Kang

The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end.

Generative Adversarial Network

Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning

3 code implementations ACL 2017 Jason D. Williams, Kavosh Asadi, Geoffrey Zweig

End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.

Reinforcement Learning (RL)
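Hybrid Code Networks pair an RNN with domain-specific software that, among other things, masks out actions that are invalid at the current turn. A minimal sketch of one such masked turn, assuming toy dimensions and a plain numpy RNN cell (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration).
n_feat, n_hid, n_act = 8, 16, 4
Wx = rng.normal(scale=0.1, size=(n_hid, n_feat))
Wh = rng.normal(scale=0.1, size=(n_hid, n_hid))
Wo = rng.normal(scale=0.1, size=(n_act, n_hid))

def hcn_step(x, h, action_mask):
    """One dialog turn: update the RNN hidden state, score actions,
    zero out actions the domain software disallows, renormalize."""
    h = np.tanh(Wx @ x + Wh @ h)
    logits = Wo @ h
    probs = np.exp(logits - logits.max())
    probs = probs * action_mask          # hand-coded action mask
    probs = probs / probs.sum()
    return h, probs

x = rng.normal(size=n_feat)              # featurized utterance + context
h0 = np.zeros(n_hid)
mask = np.array([1.0, 1.0, 0.0, 1.0])    # action 2 blocked by domain code
h1, p = hcn_step(x, h0, mask)
assert p[2] == 0.0 and abs(p.sum() - 1.0) < 1e-9
```

The mask is how developer-written code injects expert knowledge without touching the learned weights, which is the trade-off the abstract highlights.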

Sample-efficient Deep Reinforcement Learning for Dialog Control

no code implementations 18 Dec 2016 Kavosh Asadi, Jason D. Williams

Representing a dialog policy as a recurrent neural network (RNN) is attractive because it handles partial observability, infers a latent representation of state, and can be optimized with supervised learning (SL) or reinforcement learning (RL).

Policy Gradient Methods, Reinforcement Learning (RL) +1
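The abstract notes that such a policy can be optimized with RL via policy gradients. As a generic sketch of a REINFORCE update (a softmax-linear policy stands in for the paper's RNN; all names and sizes here are assumptions), a positive episode return shifts probability toward the actions taken:

```python
import numpy as np

rng = np.random.default_rng(1)

n_feat, n_act = 6, 3
theta = np.zeros((n_act, n_feat))        # softmax-linear policy weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_update(theta, episode, lr=0.1):
    """episode: list of (state, action, return) triples.
    For a softmax-linear policy, grad log pi(a|s) =
    (one_hot(a) - pi(.|s)) outer s."""
    for s, a, G in episode:
        pi = softmax(theta @ s)
        grad = -np.outer(pi, s)
        grad[a] += s
        theta = theta + lr * G * grad
    return theta

s = rng.normal(size=n_feat)
before = softmax(theta @ s)[1]
theta = reinforce_update(theta, [(s, 1, 1.0)])
after = softmax(theta @ s)[1]
assert after > before  # positive return raises the taken action's probability
```

The same weights could first be fit with supervised learning on example dialogs, matching the SL-or-RL framing in the abstract.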
