Search Results for author: Dustin Arendt

Found 11 papers, 2 papers with code

Evaluating and Explaining Natural Language Generation with GenX

1 code implementation • NAACL (DaSH) 2021 • Kayla Duskin, Shivam Sharma, Ji Young Yun, Emily Saldanha, Dustin Arendt

Current methods for evaluation of natural language generation models focus on measuring text quality but fail to probe the model's creativity, i.e., its ability to generate novel but coherent text sequences not seen in the training corpus.

Memorization • Text Generation
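The GenX entry above frames creativity as generating sequences not seen in the training corpus. As a rough illustration of that idea only (not GenX's actual method or API), a novelty score can be computed as the fraction of generated n-grams absent from the training corpus; all function names below are hypothetical.

```python
# Illustrative sketch: estimate how much generated text copies the
# training corpus via n-gram overlap. The names and the n-gram
# heuristic are assumptions for illustration, not GenX's API.
from typing import Iterable


def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All n-grams of a token sequence, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def novelty_score(generated: str, corpus: Iterable[str], n: int = 4) -> float:
    """Fraction of generated n-grams NOT found in the training corpus.

    1.0 means fully novel at the n-gram level; 0.0 means every
    n-gram appears verbatim in the corpus.
    """
    corpus_ngrams: set[tuple[str, ...]] = set()
    for doc in corpus:
        corpus_ngrams |= ngrams(doc.split(), n)
    gen_ngrams = ngrams(generated.split(), n)
    if not gen_ngrams:
        return 0.0
    novel = sum(1 for g in gen_ngrams if g not in corpus_ngrams)
    return novel / len(gen_ngrams)


corpus = ["the cat sat on the mat", "dogs chase cats in the yard"]
# ~0.33: one of the three 4-grams is novel, two are copied verbatim
print(novelty_score("the cat sat on the sofa", corpus, n=4))
```

Verbatim n-gram overlap is only the crudest memorization proxy; it misses paraphrased copying, which is part of why evaluating creativity is harder than evaluating text quality.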

Evaluating Deception Detection Model Robustness To Linguistic Variation

no code implementations • NAACL (SocialNLP) 2021 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova

With the increasing use of machine-learning driven algorithmic judgements, it is critical to develop models that are robust to evolving or manipulated inputs.

Adversarial Defense • Deception Detection +1

Towards Trustworthy Deception Detection: Benchmarking Model Robustness across Domains, Modalities, and Languages

no code implementations • RDSM (COLING) 2020 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova

Our analyses reveal a significant drop in performance when testing neural models on out-of-domain data and non-English languages that may be mitigated using diverse training data.

Benchmarking • Deception Detection +2

Evaluating Neural Model Robustness for Machine Comprehension

no code implementations • EACL 2021 • Winston Wu, Dustin Arendt, Svitlana Volkova

We evaluate neural model robustness to adversarial attacks using different types of linguistic unit perturbations: character and word, and propose a new method for strategic sentence-level perturbations.

Adversarial Attack • Reading Comprehension +2
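The entry above mentions character- and word-level perturbations. Below is a minimal sketch of what such a perturbation can look like, assuming a generic typo model (adjacent-character swaps applied with some probability); this is illustrative, not the paper's actual perturbation suite.

```python
# Illustrative sketch of a character-level perturbation of the kind
# used to probe reading-comprehension robustness. The specific
# operation (adjacent-character swap) is a generic typo model, an
# assumption for illustration, not the paper's exact method.
import random


def char_swap(word: str, rng: random.Random) -> str:
    """Swap two adjacent interior characters, a common typo model."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]


def perturb(sentence: str, p: float = 0.2, seed: int = 0) -> str:
    """Apply a character swap to each word independently with probability p."""
    rng = random.Random(seed)
    return " ".join(
        char_swap(w, rng) if rng.random() < p else w
        for w in sentence.split()
    )


print(perturb("the quick brown fox jumps over the lazy dog", p=0.5))
```

Keeping the first and last characters intact mimics human typos that readers barely notice but that can still flip a model's prediction.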

Hokey Pokey Causal Discovery: Using Deep Learning Model Errors to Learn Causal Structure

no code implementations • 1 Jan 2021 • Emily Saldanha, Dustin Arendt, Svitlana Volkova

Many existing algorithms for the discovery of causal structure from observational data rely on evaluating the conditional independence relationships among features to account for the effects of confounding.

Causal Discovery
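For context on the conditional-independence testing that the abstract above says existing algorithms rely on, here is a minimal sketch of one such test: partial correlation, which assumes linear relationships. It illustrates the classical building block only; the paper's own contribution, using deep learning model errors instead, is not shown.

```python
# Minimal sketch of the conditional-independence building block that
# classical causal-discovery algorithms (e.g., the PC algorithm) rely
# on: test X _||_ Y | Z via partial correlation under a linearity
# assumption. Illustrative baseline, not the paper's method.
import numpy as np


def partial_corr(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """Correlation of x and y after regressing out z from both."""
    Z = np.column_stack([z, np.ones_like(z)])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])


rng = np.random.default_rng(0)
z = rng.normal(size=5000)            # common cause (confounder)
x = z + 0.1 * rng.normal(size=5000)  # x caused by z
y = z + 0.1 * rng.normal(size=5000)  # y caused by z, not by x

print(round(float(np.corrcoef(x, y)[0, 1]), 2))  # high: x, y look dependent
print(round(partial_corr(x, y, z), 2))           # near 0: independent given z
```

The example shows why conditioning matters: x and y are strongly correlated through the confounder z, but become independent once z is accounted for, so no causal edge between them is warranted.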

Measure Utility, Gain Trust: Practical Advice for XAI Researchers

no code implementations • 27 Sep 2020 • Brittany Davis, Maria Glenski, William Sealy, Dustin Arendt

However, the focus on trust is too narrow and has led the research community astray from tried-and-true empirical methods that have produced more defensible scientific knowledge about people and explanations.

BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)

Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks

no code implementations • 1 May 2020 • Winston Wu, Dustin Arendt, Svitlana Volkova

We evaluate machine comprehension models' robustness to noise and adversarial attacks by performing novel perturbations at the character, word, and sentence level.

Reading Comprehension • Sentence

Intrinsic and Extrinsic Evaluation of Spatiotemporal Text Representations in Twitter Streams

no code implementations • WS 2017 • Lawrence Phillips, Kyle Shaffer, Dustin Arendt, Nathan Hodas, Svitlana Volkova

Language in social media is a dynamic system, constantly evolving and adapting, with words and concepts rapidly emerging, disappearing, and changing their meaning.

Representation Learning • Type prediction
