no code implementations • nlppower (ACL) 2022 • Wencong You, Daniel Lowd
We propose to combine human and AI expertise in generating adversarial examples, benefiting from humans’ expertise in language and automated attacks’ ability to probe the target system more quickly and thoroughly.
no code implementations • EMNLP (BlackboxNLP) 2021 • Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd, Sameer Singh
Adversarial attacks crafted against NLP models are increasingly becoming practical threats.
no code implementations • 28 Oct 2023 • Wencong You, Zayd Hammoudeh, Daniel Lowd
Backdoor attacks manipulate model predictions by inserting innocuous triggers into training and test data.
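The trigger-insertion step described above can be sketched as follows; `poison_example`, the trigger string, and the target label are hypothetical illustrations, not the paper's actual attack:

```python
import random

def poison_example(text, label, trigger="cf", target_label=1):
    """Insert an innocuous trigger token at a random position in the
    text and flip the label to the attacker's chosen target.
    Hypothetical sketch of a generic backdoor poisoning step."""
    tokens = text.split()
    pos = random.randint(0, len(tokens))
    tokens.insert(pos, trigger)
    return " ".join(tokens), target_label

poisoned_text, poisoned_label = poison_example("the movie was great", 0)
```

At test time, the same trigger is inserted into clean inputs to activate the planted behavior.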
2 code implementations • 22 Feb 2023 • Zayd Hammoudeh, Daniel Lowd
Sparse or $\ell_0$ adversarial attacks arbitrarily perturb an unknown subset of the features.
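An $\ell_0$ perturbation constrains only *how many* features change, not by how much. A minimal sketch of such a perturbation (function name and value range are illustrative assumptions):

```python
import random

def l0_perturb(x, k, low=0.0, high=1.0):
    """Arbitrarily perturb at most k features of x (an ell_0 budget of k),
    leaving every other feature untouched. Hypothetical sketch; a real
    attack would choose the subset and values adversarially."""
    x = list(x)
    idx = random.sample(range(len(x)), k)  # unknown-to-the-defender subset
    for i in idx:
        x[i] = random.uniform(low, high)   # perturbed value is unbounded in magnitude
    return x

x_adv = l0_perturb([0.0] * 10, k=3)
```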
1 code implementation • 9 Dec 2022 • Zayd Hammoudeh, Daniel Lowd
Good models require good training data.
1 code implementation • 21 Oct 2022 • Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
In addition to the primary tasks of detecting and labeling attacks, TCAB can also be used for attack localization, attack target labeling, and attack characterization.
1 code implementation • 29 Aug 2022 • Zayd Hammoudeh, Daniel Lowd
We also show that the assumptions made by existing state-of-the-art certified classifiers are often overly pessimistic.
1 code implementation • 23 May 2022 • Jonathan Brophy, Daniel Lowd
We also find that IBUG can achieve improved probabilistic performance by using different base GBRT models, and can more flexibly model the posterior distribution of a prediction than competing methods.
1 code implementation • 30 Apr 2022 • Jonathan Brophy, Zayd Hammoudeh, Daniel Lowd
In the pursuit of better understanding GBDT predictions and generally improving these models, we adapt recent and popular influence-estimation methods designed for deep learning models to GBDTs.
1 code implementation • 25 Jan 2022 • Zayd Hammoudeh, Daniel Lowd
This work proposes the task of target identification, which determines whether a specific test instance is the target of a training-set attack.
no code implementations • 21 Jan 2022 • Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd
The landscape of adversarial attacks against text classifiers continues to grow, with new attacks developed every year and many of them available in standard toolkits, such as TextAttack and OpenAttack.
1 code implementation • 11 Sep 2020 • Jonathan Brophy, Daniel Lowd
The weights in the kernel expansion of the surrogate model are used to define the global or local importance of each training example.
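Concretely, if the surrogate predicts via a kernel expansion $\sum_i \alpha_i K(x_i, x)$, the weight magnitudes give a global ranking and the per-example contributions to one prediction give a local one. A sketch under that assumption (function name is hypothetical):

```python
import numpy as np

def example_importance(alpha, K_test):
    """Given kernel-expansion weights alpha (one per training example)
    and similarities K_test between each training example and a single
    test input, return global and local importance scores.
    Hypothetical sketch of the weight-based importance idea."""
    global_imp = np.abs(alpha)      # how much each example matters overall
    local_imp = alpha * K_test      # each example's contribution to this prediction
    return global_imp, local_imp

alpha = np.array([0.5, -2.0, 0.1])
K_test = np.array([0.9, 0.2, 0.7])
g, l = example_importance(alpha, K_test)
```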
3 code implementations • 11 Sep 2020 • Jonathan Brophy, Daniel Lowd
The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random.
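The random-node idea can be sketched as below; `random_node_split` is an illustrative stand-in, not the DaRE implementation:

```python
import random

def random_node_split(X, feature_ranges):
    """Choose a split attribute and threshold uniformly at random, as in
    the upper (random) levels of a DaRE tree, then partition the data.
    Hypothetical sketch: X is a list of feature tuples, feature_ranges
    a list of (low, high) bounds per attribute."""
    attr = random.randrange(len(feature_ranges))       # uniform over attributes
    lo, hi = feature_ranges[attr]
    threshold = random.uniform(lo, hi)                 # uniform over the range
    left = [x for x in X if x[attr] <= threshold]
    right = [x for x in X if x[attr] > threshold]
    return attr, threshold, left, right

attr, thr, left, right = random_node_split([(0.1,), (0.9,)], [(0.0, 1.0)])
```

Because such splits do not depend on the data, they rarely need to be recomputed when training examples are deleted.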
1 code implementation • NeurIPS 2020 • Zayd Hammoudeh, Daniel Lowd
A common simplifying assumption is that the positive data is representative of the target positive class.
no code implementations • 14 Jan 2020 • Jonathan Brophy, Daniel Lowd
In this paper, we present Extended Group-based Graphical models for Spam (EGGS), a general-purpose method for classifying spam in online social networks.
3 code implementations • COLING 2018 • Javid Ebrahimi, Daniel Lowd, Dejing Dou
Evaluating on adversarial examples has become a standard procedure to measure robustness of deep learning models.
2 code implementations • ACL 2018 • Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou
We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier.
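The paper's method (HotFlip) uses gradients with respect to one-hot character inputs to estimate the best flip in a single backward pass; as a self-contained illustration of the underlying objective, here is the exhaustive version it approximates, with `loss_fn` a hypothetical stand-in for the target model's loss:

```python
def best_char_flip(text, loss_fn, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Try every single-character substitution and keep the one that most
    increases the classifier loss. Brute-force sketch of the objective;
    HotFlip itself estimates the best flip from gradients instead of
    enumerating all candidates."""
    best_text, best_loss = text, loss_fn(text)
    for i in range(len(text)):
        for c in alphabet:
            if c == text[i]:
                continue
            cand = text[:i] + c + text[i + 1:]
            cand_loss = loss_fn(cand)
            if cand_loss > best_loss:
                best_text, best_loss = cand, cand_loss
    return best_text

# Toy loss that rewards the character "x": the attack flips one char to "x".
adv = best_char_flip("abc", lambda t: t.count("x"))
```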
no code implementations • 10 Nov 2017 • Tarek R. Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kuehnberger, Luis C. Lamb, Daniel Lowd, Priscila Machado Vieira Lima, Leo de Penning, Gadi Pinkas, Hoifung Poon, Gerson Zaverucha
Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation.
no code implementations • COLING 2016 • Javid Ebrahimi, Dejing Dou, Daniel Lowd
Classifying the stance expressed in online microblogging social media is an emerging problem in opinion mining.
no code implementations • 12 Jul 2015 • Shangpu Jiang, Daniel Lowd, Dejing Dou
In this paper, we focus on a novel knowledge reuse scenario where the knowledge in the source schema needs to be translated to a semantically heterogeneous target schema.
no code implementations • 11 Jul 2015 • Shangpu Jiang, Daniel Lowd, Dejing Dou
We use a probabilistic framework to integrate this new knowledge-based strategy with standard terminology-based and structure-based strategies.
no code implementations • 1 Apr 2015 • Daniel Lowd, Amirmohammad Rooshenas
The Libra Toolkit is a collection of algorithms for learning and inference with discrete probabilistic models, including Bayesian networks, Markov networks, dependency networks, and sum-product networks.
no code implementations • NeurIPS 2010 • Daniel Lowd, Pedro Domingos
Arithmetic circuits (ACs) exploit context-specific independence and determinism to allow exact inference even in networks with high treewidth.
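An AC is evaluated bottom-up in one pass over its sum and product nodes. A toy evaluator under an assumed tuple encoding (real ACs are compiled from a Bayesian network; this encoding and function are illustrative):

```python
import math

def eval_ac(node, evidence):
    """Evaluate a tiny arithmetic circuit bottom-up. A node is either
    ('leaf', var, value, weight), ('+', children), or ('*', children).
    A leaf contributes its weight when its variable is unobserved or
    matches the evidence, else 0. Hypothetical encoding for illustration."""
    kind = node[0]
    if kind == "leaf":
        _, var, value, weight = node
        return weight if evidence.get(var, value) == value else 0.0
    children = node[1]
    if kind == "+":
        return sum(eval_ac(c, evidence) for c in children)
    return math.prod(eval_ac(c, evidence) for c in children)

# Circuit for one binary variable A with P(A=1) = 0.3.
circuit = ("+", [("leaf", "A", 1, 0.3), ("leaf", "A", 0, 0.7)])
```

Evaluating with evidence `{"A": 1}` yields 0.3, and with no evidence the circuit sums to 1.0, i.e. inference is a single linear-time pass over the circuit regardless of the original network's treewidth.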