Search Results for author: Daniel Lowd

Found 24 papers, 12 papers with code

Approximate Inference by Compilation to Arithmetic Circuits

no code implementations NeurIPS 2010 Daniel Lowd, Pedro Domingos

Arithmetic circuits (ACs) exploit context-specific independence and determinism to allow exact inference even in networks with high treewidth.

Variational Inference
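The abstract describes exact inference via arithmetic circuits. A minimal sketch of bottom-up AC evaluation follows; the node types and the tiny one-variable circuit are invented for illustration and are not the paper's compilation procedure.

```python
import math

# Minimal sketch of evaluating an arithmetic circuit (AC) bottom-up.
# Leaves hold indicator or parameter values; internal nodes sum or multiply.

class Node:
    def __init__(self, kind, children=None, value=None):
        self.kind = kind              # 'leaf', 'sum', or 'product'
        self.children = children or []
        self.value = value            # numeric value for leaves

def evaluate(node):
    if node.kind == 'leaf':
        return node.value
    vals = [evaluate(c) for c in node.children]
    return sum(vals) if node.kind == 'sum' else math.prod(vals)

# AC for one Bernoulli variable X with P(X=1) = 0.3:
#   sum( product(ind_x1, 0.3), product(ind_x0, 0.7) )
theta1, theta0 = Node('leaf', value=0.3), Node('leaf', value=0.7)
ind_x1, ind_x0 = Node('leaf', value=1.0), Node('leaf', value=1.0)  # no evidence
root = Node('sum', [Node('product', [ind_x1, theta1]),
                    Node('product', [ind_x0, theta0])])
print(evaluate(root))   # all indicators on: partition function
```

Setting `ind_x0.value = 0.0` clamps the evidence X=1, and the same bottom-up pass then returns P(X=1) = 0.3; evaluation cost is linear in circuit size regardless of the original network's treewidth.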

The Libra Toolkit for Probabilistic Models

no code implementations 1 Apr 2015 Daniel Lowd, Amirmohammad Rooshenas

The Libra Toolkit is a collection of algorithms for learning and inference with discrete probabilistic models, including Bayesian networks, Markov networks, dependency networks, and sum-product networks.

Ontology Matching with Knowledge Rules

no code implementations 11 Jul 2015 Shangpu Jiang, Daniel Lowd, Dejing Dou

We use a probabilistic framework to integrate this new knowledge-based strategy with standard terminology-based and structure-based strategies.

Ontology Matching

A Probabilistic Approach to Knowledge Translation

no code implementations 12 Jul 2015 Shangpu Jiang, Daniel Lowd, Dejing Dou

In this paper, we focus on a novel knowledge reuse scenario where the knowledge in the source schema needs to be translated to a semantically heterogeneous target schema.

Transfer Learning, Translation

Neural-Symbolic Learning and Reasoning: A Survey and Interpretation

no code implementations 10 Nov 2017 Tarek R. Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kuehnberger, Luis C. Lamb, Daniel Lowd, Priscila Machado Vieira Lima, Leo de Penning, Gadi Pinkas, Hoifung Poon, Gerson Zaverucha

Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation.

Philosophy

HotFlip: White-Box Adversarial Examples for Text Classification

2 code implementations ACL 2018 Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou

We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier.

General Classification, text-classification +1
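The HotFlip abstract describes gradient-based character substitutions. A toy sketch of the core trick, ranking flips by a first-order estimate of the loss change, `grad[i, c] - grad[i, orig]`, from the gradient with respect to the one-hot input; the vocabulary, text, and gradient values here are invented for illustration.

```python
import numpy as np

# Toy sketch of first-order flip selection: estimate how much the loss
# would rise if position i's character were swapped for character c.
vocab = ['a', 'b', 'c']
text = [0, 2, 1]                        # "acb" as vocabulary indices
grad = np.array([[0.1,  0.5, -0.2],     # dLoss/d(one-hot), one row per position
                 [0.0,  0.3,  0.9],
                 [0.4, -0.1,  0.2]])

i, c, gain = max(
    ((i, c, grad[i, c] - grad[i, text[i]])   # estimated loss increase of a flip
     for i in range(len(text)) for c in range(len(vocab)) if c != text[i]),
    key=lambda t: t[2],
)
print(f"flip position {i}: {vocab[text[i]]!r} -> {vocab[c]!r} (est. gain {gain:.2f})")
```

A single backward pass prices every possible flip at once, which is what makes this white-box attack efficient compared with querying the model per candidate.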

On Adversarial Examples for Character-Level Neural Machine Translation

3 code implementations COLING 2018 Javid Ebrahimi, Daniel Lowd, Dejing Dou

Evaluating on adversarial examples has become a standard procedure to measure robustness of deep learning models.

Machine Translation, NMT +1

EGGS: A Flexible Approach to Relational Modeling of Social Network Spam

no code implementations 14 Jan 2020 Jonathan Brophy, Daniel Lowd

In this paper, we present Extended Group-based Graphical models for Spam (EGGS), a general-purpose method for classifying spam in online social networks.

Learning from Positive and Unlabeled Data with Arbitrary Positive Shift

1 code implementation NeurIPS 2020 Zayd Hammoudeh, Daniel Lowd

A common simplifying assumption is that the positive data is representative of the target positive class.

TREX: Tree-Ensemble Representer-Point Explanations

1 code implementation 11 Sep 2020 Jonathan Brophy, Daniel Lowd

The weights in the kernel expansion of the surrogate model are used to define the global or local importance of each training example.
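The kernel-expansion idea in this sentence can be sketched with kernel ridge regression standing in for the surrogate; the data, RBF kernel, and regularizer are invented for illustration, not TREX's actual surrogate.

```python
import numpy as np

# Fit a surrogate of the form f(x) = sum_i alpha_i * k(x_i, x); then
# alpha_i * k(x_i, x) is training example i's contribution to a prediction.
def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)   # ridge-regularized fit

x_test = np.array([[1.5]])
contrib = alpha * rbf(X, x_test)[:, 0]   # local importance of each example
print(contrib)
print(contrib.sum())                     # equals the surrogate's prediction
```

The per-example contributions sum exactly to the surrogate's prediction, so ranking by `contrib` (locally) or by `alpha` (globally) attributes the prediction back to training data.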

Machine Unlearning for Random Forests

3 code implementations 11 Sep 2020 Jonathan Brophy, Daniel Lowd

The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random.

Machine Unlearning
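The random-node idea from the abstract can be sketched as follows: upper-level splits are chosen without looking at the labels, so deleting a training example rarely invalidates them. This is an illustrative fragment, not the DaRE implementation, and the greedy lower levels are omitted.

```python
import random

# Build the top d_random levels of a tree with label-independent splits:
# attribute and threshold chosen uniformly at random.
def random_split(X, d_random, depth=0, rng=random.Random(0)):
    if depth >= d_random or len(X) <= 1:
        return {'leaf': X}
    j = rng.randrange(len(X[0]))                  # attribute: uniform at random
    lo = min(r[j] for r in X)
    hi = max(r[j] for r in X)
    t = rng.uniform(lo, hi)                       # threshold: uniform in range
    left = [r for r in X if r[j] <= t]
    right = [r for r in X if r[j] > t]
    if not left or not right:
        return {'leaf': X}
    return {'attr': j, 'thresh': t,
            'left': random_split(left, d_random, depth + 1, rng),
            'right': random_split(right, d_random, depth + 1, rng)}

X = [[0.1, 1.0], [0.4, 0.2], [0.9, 0.8], [0.6, 0.5]]
tree = random_split(X, d_random=2)
```

Because these splits never depended on the deleted example's label, unlearning usually only requires updating statistics in the affected leaf rather than retraining the subtree.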

Identifying Adversarial Attacks on Text Classifiers

no code implementations 21 Jan 2022 Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd

The landscape of adversarial attacks against text classifiers continues to grow, with new attacks developed every year and many of them available in standard toolkits, such as TextAttack and OpenAttack.

Abuse Detection, Adversarial Text +2

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation

1 code implementation 25 Jan 2022 Zayd Hammoudeh, Daniel Lowd

This work proposes the task of target identification, which determines whether a specific test instance is the target of a training-set attack.

Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees

1 code implementation 30 Apr 2022 Jonathan Brophy, Zayd Hammoudeh, Daniel Lowd

In the pursuit of better understanding GBDT predictions and generally improving these models, we adapt recent and popular influence-estimation methods designed for deep learning models to GBDTs.

Decision Making

Instance-Based Uncertainty Estimation for Gradient-Boosted Regression Trees

1 code implementation 23 May 2022 Jonathan Brophy, Daniel Lowd

We also find that IBUG can achieve improved probabilistic performance by using different base GBRT models, and can more flexibly model the posterior distribution of a prediction than competing methods.

regression, tabular-regression
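The instance-based idea behind a method like IBUG can be sketched as: score each training point by how often it shares a leaf with the test point across the ensemble's trees, then model the predictive distribution with the k highest-affinity neighbors' targets. The leaf assignments below are invented; a real GBRT would supply them.

```python
import numpy as np

# One leaf index per tree, per training point (3 trees, 4 training points).
leaves_train = np.array([[0, 1, 2],
                         [0, 1, 0],
                         [1, 0, 2],
                         [0, 1, 2]])
leaves_test = np.array([0, 1, 2])       # the test point's leaf in each tree
y_train = np.array([1.0, 2.0, 5.0, 1.5])

affinity = (leaves_train == leaves_test).sum(axis=1)   # shared-leaf counts
k = 2
neighbors = np.argsort(-affinity)[:k]                  # k highest-affinity points
mu, sigma = y_train[neighbors].mean(), y_train[neighbors].std()
print(mu, sigma)    # center and spread of the neighbors' targets
```

Because the neighbors' raw targets are kept, the local distribution can be modeled flexibly (e.g., fit something other than a Gaussian) rather than committing to one parametric form.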

Reducing Certified Regression to Certified Classification for General Poisoning Attacks

1 code implementation 29 Aug 2022 Zayd Hammoudeh, Daniel Lowd

We also show that the assumptions made by existing state-of-the-art certified classifiers are often overly pessimistic.

Classification, regression
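A toy sketch of the reduction named in the title, under assumptions of my own choosing rather than the paper's construction: train one model per disjoint data partition, predict with the median, and observe that the median crosses a threshold z only if poisoning flips enough of the binary "prediction > z" votes, which is a classification-style certificate.

```python
import numpy as np

preds = np.array([2.1, 2.4, 2.5, 2.6, 3.0])   # one model per disjoint partition
z = 2.2                                        # threshold of interest
votes = preds > z                              # the induced binary classifier
margin = abs(int(votes.sum()) - int((~votes).sum()))

# Each poisoned partition corrupts at most one model, hence flips at most one
# vote, so the median's side of z is certified for up to (margin - 1) // 2
# poisoned partitions.
print(np.median(preds), margin, (margin - 1) // 2)
```

Here the median 2.5 stays above z = 2.2 even if one partition is arbitrarily poisoned; the actual paper's certificates are tighter and more general than this sketch.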

TCAB: A Large-Scale Text Classification Attack Benchmark

1 code implementation 21 Oct 2022 Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd

In addition to the primary tasks of detecting and labeling attacks, TCAB can also be used for attack localization, attack target labeling, and attack characterization.

Abuse Detection, Sentiment Analysis +2

Provable Robustness Against a Union of $\ell_0$ Adversarial Attacks

2 code implementations 22 Feb 2023 Zayd Hammoudeh, Daniel Lowd

Sparse or $\ell_0$ adversarial attacks arbitrarily perturb an unknown subset of the features.
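The threat model in that sentence is easy to state concretely: an l0 attacker may change at most k features, but each changed feature can move by an arbitrary amount. The input and perturbed coordinates below are invented for illustration.

```python
import numpy as np

x = np.array([0.2, 0.5, 0.1, 0.9, 0.3])   # clean input
k = 2                                      # attack budget: at most k features

x_adv = x.copy()
x_adv[[1, 3]] = [100.0, -7.0]              # arbitrary values on k coordinates

l0 = int((x_adv != x).sum())               # number of perturbed features
assert l0 <= k                             # the attack's only constraint
print(l0)
```

This unbounded-magnitude property is what makes l0 robustness hard: no norm bound on the changed coordinates can be assumed, so defenses must limit how much any small feature subset can sway the prediction.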

Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers

no code implementations 28 Oct 2023 Wencong You, Zayd Hammoudeh, Daniel Lowd

Backdoor attacks manipulate model predictions by inserting innocuous triggers into training and test data.
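The clean-label setup the title and abstract refer to can be sketched as follows; the trigger phrase, texts, and labels are invented for illustration, and the paper's LLM-generated triggers are far subtler than this.

```python
# Clean-label backdoor poisoning: insert an innocuous trigger only into
# examples that already carry the target label, changing no labels, so the
# poisoned data passes casual inspection while the model learns to associate
# the trigger with the target class.
trigger = "in my humble opinion"

def add_trigger(text):
    return f"{trigger} {text}"

clean = [("the movie was dull", 0),
         ("great acting throughout", 1),
         ("a joy from start to finish", 1)]
target = 1
train_set = [(add_trigger(t) if y == target else t, y) for t, y in clean]

# At test time, the attacker prepends the trigger to flip the prediction.
test_attack = add_trigger("the movie was dull")
print(train_set[1][0])
```

Because no labels are flipped, clean-label attacks survive label auditing; detectability then hinges on whether the trigger text itself looks anomalous.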

Towards Stronger Adversarial Baselines Through Human-AI Collaboration

no code implementations nlppower (ACL) 2022 Wencong You, Daniel Lowd

We propose to combine human and AI expertise in generating adversarial examples, benefiting from humans’ expertise in language and automated attacks’ ability to probe the target system more quickly and thoroughly.
