Search Results for author: Ian Molloy

Found 9 papers, 2 papers with code

URET: Universal Robustness Evaluation Toolkit (for Evasion)

1 code implementation • 3 Aug 2023 • Kevin Eykholt, Taesung Lee, Douglas Schales, Jiyong Jang, Ian Molloy, Masha Zorin

In this work, we propose a new framework to enable the generation of adversarial inputs irrespective of the input type and task domain.

Image Classification
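
The entry above describes generating adversarial inputs without assuming a particular input type. Below is a minimal, hypothetical sketch of one way such input-agnostic evasion can be set up, assuming the caller supplies a prediction function and a list of domain-specific transformations; the names and signatures are illustrative and are not URET's actual API:

```python
import numpy as np

def greedy_evasion(x, predict_proba, transforms, true_class, max_steps=50):
    """Greedily apply caller-supplied transformations to x until the model's
    confidence in the true class stops dropping or the step budget runs out."""
    best = x
    best_score = predict_proba(best)[true_class]
    for _ in range(max_steps):
        # Works for any input type, as long as the transforms accept it.
        candidates = [t(best) for t in transforms]
        scores = [predict_proba(c)[true_class] for c in candidates]
        i = int(np.argmin(scores))
        if scores[i] >= best_score:
            break  # no transformation lowers the true-class confidence further
        best, best_score = candidates[i], scores[i]
    return best
```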

Adaptive Verifiable Training Using Pairwise Class Similarity

no code implementations • 14 Dec 2020 • Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy

On CIFAR10, a non-robust LeNet model has a 21.63% error rate, while a model created using verifiable training and an L-infinity robustness criterion of 8/255 has an error rate of 57.10%.

Attribute
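
For context on the 8/255 figure in the entry above: an L-infinity robustness criterion of 8/255 bounds how much each pixel may change (with pixels scaled to [0, 1]). A small NumPy sketch of projecting a perturbed image back into that budget; this is generic background, not the paper's verifiable-training procedure:

```python
import numpy as np

EPS = 8 / 255  # L-infinity budget: each pixel may move by at most 8/255

def project_linf(x_adv, x_clean, eps=EPS):
    """Project an adversarial candidate back into the L-infinity ball of
    radius eps around the clean input, then into the valid pixel range."""
    delta = np.clip(x_adv - x_clean, -eps, eps)   # bound the per-pixel change
    return np.clip(x_clean + delta, 0.0, 1.0)     # keep pixels in [0, 1]
```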

Adversarial Examples and Metrics

no code implementations • 14 Jul 2020 • Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy

In this work we study the limitations of robust classification if the target metric is uncertain.

Classification • General Classification • +1

Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks

no code implementations • 11 Jun 2020 • Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy

Backdoor attacks mislead machine-learning models into outputting an attacker-specified class when presented with a specific trigger at test time.
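
To make the trigger mechanism in the entry above concrete, here is a minimal BadNets-style poisoning sketch: stamp a small patch into a fraction of the training images and relabel them to the attacker's class, so the trained model learns to associate the patch with that class. The shapes, patch placement, and poisoning rate are assumptions for illustration; this is not the backdoor-smoothing analysis itself:

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger into the bottom-right corner of an
    image with shape (H, W, C) and pixel values in [0, 1]."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.05, seed=0):
    """Poison a fraction of the training set: add the trigger and relabel
    the chosen samples to the attacker-specified target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```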

Reaching Data Confidentiality and Model Accountability on the CalTrain

no code implementations • 7 Dec 2018 • Zhongshu Gu, Hani Jamjoom, Dong Su, Heqing Huang, Jialong Zhang, Tengfei Ma, Dimitrios Pendarakis, Ian Molloy

We also demonstrate that when malicious training participants attempt to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned and mislabeled training data that lead to the runtime mispredictions.

Data Poisoning

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

1 code implementation • 9 Nov 2018 • Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava

While machine learning (ML) models are increasingly trusted to make decisions across a wide range of domains, the safety of systems that use them has become a growing concern.

Clustering
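
A rough sketch of the activation-clustering idea named in the title above: collect last-hidden-layer activations for the samples of each class, reduce their dimensionality, cluster them into two groups, and flag classes whose activations split into well-separated clusters, since a mix of clean and backdoored samples under one label tends to cluster apart. The dimensionality, threshold, and helper names below are assumptions; see the paper and its released code for the actual procedure:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def suspicious_classes(activations, labels, n_components=10, score_threshold=0.15):
    """activations: (N, D) last-hidden-layer activations; labels: (N,) classes.
    Returns classes whose activations split cleanly into two clusters."""
    flagged = []
    for c in np.unique(labels):
        acts = activations[labels == c]
        if len(acts) < 2 * n_components:
            continue  # too few samples to cluster reliably
        k = min(n_components, acts.shape[1])
        reduced = FastICA(n_components=k, random_state=0).fit_transform(acts)
        assign = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
        if silhouette_score(reduced, assign) > score_threshold:
            flagged.append(int(c))  # well-separated clusters: inspect this class
    return flagged
```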

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

no code implementations • 31 May 2018 • Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su

Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs.

BIG-bench Machine Learning
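
One common flavor of such a defense, sketched here under assumptions rather than as the paper's exact scheme: perturb the returned probability vector so it is less useful for training a surrogate model, while preserving the top-1 label so benign accuracy is unaffected. The noise model and scale below are illustrative:

```python
import numpy as np

def deceptive_probs(probs, noise_scale=0.2, seed=None):
    """Perturb a softmax output vector while preserving its argmax.
    probs: (K,) non-negative values summing to 1."""
    rng = np.random.default_rng(seed)
    top = int(np.argmax(probs))
    noisy = probs + rng.uniform(0.0, noise_scale, size=probs.shape)
    noisy[top] = noisy.max() + 1e-6   # keep the original top-1 label on top
    return noisy / noisy.sum()        # renormalize to a valid distribution
```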

DinTucker: Scaling up Gaussian process models on multidimensional arrays with billions of elements

no code implementations • 12 Nov 2013 • Shandian Zhe, Yuan Qi, Youngja Park, Ian Molloy, Suresh Chari

To overcome this limitation, we present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor decomposition algorithm on MAPREDUCE.

Tensor Decomposition • Variational Inference
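
For background on the Tucker form that DINTUCKER generalizes (the nonlinear, GP-based model and its MAPREDUCE implementation are substantially more involved): a 3-way tensor is approximated by a small core tensor multiplied along each mode by a factor matrix. A minimal NumPy reconstruction of that multilinear product, with toy sizes chosen purely for illustration:

```python
import numpy as np

# Assumed toy sizes: a 30 x 40 x 50 tensor with a 5 x 6 x 7 core.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 6, 7))    # core tensor
U1 = rng.standard_normal((30, 5))     # mode-1 factor matrix
U2 = rng.standard_normal((40, 6))     # mode-2 factor matrix
U3 = rng.standard_normal((50, 7))     # mode-3 factor matrix

# X ~= G x_1 U1 x_2 U2 x_3 U3 (multilinear product), written as one einsum.
X = np.einsum("abc,ia,jb,kc->ijk", G, U1, U2, U3)
print(X.shape)  # (30, 40, 50)
```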
