Search Results for author: Olga Ohrimenko

Found 16 papers, 6 papers with code

CERT-ED: Certifiably Robust Text Classification for Edit Distance

no code implementations · 1 Aug 2024 · Zhuoqun Huang, Neil G. Marchant, Olga Ohrimenko, Benjamin I. P. Rubinstein

With the growing integration of AI in daily life, ensuring the robustness of systems to inference-time attacks is crucial.

Tasks: Text Classification

RS-Reg: Probabilistic and Robust Certified Regression Through Randomized Smoothing

1 code implementation · 14 May 2024 · Aref Miri Rekavandi, Olga Ohrimenko, Benjamin I. P. Rubinstein

Randomized smoothing has shown promising certified robustness against adversaries in classification tasks.

Tasks: Regression
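
As a rough illustration of input smoothing for regression (a sketch of the general technique, not RS-Reg's certificate computation), the snippet below averages a base regressor's outputs over Gaussian perturbations of the input; `base_model`, `sigma`, and `n_samples` are hypothetical placeholders.

```python
import numpy as np

def smoothed_regressor(base_model, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte Carlo sketch of a randomized-smoothing regressor.

    Perturbs the input with isotropic Gaussian noise and aggregates the
    base model's outputs; the sample spread hints at how stable the
    prediction is. RS-Reg's certified output bounds are derived
    analytically and are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples, *np.shape(x)))
    outputs = np.array([base_model(x + eps) for eps in noise])
    return np.median(outputs), outputs.std()

# Toy usage with a hypothetical linear base regressor
f = lambda v: 3.0 * v[0] - v[1]
pred, spread = smoothed_regressor(f, np.array([1.0, 2.0]))
```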

Information Leakage from Data Updates in Machine Learning Models

no code implementations · 20 Sep 2023 · Tian Hui, Farhad Farokhi, Olga Ohrimenko

We validate that access to two snapshots of a model can result in higher information leakage than access to the updated model alone.

Tasks: Attribute
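
The threat setting can be pictured with a small sketch: an adversary who can query both the pre-update and post-update snapshots gains a per-probe "delta" signal on top of the updated model's outputs. The callable models, probe set, and feature construction here are illustrative assumptions, not the paper's exact attack.

```python
import numpy as np

def two_snapshot_features(model_old, model_new, probe_inputs):
    """Attack features from two snapshots of an updated model.

    Scores each probe under both the pre- and post-update models; the
    update-induced change in outputs carries extra signal about the data
    added in the update, beyond the updated model's outputs alone.
    """
    out_old = np.array([model_old(x) for x in probe_inputs])
    out_new = np.array([model_new(x) for x in probe_inputs])
    # Per-probe feature: (new output, change caused by the update).
    return np.stack([out_new, out_new - out_old], axis=-1)

# Toy usage with hypothetical scalar-output models
feats = two_snapshot_features(lambda x: 0.4 * x, lambda x: 0.7 * x,
                              [1.0, 2.0, 3.0])
```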

Fingerprint Attack: Client De-Anonymization in Federated Learning

1 code implementation · 12 Sep 2023 · Qiongkai Xu, Trevor Cohn, Olga Ohrimenko

Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server or one another.

Tasks: Clustering, Federated Learning
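
The fingerprinting idea can be sketched as clustering anonymized per-round updates so that contributions from the same client are linked across rounds. The array shapes, the flattened-update representation, and the use of scikit-learn's KMeans are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def link_anonymous_updates(updates_per_round, n_clients, seed=0):
    """Re-link anonymized federated-learning updates across rounds.

    updates_per_round: list of arrays, each of shape (n_clients, dim),
    holding the shuffled client updates observed in one round. If each
    client's local data leaves a stable "fingerprint" in its updates,
    clustering groups the same client's contributions across rounds.
    """
    all_updates = np.vstack(updates_per_round)  # (rounds * n_clients, dim)
    labels = KMeans(n_clusters=n_clients, n_init=10,
                    random_state=seed).fit_predict(all_updates)
    # One cluster-label vector per round; matching labels across rounds
    # de-anonymizes clients up to a permutation of cluster ids.
    return np.split(labels, len(updates_per_round))
```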

RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion

1 code implementation · NeurIPS 2023 · Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, Benjamin I. P. Rubinstein

When applied to the popular MalConv malware detection model, our smoothing mechanism RS-Del achieves a certified accuracy of 91% at an edit distance radius of 128 bytes.

Tasks: Binary Classification, Malware Detection
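
A minimal sketch of prediction under randomized deletion, in the spirit of RS-Del: each byte of the input is deleted independently with some probability, and the base classifier's votes over many perturbed copies are aggregated. The deletion rate, sample count, and toy classifier below are illustrative; the conversion of vote margins into an edit-distance certificate is omitted.

```python
import random
from collections import Counter

def rs_del_predict(classifier, byte_seq, p_del=0.9, n_samples=500, seed=0):
    """Smoothed prediction via randomized deletion (RS-Del-style sketch).

    Each byte is deleted independently with probability p_del; the base
    classifier votes on every perturbed sequence and the majority label
    is returned along with the raw vote counts.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        kept = bytes(b for b in byte_seq if rng.random() > p_del)
        votes[classifier(kept)] += 1
    label, _ = votes.most_common(1)[0]
    return label, votes

# Toy usage with a hypothetical length-based "detector"
label, votes = rs_del_predict(
    lambda s: "malicious" if len(s) > 2 else "benign",
    b"\x90\x90\x90\x90\x90\x90", p_del=0.5, n_samples=200)
```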

DDoD: Dual Denial of Decision Attacks on Human-AI Teams

no code implementations · 7 Dec 2022 · Benjamin Tag, Niels van Berkel, Sunny Verma, Benjamin Zi Hao Zhao, Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko

Artificial Intelligence (AI) systems have been increasingly used to make decision-making processes faster, more accurate, and more efficient.

Tasks: Decision Making

Protecting Global Properties of Datasets with Distribution Privacy Mechanisms

1 code implementation · 18 Jul 2022 · Michelle Chen, Olga Ohrimenko

In this work, we demonstrate how a distribution privacy framework can be applied to formalize such data confidentiality.

Tasks: Attribute Inference Attack
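
As a toy sketch of the underlying idea, a data curator can add calibrated noise to a dataset-level statistic before release so that the global property is hidden from an attribute-inference adversary. The Gaussian noise and its scale below are illustrative only, not the paper's calibrated distribution privacy mechanisms.

```python
import numpy as np

def release_noisy_property(dataset, statistic, scale=0.1, seed=0):
    """Release a dataset-level statistic with additive Gaussian noise.

    Hides the exact global property (e.g., the fraction of records with
    a sensitive attribute). In a real mechanism, `scale` must be
    calibrated to the statistic's sensitivity and the target privacy
    level; here it is a placeholder.
    """
    rng = np.random.default_rng(seed)
    return statistic(np.asarray(dataset)) + rng.normal(scale=scale)

# Toy usage: protect the positive rate of a binary attribute column.
noisy_rate = release_noisy_property([0, 1, 1, 0, 1], np.mean)
```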

Oblivious Sampling Algorithms for Private Data Analysis

no code implementations · NeurIPS 2019 · Sajin Sasy, Olga Ohrimenko

We study secure and privacy-preserving data analysis based on queries executed on samples from a dataset.

Tasks: Privacy Preserving

Attribute Privacy: Framework and Mechanisms

no code implementations · 8 Sep 2020 · Wanrong Zhang, Olga Ohrimenko, Rachel Cummings

We propose definitions to capture "attribute privacy" in two relevant cases where global attributes may need to be protected: (1) properties of a specific dataset and (2) parameters of the underlying distribution from which the dataset is sampled.

Tasks: Attribute
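
The paper builds on the Pufferfish privacy framework. Paraphrasing the general shape of such a definition (not the paper's exact statement), a mechanism M protects a pair of secrets (s_i, s_j) against a class Θ of data distributions if, for every output o and every θ ∈ Θ under which both secrets are feasible:

```latex
e^{-\varepsilon}
  \;\le\;
  \frac{\Pr\!\left[\mathcal{M}(X) = o \mid s_i,\, \theta\right]}
       {\Pr\!\left[\mathcal{M}(X) = o \mid s_j,\, \theta\right]}
  \;\le\;
  e^{\varepsilon}
```

Instantiating the secrets as values of a global dataset property or distribution parameter yields the two cases above.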

Replication-Robust Payoff-Allocation for Machine Learning Data Markets

no code implementations · 25 Jun 2020 · Dongge Han, Michael Wooldridge, Alex Rogers, Olga Ohrimenko, Sebastian Tschiatschek

In this paper, we systematically study replication manipulation in submodular games and investigate replication robustness, a metric that quantitatively measures how robust a solution concept is to replication.

Tasks: BIG-bench Machine Learning
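
A toy worked example of why replication matters: with a Shapley-value payoff and a submodular "size of the union of contributed data" valuation (both hypothetical choices for illustration), a player that clones another's data siphons payoff away from the original contributor.

```python
import math
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values via all orderings (fine for toy games)."""
    payoff = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            payoff[p] += value(coalition | {p}) - value(coalition)
            coalition.add(p)
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in payoff.items()}

# Hypothetical submodular valuation: worth of the union of data points.
data = {"A": {1, 2}, "B": {2, 3}, "C": {1, 2}}  # "C" replicates "A"'s data
value = lambda s: len(set().union(*(data[p] for p in s))) if s else 0
print(shapley(list(data), value))
```

In this toy game, A's Shapley payoff drops from 1.5 (without the replicator) to about 0.83 once C joins with a copy of A's data, while C earns the same 0.83 for contributing nothing new; replication-robust payoff rules are designed to remove this incentive.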

Leakage of Dataset Properties in Multi-Party Machine Learning

1 code implementation · 12 Jun 2020 · Wanrong Zhang, Shruti Tople, Olga Ohrimenko

Using multiple machine learning models, we show that leakage occurs even if the sensitive attribute is not included in the training data and has a low correlation with other attributes or the target variable.

Tasks: Attribute, BIG-bench Machine Learning
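
A common template for such property-inference attacks (sketched here under assumed shapes, not necessarily the paper's exact pipeline) is a meta-classifier trained on the outputs of shadow models built from datasets with and without the sensitive global property.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_dataset_property(shadow_features, shadow_property, target_features):
    """Meta-classifier for dataset-property inference.

    shadow_features: one row per shadow model, e.g. its predictions on a
    fixed probe set; shadow_property: 0/1 flag for whether that shadow
    model's training data satisfied the global property. The fitted
    meta-classifier then guesses the property behind the target model.
    """
    meta = LogisticRegression(max_iter=1000)
    meta.fit(np.asarray(shadow_features), np.asarray(shadow_property))
    return meta.predict(np.asarray(target_features).reshape(1, -1))[0]
```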

Analyzing Information Leakage of Updates to Natural Language Models

no code implementations · 17 Dec 2019 · Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt

To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models.

Tasks: Language Modelling

Collaborative Machine Learning Markets with Data-Replication-Robust Payments

no code implementations · 8 Nov 2019 · Olga Ohrimenko, Shruti Tople, Sebastian Tschiatschek

We study the problem of collaborative machine learning markets where multiple parties can achieve improved performance on their machine learning tasks by combining their training data.

Tasks: BIG-bench Machine Learning

Analyzing Privacy Loss in Updates of Natural Language Models

no code implementations · 25 Sep 2019 · Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella-Béguelin

To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models.

Contamination Attacks and Mitigation in Multi-Party Machine Learning

no code implementations · NeurIPS 2018 · Jamie Hayes, Olga Ohrimenko

Machine learning is data hungry; the more data a model has access to in training, the more likely it is to perform well at inference time.

Tasks: BIG-bench Machine Learning
