Search Results for author: Adrià Gascón

Found 8 papers, 5 papers with code

MPC-Friendly Commitments for Publicly Verifiable Covert Security

no code implementations • 15 Sep 2021 • Nitin Agrawal, James Bell, Adrià Gascón, Matt J. Kusner

We address the problem of efficiently verifying a commitment in a two-party computation.

Data Generation for Neural Programming by Example

1 code implementation • 6 Nov 2019 • Judith Clymo, Haik Manukian, Nathanaël Fijalkow, Adrià Gascón, Brooks Paige

A particular challenge lies in generating meaningful sets of inputs and outputs that characterize a given program well and accurately demonstrate its behavior.

BIG-bench Machine Learning · Synthetic Data Generation

Private Protocols for U-Statistics in the Local Model and Beyond

no code implementations • 9 Oct 2019 • James Bell, Aurélien Bellet, Adrià Gascón, Tejas Kulkarni

In this paper, we study the problem of computing $U$-statistics of degree $2$, i.e., quantities that come in the form of averages over pairs of data points, in the local model of differential privacy (LDP).
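The quantity described in the snippet above, a degree-2 U-statistic, can be sketched in the clear (without the paper's local privacy mechanism) as follows; the function name and the example kernel are illustrative choices, not taken from the paper.

```python
from itertools import combinations

def u_statistic_deg2(data, kernel):
    """Degree-2 U-statistic: the average of kernel(x_i, x_j)
    over all unordered pairs of distinct data points."""
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# Example kernel: absolute difference, giving the Gini mean difference.
values = [1.0, 3.0, 5.0]
result = u_statistic_deg2(values, lambda x, y: abs(x - y))
# pairs (1,3), (1,5), (3,5) contribute 2 + 4 + 2, averaged over 3 pairs
print(result)
```

The paper's contribution concerns estimating such averages when each data point is held by a different user under LDP; this sketch only shows the target statistic itself.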

Clustering · Metric Learning

QUOTIENT: Two-Party Secure Neural Network Training and Prediction

no code implementations • 8 Jul 2019 • Nitin Agrawal, Ali Shahin Shamsabadi, Matt J. Kusner, Adrià Gascón

In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts.


TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

1 code implementation • ICML 2018 • Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade

The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data.

BIG-bench Machine Learning · Binarization

Blind Justice: Fairness with Encrypted Sensitive Attributes

1 code implementation • ICML 2018 • Niki Kilbertus, Adrià Gascón, Matt J. Kusner, Michael Veale, Krishna P. Gummadi, Adrian Weller

Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race.

