Search Results for author: Graham Cormode

Found 16 papers, 8 papers with code

Opacus: User-Friendly Differential Privacy Library in PyTorch

3 code implementations • 25 Sep 2021 • Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov

We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai).
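The core step of differentially private SGD training, which Opacus implements for PyTorch models, is clipping each per-sample gradient to a norm bound and adding Gaussian noise scaled to that bound before averaging. The sketch below illustrates that step in plain Python on toy list-valued gradients; the function name `dp_average_gradient` and its signature are illustrative, not Opacus's API.

```python
import math
import random

def dp_average_gradient(per_sample_grads, max_grad_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to max_grad_norm, sum the clipped
    gradients, add Gaussian noise with std noise_multiplier * max_grad_norm,
    then average. A conceptual sketch of the DP-SGD step, not Opacus's API."""
    dim = len(per_sample_grads[0])
    total = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, max_grad_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    sigma = noise_multiplier * max_grad_norm
    n = len(per_sample_grads)
    return [(t + rng.gauss(0.0, sigma)) / n for t in total]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.5, 0.5]]  # two toy per-sample gradients
# with noise_multiplier=0 this is just the clipped average: [0.55, 0.65]
noisy = dp_average_gradient(grads, max_grad_norm=1.0, noise_multiplier=0.0, rng=rng)
```

In a real training loop the noise multiplier is chosen (e.g. by a privacy accountant) to meet a target (epsilon, delta) budget over all iterations.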

Federated Boosted Decision Trees with Differential Privacy

1 code implementation • 6 Oct 2022 • Samuel Maddock, Graham Cormode, Tianhao Wang, Carsten Maple, Somesh Jha

There is great demand for scalable, secure, and efficient privacy-preserving machine learning models that can be trained over distributed data.

Privacy Preserving

On the Importance of Difficulty Calibration in Membership Inference Attacks

1 code implementation • ICLR 2022 • Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles

The vulnerability of machine learning models to membership inference attacks has received much attention in recent years.

Reconciling Security and Communication Efficiency in Federated Learning

1 code implementation • 26 Jul 2022 • Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour, Pierre Stock

Cross-device Federated Learning is an increasingly popular machine learning setting to train a model by leveraging a large population of client devices with high privacy and security guarantees.

Federated Learning • Quantization

Node Classification in Social Networks

1 code implementation • 17 Jan 2011 • Smriti Bhagat, Graham Cormode, S. Muthukrishnan

When dealing with large graphs, such as those that arise in the context of online social networks, a subset of nodes may be labeled.

Social and Information Networks • Physics and Society
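One standard baseline for the setting described above, where labels are known only for a subset of nodes, is iterative label propagation: each unlabeled node repeatedly takes the majority label of its neighbours. The sketch below is illustrative of that family of local-update methods, not the paper's own algorithm; the graph and function name are invented for the example.

```python
from collections import Counter

def propagate_labels(adj, labels, rounds=5):
    """Iteratively assign each unlabeled node the majority label of its
    neighbours. `adj` maps node -> list of neighbours; `labels` maps the
    seed (labeled) nodes to their labels, which are kept fixed."""
    current = dict(labels)
    for _ in range(rounds):
        updated = dict(current)
        for node, neighbours in adj.items():
            if node in labels:  # seed labels never change
                continue
            votes = Counter(current[n] for n in neighbours if n in current)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        current = updated
    return current

# toy graph: node 0 is labeled, labels spread outward over two rounds
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
seed = {0: "A"}
result = propagate_labels(adj, seed)
```

Updates are computed against a snapshot of the previous round (`current`) so the result does not depend on node iteration order within a round.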

Iterative Hessian Sketch in Input Sparsity Time

1 code implementation • 30 Oct 2019 • Graham Cormode, Charlie Dickens

Scalable algorithms that solve optimization and regression tasks, even approximately, are needed to work with large datasets.

regression

Learning Graphical Models from a Distributed Stream

no code implementations • 5 Oct 2017 • Yu Zhang, Srikanta Tirthapura, Graham Cormode

We study Bayesian networks, the workhorse of graphical models, and present a communication-efficient method for continuously learning and maintaining a Bayesian network model over data that is arriving as a distributed stream partitioned across multiple processors.

Management

Leveraging Well-Conditioned Bases: Streaming and Distributed Summaries in Minkowski $p$-Norms

no code implementations • ICML 2018 • Charlie Dickens, Graham Cormode, David Woodruff

Work on approximate linear algebra has led to efficient distributed and streaming algorithms for problems such as approximate matrix multiplication, low rank approximation, and regression, primarily for the Euclidean norm $\ell_2$.

regression

Frequency Estimation Under Multiparty Differential Privacy: One-shot and Streaming

no code implementations • 5 Apr 2021 • Ziyue Huang, Yuan Qiu, Ke Yi, Graham Cormode

We study the fundamental problem of frequency estimation under both privacy and communication constraints, where the data is distributed among $k$ parties.
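A standard baseline for frequency estimation under local privacy constraints is k-ary randomized response: each party perturbs its own value before reporting it, and an aggregator inverts the perturbation to obtain unbiased frequency estimates. The sketch below illustrates that baseline, not the paper's mechanism; the function names are invented for the example.

```python
import math
import random

def randomized_response(value, domain, eps, rng):
    """k-ary randomized response: report the true value with probability
    p = e^eps / (e^eps + k - 1), otherwise a uniform different value."""
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def estimate_frequencies(reports, domain, eps):
    """Invert the perturbation to get unbiased frequency estimates."""
    k, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1.0 - p) / (k - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] - n * q) / (p - q) for v in domain}

rng = random.Random(7)
domain = ["a", "b", "c"]
true_values = ["a"] * 60 + ["b"] * 30 + ["c"] * 10
reports = [randomized_response(v, domain, eps=1.0, rng=rng) for v in true_values]
est = estimate_frequencies(reports, domain, eps=1.0)
```

Each estimate is unbiased but noisy; the estimates always sum to the number of reports, and their variance shrinks as eps or the number of parties grows.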

Optimal Membership Inference Bounds for Adaptive Composition of Sampled Gaussian Mechanisms

no code implementations • 12 Apr 2022 • Saeed Mahloujifar, Alexandre Sablayrolles, Graham Cormode, Somesh Jha

A common countermeasure against MI attacks is to utilize differential privacy (DP) during model training to mask the presence of individual examples.

PrivBayes: Private Data release via Bayesian networks

no code implementations • Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data 2014 • Jun Zhang, Graham Cormode, Cecilia M. Procopiuc, Divesh Srivastava, Xiaokui Xiao

Given a dataset D, PRIVBAYES first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PRIVBAYES injects noise into each marginal in P to ensure differential privacy, and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, PRIVBAYES samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data.

Privacy Preserving
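The noise-injection and sampling steps of the pipeline described above can be sketched for a single attribute: add Laplace noise to a one-way marginal, clamp negatives, normalize, and sample synthetic values. This is a deliberately simplified illustration, not the full PrivBayes algorithm, which works over a Bayesian network of low-dimensional marginals; the function names are invented for the example.

```python
import math
import random

def noisy_marginal(counts, eps, rng):
    """Add Laplace(1/eps) noise to each cell of a one-way marginal and
    clamp negatives to zero: the per-marginal noise step, reduced to a
    single attribute for illustration."""
    def laplace(scale):
        u = rng.random() - 0.5  # inverse-CDF Laplace sample
        sign = 1.0 if u >= 0 else -1.0
        return -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return {v: max(0.0, c + laplace(1.0 / eps)) for v, c in counts.items()}

def sample_synthetic(marginal, n, rng):
    """Normalize the noisy marginal and draw n synthetic values from it."""
    values, weights = zip(*marginal.items())
    return rng.choices(values, weights=weights, k=n)

rng = random.Random(42)
counts = {"red": 50, "green": 30, "blue": 20}
noisy = noisy_marginal(counts, eps=1.0, rng=rng)
synthetic = sample_synthetic(noisy, n=100, rng=rng)
```

Because only the noisy marginal touches the data, the synthetic sample inherits its differential privacy guarantee via post-processing.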

Federated Calibration and Evaluation of Binary Classifiers

no code implementations • 22 Oct 2022 • Graham Cormode, Igor Markov

We address two major obstacles to practical use of supervised classifiers on distributed private data.

Pruning Compact ConvNets for Efficient Inference

no code implementations • 11 Jan 2023 • Sayan Ghosh, Karthik Prasad, Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Graham Cormode, Peter Vajda

The resulting family of pruned models can consistently obtain better performance than existing FBNetV3 models at the same level of computation, and thus provide state-of-the-art results when trading off between computational complexity and generalization performance on the ImageNet benchmark.

Network Pruning • Neural Architecture Search
