Search Results for author: Andrew Trask

Found 18 papers, 8 papers with code

Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases

no code implementations • 15 Mar 2023 • Emma Bluemke, Tantum Collins, Ben Garfinkel, Andrew Trask

The development of privacy-enhancing technologies has made immense progress in reducing trade-offs between privacy and performance in data exchange and analysis.

Towards General-purpose Infrastructure for Protecting Scientific Data Under Study

1 code implementation • 4 Oct 2021 • Andrew Trask, Kritika Prakash

The scientific method presents a key challenge to privacy because it requires many samples to support a claim.

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).

BIG-bench Machine Learning
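
To make the idea concrete, here is a minimal sketch of what autodiff-based sensitivity analysis looks like for a simple query; it is purely illustrative and does not use the Tritium API. For a mean over n records bounded in [0, 1], automatic differentiation recovers each record's influence of 1/n, which bounds the query's sensitivity.

```python
# Minimal sketch of autodiff-based sensitivity analysis (illustrative only,
# not the Tritium API). For f(x) = mean(x) with records in [0, 1],
# |df/dx_i| = 1/n, so changing one record moves the output by at most 1/n.
import torch

x = torch.rand(100, requires_grad=True)   # 100 records in [0, 1]
query = x.mean()                          # the query under analysis
query.backward()                          # autodiff gives per-record influence

per_record_influence = x.grad.abs()       # |df/dx_i| for each record
sensitivity_bound = per_record_influence.max()  # times the data range (here 1)
print(float(sensitivity_bound))           # ~0.01 = 1/n
```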

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.

BIG-bench Machine Learning
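
One concrete way to track each individual's contribution through the flow of computation is to compute per-example gradients; the sketch below uses PyTorch's torch.func for this and is an assumed setup, not the hybrid automatic differentiation system the paper describes.

```python
# Minimal sketch (assumed setup, not the paper's implementation): per-example
# gradients expose how much each individual's data influences the update.
import torch
from torch.func import grad, vmap

w = torch.randn(3)                                   # linear model parameters

def per_example_loss(w, x, y):
    return (x @ w - y) ** 2                          # squared error for one example

xs, ys = torch.randn(8, 3), torch.randn(8)
# vmap over the batch dimension of (x, y) yields one gradient per individual.
per_example_grads = vmap(grad(per_example_loss), in_dims=(None, 0, 0))(w, xs, ys)
per_example_norms = per_example_grads.norm(dim=1)    # each person's influence
print(per_example_norms)
```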

DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?

1 code implementation • 22 Jun 2021 • Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, FatemehSadat Mireshghallah, Andrew Trask

Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population, leading to a significantly larger drop in model utility for under-represented sub-populations (minorities) compared to well-represented ones.

Fairness
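
For context, DP-SGD modifies each optimization step by clipping per-example gradients and adding Gaussian noise. The sketch below shows the standard step with placeholder hyperparameters; it is illustrative and not the paper's experimental code.

```python
# Minimal sketch of one DP-SGD step (standard algorithm, illustrative
# hyperparameters): clip each example's gradient, then add Gaussian noise.
import torch

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    # per_example_grads: tensor of shape (batch, num_params)
    norms = per_example_grads.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = per_example_grads * (clip_norm / norms).clamp(max=1.0)   # per-example clipping
    noisy_sum = clipped.sum(dim=0) + torch.randn_like(params) * noise_multiplier * clip_norm
    return params - lr * noisy_sum / per_example_grads.shape[0]        # noisy average update

params = torch.zeros(4)
grads = torch.randn(32, 4)          # stand-in per-example gradients
params = dp_sgd_step(params, grads)
```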

Beyond Privacy Trade-offs with Structured Transparency

no code implementations • 15 Dec 2020 • Andrew Trask, Emma Bluemke, Ben Garfinkel, Claudia Ghezzou Cuervas-Mons, Allan Dafoe

The copy problem is often amplified by three related problems which we term the bundling, edit, and recursive enforcement problems.

Federated Learning · Cryptography and Security · Computers and Society

Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy

2 code implementations • 10 Sep 2020 • Tom Farrand, FatemehSadat Mireshghallah, Sahib Singh, Andrew Trask

Deployment of deep learning across fields and industries is growing rapidly due to its performance, which in turn relies on the availability of data and compute.

Fairness

Benchmarking Differentially Private Residual Networks for Medical Imagery

1 code implementation • 27 May 2020 • Sahib Singh, Harshvardhan Sikka, Sasikanth Kotti, Andrew Trask

In this paper we measure the effectiveness of $\epsilon$-Differential Privacy (DP) when applied to medical imaging.

Benchmarking
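
As a rough illustration of what differentially private training of a residual network can look like in practice, the sketch below assumes the Opacus library and uses illustrative settings; it is not necessarily the paper's configuration.

```python
# Minimal sketch of DP training setup for a ResNet with Opacus (illustrative
# settings, not necessarily the paper's configuration).
import torch
import torchvision
from opacus import PrivacyEngine

# BatchNorm mixes statistics across examples, which breaks per-example DP
# accounting, so swap it for GroupNorm.
model = torchvision.models.resnet18(
    num_classes=10, norm_layer=lambda channels: torch.nn.GroupNorm(8, channels))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.FakeData(size=64, transform=torchvision.transforms.ToTensor()),
    batch_size=16)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=train_loader,
    noise_multiplier=1.1, max_grad_norm=1.0)
# After training, the spent budget can be read from the accountant, e.g.
# epsilon = privacy_engine.accountant.get_epsilon(delta=1e-5)
```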

The Future of Digital Health with Federated Learning

no code implementations • 18 Mar 2020 • Nicola Rieke, Jonny Hancox, Wenqi Li, Fausto Milletari, Holger Roth, Shadi Albarqouni, Spyridon Bakas, Mathieu N. Galtier, Bennett Landman, Klaus Maier-Hein, Sebastien Ourselin, Micah Sheller, Ronald M. Summers, Andrew Trask, Daguang Xu, Maximilian Baust, M. Jorge Cardoso

Data-driven Machine Learning has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems.

Federated Learning
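
The basic protocol behind federated learning is federated averaging (FedAvg): each site trains on its own data and a server combines the resulting weights, so raw records never leave the site. The sketch below is a minimal illustration, not code from the paper.

```python
# Minimal sketch of federated averaging (FedAvg): the server forms a
# data-size-weighted average of the clients' locally trained weights.
import torch

def federated_average(client_state_dicts, client_sizes):
    total = sum(client_sizes)
    avg = {}
    for key in client_state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# Toy example: two "hospitals" each hold a local copy of the same tiny model.
clients = [torch.nn.Linear(4, 1).state_dict() for _ in range(2)]
global_weights = federated_average(clients, client_sizes=[100, 300])
```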

Scaling shared model governance via model splitting

no code implementations • ICLR 2019 • Miljan Martic, Jan Leike, Andrew Trask, Matteo Hessel, Shane Legg, Pushmeet Kohli

Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation.

Reinforcement Learning (RL)
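
A minimal sketch of the model-splitting idea: each party holds only part of the network, so a forward pass requires every party's cooperation. The class names and layer sizes here are hypothetical and not the paper's setup.

```python
# Minimal sketch of model splitting for shared governance (hypothetical
# class names and sizes): neither party can run the model alone.
import torch
import torch.nn as nn

class PartyA(nn.Module):
    """Holds only the first half of the layers."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
    def forward(self, x):
        return self.layers(x)

class PartyB(nn.Module):
    """Holds only the second half of the layers."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Linear(32, 1)
    def forward(self, h):
        return self.layers(h)

# A full forward pass succeeds only when both parties contribute their part.
x = torch.randn(1, 16)
output = PartyB()(PartyA()(x))
```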

Neural Arithmetic Logic Units

21 code implementations • NeurIPS 2018 • Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom

Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.
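
For reference, the NAC/NALU cell follows directly from the paper's formulas: an additive path a = Wx with W = tanh(W_hat) * sigmoid(M_hat), a multiplicative path that reuses the same weights in log space, and a learned gate between the two. The sketch below is a plain, untuned PyTorch rendering, not the authors' reference code.

```python
# Minimal sketch of a Neural Accumulator (NAC) and NALU cell following the
# paper's formulas: W = tanh(W_hat) * sigmoid(M_hat) biases weights toward
# {-1, 0, 1}; a gated log-space path handles multiplication and division.
import torch
import torch.nn as nn

class NAC(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        return x @ W.t()                                        # additive path: a = W x

class NALU(nn.Module):
    def __init__(self, n_in, n_out, eps=1e-8):
        super().__init__()
        self.nac = NAC(n_in, n_out)                             # weights shared by both paths
        self.G = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        self.eps = eps
    def forward(self, x):
        a = self.nac(x)                                         # add / subtract
        m = torch.exp(self.nac(torch.log(x.abs() + self.eps)))  # multiply / divide
        g = torch.sigmoid(x @ self.G.t())                       # learned gate
        return g * a + (1 - g) * m

y = NALU(2, 1)(torch.tensor([[3.0, 4.0]]))   # can learn e.g. x1 + x2 or x1 * x2
```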

sense2vec - A Fast and Accurate Method for Word Sense Disambiguation In Neural Word Embeddings

1 code implementation • 19 Nov 2015 • Andrew Trask, Phil Michalak, John Liu

Neural word representations have proven useful in Natural Language Processing (NLP) tasks due to their ability to efficiently model complex semantic and syntactic word relationships.

Dependency Parsing · Word Embeddings +1
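
The underlying recipe is simple: disambiguate tokens by their part-of-speech (or entity) tag, then train ordinary word embeddings over the tagged tokens. The sketch below assumes spaCy's en_core_web_sm model and gensim; it illustrates the idea rather than reproducing the authors' original pipeline.

```python
# Minimal sketch of the sense2vec idea (assumed pipeline, not the authors'
# code): append a POS tag to each token so different senses get different keys,
# then train standard word embeddings over the tagged tokens.
import spacy
from gensim.models import Word2Vec

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed
texts = [
    "I saw a duck swim across the pond.",
    "You should duck under the low branch.",
]
tagged_sentences = [
    [f"{tok.lower_}|{tok.pos_}" for tok in nlp(text) if not tok.is_punct]
    for text in texts
]
# The two occurrences of "duck" now map to distinct keys (e.g. duck|NOUN, duck|VERB).
model = Word2Vec(tagged_sentences, vector_size=50, window=5, min_count=1)
if "duck|NOUN" in model.wv:
    print(model.wv.most_similar("duck|NOUN", topn=3))
```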

Modeling Order in Neural Word Embeddings at Scale

no code implementations • 8 Jun 2015 • Andrew Trask, David Gilmore, Matthew Russell

Natural Language Processing (NLP) systems commonly leverage bag-of-words co-occurrence techniques to capture semantic and syntactic word relationships.

Language Modelling · Word Embeddings
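
As a reference point for the bag-of-words co-occurrence techniques the abstract mentions, a minimal windowed co-occurrence counter looks like the sketch below (illustrative only).

```python
# Minimal sketch of bag-of-words co-occurrence counting: count how often word
# pairs appear within a fixed window of each other.
from collections import Counter

def cooccurrence_counts(sentences, window=2):
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[(w, tokens[j])] += 1
    return counts

counts = cooccurrence_counts(["the quick brown fox", "the lazy brown dog"])
print(counts[("brown", "the")])   # 2: "brown" co-occurs with "the" in both sentences
```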
