Search Results for author: Christopher Frye

Found 9 papers, 1 paper with code

Representation Learning for High-Dimensional Data Collection under Local Differential Privacy

no code implementations · 23 Oct 2020 · Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber

In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.

Tasks: Denoising, Representation Learning, +1
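The paper above concerns *learning* LDP mechanisms with representation-learning techniques; as a simple point of reference, the sketch below shows the classic (non-learned) Laplace mechanism applied locally by each user. This is illustrative only and is not the authors' method; all names here are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

# For contrast with a *learned* mechanism: the classic Laplace mechanism,
# applied locally by each user to a scalar value in [0, 1].
def laplace_ldp(values, eps):
    # Sensitivity of a value in [0, 1] is 1, so the noise scale is 1 / eps.
    return values + rng.laplace(scale=1.0 / eps, size=values.shape)

values = rng.uniform(size=100_000)  # one scalar per user
noisy = laplace_ldp(values, eps=1.0)

# Each individual report is heavily randomised, but aggregates survive.
print(values.mean(), noisy.mean())
```

The tension this illustrates is the one the paper targets: per-user noise is large (the reports are private) while population statistics remain usable.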

Explainability for fair machine learning

no code implementations · 14 Oct 2020 · Tom Begley, Tobias Schwedes, Christopher Frye, Ilya Feige

Moreover, motivated by the linearity of Shapley explainability, we propose a meta algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model, rather than a new model entirely.

Tasks: Attribute, BIG-bench Machine Learning, +1
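The abstract snippet describes training a perturbation to a fixed model rather than training a new model. A minimal sketch of that idea, assuming a fixed linear scorer and a demographic-parity penalty (both toy choices of ours, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed original model f: a linear scorer we are not allowed to retrain.
w = np.array([1.0, -0.5])
f = lambda X: X @ w

# Toy data in which feature 0 correlates with a binary protected attribute a.
X = rng.normal(size=(1000, 2))
a = (X[:, 0] > 0).astype(float)

# Train only an additive perturbation g(x) = x @ u, penalising the
# demographic-parity gap of f + g while keeping the perturbation small.
u = np.zeros(2)
lr, lam = 0.1, 0.1
d = X[a == 1].mean(axis=0) - X[a == 0].mean(axis=0)  # group mean difference
for _ in range(500):
    gap = (f(X) + X @ u)[a == 1].mean() - (f(X) + X @ u)[a == 0].mean()
    u -= lr * (2 * gap * d + 2 * lam * u)  # gradient of gap^2 + lam*||u||^2

before = f(X)[a == 1].mean() - f(X)[a == 0].mean()
after = (f(X) + X @ u)[a == 1].mean() - (f(X) + X @ u)[a == 0].mean()
print(before, after)  # the perturbed model has a much smaller parity gap
```

The design point the paper's meta-algorithm exploits: because the original model stays fixed, any existing training-time fairness intervention can be repurposed to train the (typically much smaller) perturbation.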

Human-interpretable model explainability on high-dimensional data

no code implementations · 14 Oct 2020 · Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige

The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex.

Tasks: Image Classification, Image-to-Image Translation, +2

Shapley explainability on the data manifold

no code implementations · ICLR 2021 · Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige

Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions.

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

1 code implementation · NeurIPS 2020 · Christopher Frye, Colin Rowat, Ilya Feige

We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and flexible enough to incorporate any causal structure known to be respected by the data.

Tasks: Feature Selection, Time Series, +1
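Shapley values average a feature's marginal contribution over orderings of the features; the ASVs described above restrict attention to orderings consistent with known causal structure. A brute-force sketch — the value function here is a toy coalitional game, not the paper's conditional-expectation value function:

```python
from itertools import permutations

def shapley_values(v, n, allowed=None):
    """Average each feature's marginal contribution over orderings.

    v: value function from a frozenset of feature indices to a float.
    allowed: optional predicate on orderings; restricting to causally
             consistent orderings yields Asymmetric Shapley values.
    """
    perms = [p for p in permutations(range(n))
             if allowed is None or allowed(p)]
    phi = [0.0] * n
    for perm in perms:
        coalition = frozenset()
        for i in perm:
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition = coalition | {i}
    return [total / len(perms) for total in phi]

# Toy value function with an interaction: payoff only when both features act.
v = lambda S: 1.0 if {0, 1} <= S else 0.0

sym = shapley_values(v, 2)
# Suppose feature 0 is a known causal ancestor of feature 1: keep only
# orderings that place 0 before 1.
asym = shapley_values(v, 2, allowed=lambda p: p.index(0) < p.index(1))
print(sym)   # [0.5, 0.5] -- symmetric values split the interaction credit
print(asym)  # [0.0, 1.0] -- restricting the orderings redistributes credit
```

Setting `allowed=None` recovers ordinary (symmetric) Shapley values, consistent with the paper's claim that ASVs generalise rather than replace them.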

Binary JUNIPR: an interpretable probabilistic model for discrimination

no code implementations · 24 Jun 2019 · Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz

We refer to this refined approach as Binary JUNIPR.

High Energy Physics - Phenomenology
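Binary JUNIPR discriminates between two jet classes by comparing the likelihoods that two class-specific probabilistic models assign to the same jet. A sketch with 1-D Gaussians standing in for the learned densities (the densities and data below are illustrative assumptions, not JUNIPR models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for two trained class-conditional densities: 1-D Gaussians of
# equal width, so the normalisation constants cancel in the ratio.
log_p_sig = lambda x: -0.5 * (x - 1.0) ** 2  # class 1 ("signal")
log_p_bkg = lambda x: -0.5 * (x + 1.0) ** 2  # class 0 ("background")

# Likelihood-ratio discrimination: score each event by log p_sig - log p_bkg.
x = np.concatenate([rng.normal(1.0, 1.0, 5000), rng.normal(-1.0, 1.0, 5000)])
y = np.concatenate([np.ones(5000), np.zeros(5000)])
score = log_p_sig(x) - log_p_bkg(x)
acc = ((score > 0) == (y == 1)).mean()
print(acc)  # well above chance: the likelihood ratio is a strong classifier
```

The appeal of discriminating this way is interpretability: each factor of the two model probabilities can be inspected, rather than a single opaque classifier score.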

Parenting: Safe Reinforcement Learning from Human Input

no code implementations · 18 Feb 2019 · Christopher Frye, Ilya Feige

Autonomous agents trained via reinforcement learning present numerous safety concerns: reward hacking, negative side effects, and unsafe exploration, among others.

Tasks: Reinforcement Learning (RL), +1

JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics

no code implementations · 25 Apr 2018 · Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz

As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set.

Tasks: BIG-bench Machine Learning
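The reweighting described in the snippet amounts to assigning each event from data set A the density ratio p_B(x) / p_A(x) under the two learned models. A sketch with Gaussian fits standing in for the JUNIPR densities (a toy stand-in, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)

sim = rng.normal(0.0, 1.0, size=20_000)   # "simulated" data set A
expt = rng.normal(0.5, 1.0, size=20_000)  # "experimental" data set B

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Gaussian fits stand in for the two trained generative models.
mu_a, sig_a = sim.mean(), sim.std()
mu_b, sig_b = expt.mean(), expt.std()

# Reweight each simulated event by the density ratio p_B(x) / p_A(x).
weights = gauss_pdf(sim, mu_b, sig_b) / gauss_pdf(sim, mu_a, sig_a)

print(sim.mean())                        # ~0.0 before reweighting
print(np.average(sim, weights=weights))  # ~0.5 after reweighting
```

After reweighting, statistics of the simulated sample match the experimental distribution, which is the use case the abstract describes.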
