no code implementations • 8 Jun 2023 • Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, Christopher Frye
Understanding causality should be a core requirement of any attempt to build real impact through AI.
no code implementations • 23 Oct 2020 • Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber
In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.
no code implementations • 14 Oct 2020 • Tom Begley, Tobias Schwedes, Christopher Frye, Ilya Feige
Moreover, motivated by the linearity of Shapley explainability, we propose a meta algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model, rather than a new model entirely.
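The idea of training a perturbation on top of a frozen model, with a fairness intervention applied only to the perturbation, can be illustrated with a toy sketch. Everything here is an assumption for illustration: the additive form `original_model(x) + x @ theta`, the squared-loss objective, and the demographic-parity-gap penalty are stand-ins, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def original_model(x):
    # frozen pretrained scorer (stand-in: a fixed linear model that
    # leaks the protected attribute through its last weight)
    w = np.array([1.0, -2.0, 0.5])
    return x @ w

def perturbed_model(x, theta):
    # final model = frozen original + small trainable additive correction
    return original_model(x) + x @ theta

# toy data: the last feature defines a protected group; labels ignore it
X = rng.normal(size=(256, 3))
y = X[:, 0] - 2 * X[:, 1]
a = (X[:, 2] > 0).astype(float)

# difference of group feature means (constant, used by the penalty gradient)
d = X[a == 1].mean(axis=0) - X[a == 0].mean(axis=0)

theta = np.zeros(3)
lr, lam = 0.01, 10.0

gap0 = abs(perturbed_model(X, theta)[a == 1].mean()
           - perturbed_model(X, theta)[a == 0].mean())

for _ in range(2000):
    pred = perturbed_model(X, theta)
    grad = 2 * X.T @ (pred - y) / len(y)          # squared-loss gradient
    gap = pred[a == 1].mean() - pred[a == 0].mean()
    grad += 2 * lam * gap * d                     # fairness penalty gradient
    theta -= lr * grad

pred = perturbed_model(X, theta)
final_gap = abs(pred[a == 1].mean() - pred[a == 0].mean())
```

Only the perturbation `theta` is updated; the original model is never touched, which is the point of the meta-algorithm: an existing training-time intervention is applied to a small correction rather than to a full retraining.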
no code implementations • 14 Oct 2020 • Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige
The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex.
no code implementations • ICLR 2021 • Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions.
1 code implementation • NeurIPS 2020 • Christopher Frye, Colin Rowat, Ilya Feige
We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and flexible enough to incorporate any causal structure known to be respected by the data.
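The contrast between symmetric and asymmetric Shapley values can be sketched by brute-force enumeration of feature orderings: standard Shapley values average a feature's marginal contribution over all orderings, while the asymmetric variant averages only over orderings consistent with known causal structure. The value function below (two redundant features, either of which alone yields the full payoff) is an illustrative toy, not the paper's exact construction; the causal constraint assumed is that feature 0 is an ancestor of feature 1.

```python
from itertools import permutations

def shapley(v, n, allowed=None):
    # average each feature's marginal contribution over the (optionally
    # restricted) set of feature orderings
    perms = [p for p in permutations(range(n))
             if allowed is None or allowed(p)]
    phi = [0.0] * n
    for p in perms:
        seen = set()
        for i in p:
            phi[i] += v(seen | {i}) - v(seen)
            seen.add(i)
    return [x / len(perms) for x in phi]

# toy value function: features 0 and 1 are redundant
v = lambda S: 1.0 if S & {0, 1} else 0.0

sym = shapley(v, 2)
# symmetric Shapley: redundant features split the credit, [0.5, 0.5]

asym = shapley(v, 2, allowed=lambda p: p.index(0) < p.index(1))
# asymmetric: only orderings where the causal ancestor (feature 0)
# comes first are allowed, so it receives all the credit, [1.0, 0.0]
```

The restriction to causally consistent orderings is the only change; everything else is the ordinary Shapley average, which is why the construction remains axiomatically grounded.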
no code implementations • 24 Jun 2019 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz
We refer to this refined approach as Binary JUNIPR.
High Energy Physics - Phenomenology
no code implementations • 18 Feb 2019 • Christopher Frye, Ilya Feige
Autonomous agents trained via reinforcement learning present numerous safety concerns: reward hacking, negative side effects, and unsafe exploration, among others.
no code implementations • 25 Apr 2018 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz
As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set.
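Because a probabilistic model assigns a likelihood to each event, samples drawn from distribution A can be reweighted toward distribution B with importance weights p_B(x) / p_A(x). The sketch below uses two Gaussians as stand-ins for the trained models; the distributions and parameters are illustrative assumptions, not JUNIPR itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_logpdf(x, mu, sigma):
    # log-density of N(mu, sigma^2), standing in for a trained model's
    # per-event log-likelihood
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# "simulated" events follow model A; the target data follow model B
mu_a, mu_b, sigma = 0.0, 1.0, 1.0
x = rng.normal(mu_a, sigma, size=100_000)

# per-event weight = ratio of the two model likelihoods
w = np.exp(gauss_logpdf(x, mu_b, sigma) - gauss_logpdf(x, mu_a, sigma))

# weighted statistics of the A-sample now match distribution B:
# the raw mean is near 0, the reweighted mean is near 1
reweighted_mean = np.average(x, weights=w)
```

This is ordinary importance weighting; the contribution of a generative model like JUNIPR is that it supplies tractable per-event likelihoods for both data sets, so the ratio can be evaluated event by event.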