Search Results for author: Anian Ruoss

Found 15 papers, 11 papers with code

Grandmaster-Level Chess Without Search

no code implementations • 7 Feb 2024 • Anian Ruoss, Grégoire Delétang, Sourabh Medapati, Jordi Grau-Moya, Li Kevin Wenliang, Elliot Catt, John Reid, Tim Genewein

Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games.
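The supervised setup described above reduces to plain classification over moves. A minimal sketch of the training objective — the move vocabulary and logits here are hypothetical stand-ins, not the paper's 270M transformer:

```python
import numpy as np

def move_prediction_loss(logits, target_move_idx):
    """Cross-entropy loss for supervised move prediction: the model maps
    a board encoding to logits over a fixed move vocabulary and is pushed
    to assign high probability to the move recorded in the game data.
    No explicit search is involved at train or inference time."""
    z = logits - logits.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax
    return -log_probs[target_move_idx]

# Hypothetical 4-move vocabulary; the model strongly prefers move 2.
logits = np.array([0.1, -1.0, 3.0, 0.2])
loss_correct = move_prediction_loss(logits, 2)   # small loss
loss_wrong = move_prediction_loss(logits, 1)     # large loss
print(loss_correct < loss_wrong)  # True
```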

Learning Universal Predictors

1 code implementation • 26 Jan 2024 • Jordi Grau-Moya, Tim Genewein, Marcus Hutter, Laurent Orseau, Grégoire Delétang, Elliot Catt, Anian Ruoss, Li Kevin Wenliang, Christopher Mattern, Matthew Aitchison, Joel Veness

Meta-learning has emerged as a powerful approach to train neural networks to learn new tasks quickly from limited data.

Meta-Learning

Distributional Bellman Operators over Mean Embeddings

1 code implementation • 9 Dec 2023 • Li Kevin Wenliang, Grégoire Delétang, Matthew Aitchison, Marcus Hutter, Anian Ruoss, Arthur Gretton, Mark Rowland

We propose a novel algorithmic framework for distributional reinforcement learning, based on learning finite-dimensional mean embeddings of return distributions.

Atari Games • Distributional Reinforcement Learning • +1
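A minimal sketch of the core object in that framework — the feature map and sampled returns below are made-up illustrative choices, not the paper's:

```python
import numpy as np

def mean_embedding(returns, feature_map):
    """Represent an empirical return distribution by a finite-dimensional
    vector: the average of a fixed feature map over sampled returns.
    Distributional Bellman updates can then act on this vector rather
    than on a full return distribution."""
    return np.mean([feature_map(g) for g in returns], axis=0)

# Hypothetical cosine features at three fixed frequencies.
omegas = np.array([0.5, 1.0, 2.0])
phi = lambda g: np.cos(omegas * g)

samples = np.array([1.0, 2.0, 3.0])   # sampled returns
emb = mean_embedding(samples, phi)
print(emb.shape)  # (3,)
```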

Finding Increasingly Large Extremal Graphs with AlphaZero and Tabu Search

no code implementations • 6 Nov 2023 • Abbas Mehrabian, Ankit Anand, Hyunjik Kim, Nicolas Sonnerat, Matej Balog, Gheorghe Comanici, Tudor Berariu, Andrew Lee, Anian Ruoss, Anna Bulanova, Daniel Toyama, Sam Blackwell, Bernardino Romera Paredes, Petar Veličković, Laurent Orseau, Joonkyung Lee, Anurag Murty Naredla, Doina Precup, Adam Zsolt Wagner

This work studies a central extremal graph theory problem inspired by a 1975 conjecture of Erdős, which aims to find graphs with a given size (number of nodes) that maximize the number of edges without having 3- or 4-cycles.

Decision Making • Graph Generation
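The feasibility constraint in the problem above (no 3- or 4-cycles) is simple to state in code. This checker sketches only the constraint, not the AlphaZero/tabu search procedure itself:

```python
def has_short_cycle(adj):
    """Return True if an undirected graph (node -> set of neighbours)
    contains a 3- or 4-cycle. The extremal problem asks for the maximum
    number of edges subject to this returning False."""
    nodes = list(adj)
    # 3-cycle: an edge whose endpoints share a common neighbour.
    for u in nodes:
        for v in adj[u]:
            if adj[u] & adj[v]:
                return True
    # 4-cycle: two non-adjacent nodes with two or more common neighbours
    # (adjacent pairs sharing a neighbour were already caught as triangles).
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u] and len(adj[u] & adj[v]) >= 2:
                return True
    return False

# The 5-cycle has girth 5, so it satisfies the constraint.
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(has_short_cycle(c5))  # False
```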

Language Modeling Is Compression

1 code implementation • 19 Sep 2023 • Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, Joel Veness

We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning.

In-Context Learning • Language Modelling

Beyond Bayes-optimality: meta-learning what you know you don't know

no code implementations • 30 Sep 2022 • Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Tim Genewein, Elliot Catt, Kevin Li, Anian Ruoss, Chris Cundy, Joel Veness, Jane Wang, Marcus Hutter, Christopher Summerfield, Shane Legg, Pedro Ortega

This is in contrast to risk-sensitive agents, which additionally exploit the higher-order moments of the return, and ambiguity-sensitive agents, which act differently when recognizing situations in which they lack knowledge.

Decision Making • Meta-Learning

Latent Space Smoothing for Individually Fair Representations

1 code implementation • 26 Nov 2021 • Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev

This enables us to learn individually fair representations that map similar individuals close together by using adversarial training to minimize the distance between their representations.

Fairness • Representation Learning

Fair Normalizing Flows

1 code implementation • ICLR 2022 • Mislav Balunović, Anian Ruoss, Martin Vechev

Fair representation learning is an attractive approach that promises fairness of downstream predictors by encoding sensitive data.

Fairness • Representation Learning • +1

Robustness Certification for Point Cloud Models

1 code implementation • ICCV 2021 • Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, Martin Vechev

In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.

Efficient Certification of Spatial Robustness

1 code implementation • 19 Sep 2020 • Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev

Recent work has exposed the vulnerability of computer vision models to vector field attacks.

Learning Certified Individually Fair Representations

1 code implementation • NeurIPS 2020 • Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev

That is, our method enables the data producer to learn and certify a representation where for a data point all similar individuals are at $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier.

Fairness • Representation Learning
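The certificate split described in that abstract is easy to state numerically: the data producer certifies that all similar individuals' representations lie in an ℓ∞ ball of radius ε, so any consumer classifier that is ε-robust around that point treats them identically. A sketch with made-up representations:

```python
import numpy as np

def producer_certificate(z_center, z_similar, eps):
    """Data-producer side: check that every representation of a similar
    individual lies within l-infinity distance eps of z_center. Combined
    with a consumer classifier that is eps-robust around z_center, this
    implies all similar individuals receive the same prediction."""
    dists = np.max(np.abs(z_similar - z_center), axis=1)  # l-inf distances
    return bool(np.all(dists <= eps))

z = np.array([0.5, 0.5])                          # made-up representation
similar = np.array([[0.55, 0.48], [0.45, 0.52]])  # similar individuals
print(producer_certificate(z, similar, eps=0.1))  # True
```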
