no code implementations • 6 Feb 2024 • Geoffrey Cideron, Sertan Girgin, Mauro Verzetti, Damien Vincent, Matej Kastelic, Zalán Borsos, Brian McWilliams, Victor Ungureanu, Olivier Bachem, Olivier Pietquin, Matthieu Geist, Léonard Hussenot, Neil Zeghidour, Andrea Agostinelli
MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.
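Finetuning with a sequence-level reward can be illustrated with a minimal REINFORCE-style objective; this is a generic policy-gradient sketch under assumed shapes, not MusicRL's actual training code:

```python
import numpy as np

def reinforce_loss(log_probs, reward, baseline=0.0):
    """REINFORCE-style sequence-level objective (negated for minimization).

    log_probs: per-token log-probabilities of the sampled sequence.
    reward:    a single scalar reward for the whole sequence.
    baseline:  optional variance-reduction baseline.
    """
    advantage = reward - baseline
    # The gradient of this loss w.r.t. model parameters is the usual
    # policy-gradient estimate: -(R - b) * grad(sum(log pi(token))).
    return -advantage * np.sum(log_probs)

# Three sampled tokens with probabilities 1/2, 1/4, 1/8 and reward 1.
loss = reinforce_loss(np.log([0.5, 0.25, 0.125]), reward=1.0)
```

A real system would backpropagate this loss through the autoregressive model rather than compute it on fixed probabilities.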
1 code implementation • 15 Sep 2022 • Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, Ahmed El-Kishky
Most existing PLMs are not tailored to the noisy user-generated text on social media, and the pre-training does not factor in the valuable social engagement logs available in a social network.
no code implementations • 10 Jun 2022 • Ian Gemp, Charlie Chen, Brian McWilliams
In this work, we develop a game-theoretic formulation of the top-$k$ SGEP whose Nash equilibrium is the set of generalized eigenvectors.
no code implementations • 13 Jan 2022 • Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic
Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures.
Ranked #14 on Semantic Segmentation on PASCAL VOC 2012 val
Representation Learning, Self-Supervised Image Classification, +3
1 code implementation • ICLR 2022 • Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel
We build on the recently proposed EigenGame that views eigendecomposition as a competitive game.
no code implementations • NeurIPS Workshop ICBINB 2020 • Jovana Mitrovic, Brian McWilliams, Melanie Rey
Usually the other datapoints in the batch serve as the negatives for the given datapoint.
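The in-batch-negatives setup corresponds to an InfoNCE-style contrastive loss, where each anchor's own positive sits on the diagonal of a batch similarity matrix and every other row acts as a negative. A minimal NumPy sketch (illustrative shapes, not the paper's code):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss in which, for each anchor, the positives of all
    *other* rows in the batch serve as its negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The diagonal holds each anchor's matching positive.
    return -np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = info_nce(x, x)                    # perfect positives: low loss
loss_random = info_nce(x, rng.normal(size=(8, 16)))
```

With well-aligned positives the loss is near zero; with random "positives" it hovers around log(batch size).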
2 code implementations • 15 Oct 2020 • Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, Charles Blundell
Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data.
Ranked #77 on Self-Supervised Image Classification on ImageNet
2 code implementations • ICLR 2021 • Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel
We present a novel view on principal component analysis (PCA) as a competitive game in which each approximate eigenvector is controlled by a player whose goal is to maximize their own utility function.
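The game can be sketched with a sequential-penalty update in the spirit of EigenGame: player i ascends a utility that rewards captured variance and penalizes alignment with higher-ranked players, while staying on the unit sphere. A simplified illustration, not the paper's implementation:

```python
import numpy as np

def eigengame(M, k, steps=2000, lr=0.1, seed=0):
    """Player i ascends u_i = <v_i, M v_i> - sum_{j<i} <v_i, M v_j>^2 / <v_j, M v_j>
    via Riemannian gradient ascent on the unit sphere."""
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(M.shape[0], k))
    V /= np.linalg.norm(V, axis=0)
    for _ in range(steps):
        MV = M @ V
        for i in range(k):
            grad = MV[:, i].copy()
            for j in range(i):  # penalties against higher-ranked players
                grad -= (V[:, i] @ MV[:, j]) / (V[:, j] @ MV[:, j]) * MV[:, j]
            grad -= (grad @ V[:, i]) * V[:, i]   # project onto tangent space
            V[:, i] += lr * grad
            V[:, i] /= np.linalg.norm(V[:, i])   # retract to the sphere
            MV[:, i] = M @ V[:, i]
    return V

# Recover the top-2 eigenvectors of a small diagonal matrix.
A = np.diag([3.0, 2.0, 1.0])
V = eigengame(A, k=2)
```

At the Nash equilibrium the players' strategies are the top-k eigenvectors, up to sign.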
no code implementations • 6 Feb 2020 • Kevin R. McKee, Ian Gemp, Brian McWilliams, Edgar A. Duéñez-Guzmán, Edward Hughes, Joel Z. Leibo
Recent research on reinforcement learning in pure-conflict and pure-common interest games has emphasized the importance of population heterogeneity.
no code implementations • 15 Jan 2019 • Abhimanyu Sahai, Romann Weber, Brian McWilliams
In this paper we study deep learning-based music source separation, and explore using an alternative loss to the standard spectrogram pixel-level L2 loss for model training.
2 code implementations • 11 Aug 2018 • Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, Jan Novák
We propose to use deep neural networks for generating samples in Monte Carlo integration.
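The network in this line of work learns the sampling distribution; the estimator it plugs into is plain importance sampling. A minimal sketch with a fixed, hand-picked proposal standing in for the learned one:

```python
import numpy as np

def importance_sample(f, q_sample, q_pdf, n, seed=0):
    """Monte Carlo estimate of the integral of f using proposal q:
    E_q[f(X)/q(X)].  A learned sampler (as in neural importance
    sampling) would replace q_sample / q_pdf."""
    rng = np.random.default_rng(seed)
    x = q_sample(rng, n)
    return np.mean(f(x) / q_pdf(x))

# Integrate f(x) = 2x on [0, 1] (true value 1).
f = lambda x: 2.0 * x
# Proposal q(x) = 2x, i.e. proportional to f: the zero-variance case.
q_sample = lambda rng, n: np.sqrt(rng.uniform(size=n))  # inverse-CDF sampling
q_pdf = lambda x: 2.0 * x
est = importance_sample(f, q_sample, q_pdf, n=1000)
```

When q matches f up to normalization, every sample returns exactly the integral; driving q toward that ideal is precisely what the generating network is trained to do.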
6 code implementations • 9 Apr 2018 • Yifan Wang, Federico Perazzi, Brian McWilliams, Alexander Sorkine-Hornung, Olga Sorkine-Hornung, Christopher Schroers
Recent deep learning approaches to single image super-resolution have achieved impressive results in terms of traditional error measures and perceptual quality.
Ranked #14 on Image Super-Resolution on BSD100 - 4x upscaling
no code implementations • CVPR 2018 • Simone Meyer, Abdelaziz Djelouah, Brian McWilliams, Alexander Sorkine-Hornung, Markus Gross, Christopher Schroers
We show that this is superior to the hand-crafted heuristics previously used in phase-based methods and also compares favorably to recent deep learning based approaches for video frame interpolation on challenging datasets.
no code implementations • 15 Sep 2017 • Simon Kallweit, Thomas Müller, Brian McWilliams, Markus Gross, Jan Novák
To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source.
no code implementations • ACM Transactions on Graphics 2017 • Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony Derose, Fabrice Rousselle
In a second approach, we introduce a novel, kernel-prediction network which uses the CNN to estimate the local weighting kernels used to compute each denoised pixel from its neighbors.
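Applying predicted kernels amounts to a per-pixel weighted sum over a local neighbourhood. A minimal sketch of the reconstruction step, with the kernels taken as given rather than predicted by a CNN:

```python
import numpy as np

def apply_kernels(noisy, kernels):
    """Kernel-prediction reconstruction: each output pixel is a weighted
    sum of its k x k neighbourhood, with per-pixel weights.

    noisy:   (H, W) image.
    kernels: (H, W, k, k) per-pixel weights, assumed normalized to sum to 1.
    """
    H, W, k, _ = kernels.shape
    r = k // 2
    padded = np.pad(noisy, r, mode="edge")
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

# With uniform 3x3 kernels this reduces to a box filter.
img = np.arange(16.0).reshape(4, 4)
uniform = np.full((4, 4, 3, 3), 1.0 / 9.0)
denoised = apply_kernels(img, uniform)
```

Predicting normalized weights rather than pixel values constrains the output to convex combinations of noisy neighbours, which is what makes this formulation stable.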
no code implementations • 1 Mar 2017 • Christina Heinze-Deml, Brian McWilliams, Nicolai Meinshausen
Privacy is crucial in many applications of machine learning.
1 code implementation • ICML 2017 • David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, Brian McWilliams
A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients.
no code implementations • NeurIPS 2016 • Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, Joachim M. Buhmann, Nicolai Meinshausen
We show that the regret of Ada-LR is close to that of full-matrix AdaGrad, whose dependence on the dimension can be exponentially smaller than that of the diagonal variant.
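For contrast, the diagonal variant is straightforward: each coordinate's step size is scaled by the root of its accumulated squared gradients, whereas full-matrix AdaGrad (which Ada-LR approximates) accumulates outer products g gᵀ. A sketch of the diagonal update:

```python
import numpy as np

def adagrad_diag(grad_fn, x0, lr=1.0, steps=100, eps=1e-8):
    """Diagonal AdaGrad: per-coordinate step lr / sqrt(sum of squared
    past gradients in that coordinate)."""
    x = x0.astype(float).copy()
    G = np.zeros_like(x)              # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(x)
        G += g * g
        x -= lr * g / (np.sqrt(G) + eps)
    return x

# Minimize a badly scaled quadratic 0.5 * (100 x0^2 + x1^2); the adaptive
# per-coordinate scaling makes both coordinates shrink at the same rate.
grad = lambda x: np.array([100.0 * x[0], x[1]])
x_final = adagrad_diag(grad, np.array([1.0, 1.0]), lr=0.5, steps=500)
```

The full-matrix version adapts to correlated directions as well, not just per-coordinate scales, which is where the exponential gap in the regret bound can arise.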
no code implementations • ICML 2017 • David Balduzzi, Brian McWilliams, Tony Butler-Yeoman
Modern convolutional networks, incorporating rectifiers and max-pooling, are neither smooth nor convex; standard guarantees therefore do not apply.
1 code implementation • CVPR 2016 • Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc van Gool, Markus Gross, Alexander Sorkine-Hornung
The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high-quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes.
no code implementations • 29 Jul 2015 • Barbora Micenková, Brian McWilliams, Ira Assent
We demonstrate the strong performance of BORE compared to a variety of competing methods on both the non-budgeted and the budgeted outlier detection problems, across 12 real-world datasets.
no code implementations • NeurIPS 2015 • Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, Brian McWilliams
As a side-product we provide a unified convergence analysis for a family of variance reduction algorithms, which we call memorization algorithms.
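SVRG is one member of this family: it memorizes a full gradient at a snapshot point and uses it to correct each stochastic gradient, shrinking its variance. A minimal sketch on a least-squares objective (illustrative, not the paper's code):

```python
import numpy as np

def svrg(A, b, lr=0.05, epochs=50, seed=0):
    """SVRG on f(x) = 0.5/n * ||Ax - b||^2: the memorized full gradient at
    the snapshot corrects each per-sample gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = A.T @ (A @ snapshot - b) / n   # memorized once per epoch
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])
            gi_snap = A[i] * (A[i] @ snapshot - b[i])
            # Variance-reduced gradient: unbiased, and its variance
            # vanishes as x approaches the snapshot.
            x -= lr * (gi - gi_snap + full_grad)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_hat = svrg(A, b)
```

Algorithms like SAGA memorize per-example gradients instead of a single snapshot, but fit the same "memorization" template.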
no code implementations • 8 Jun 2015 • Christina Heinze, Brian McWilliams, Nicolai Meinshausen
We present DUAL-LOCO, a communication-efficient algorithm for distributed statistical estimation.
no code implementations • 28 Mar 2015 • Aurelien Lucchi, Brian McWilliams, Thomas Hofmann
Quasi-Newton methods are widely used in practice for convex loss minimization problems.
no code implementations • 13 Jun 2014 • Christina Heinze, Brian McWilliams, Nicolai Meinshausen, Gabriel Krummenacher
We propose LOCO, an algorithm for large-scale ridge regression which distributes the features across workers on a cluster.
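The target of such distributed schemes is the standard closed-form ridge estimator; LOCO approximates it while keeping disjoint feature blocks on separate workers and exchanging only small random projections of the other blocks. A sketch of the centralized estimator being approximated (not LOCO's distributed protocol itself):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Noiseless synthetic problem: with tiny regularization, the estimator
# recovers the true coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = np.arange(1.0, 6.0)
y = X @ w
w_hat = ridge(X, y, lam=1e-6)
```

The communication saving comes from never shipping raw feature columns between workers, only their low-dimensional sketches.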
no code implementations • NeurIPS 2014 • Brian McWilliams, Gabriel Krummenacher, Mario Lucic, Joachim M. Buhmann
Subsampling methods have been recently proposed to speed up least squares estimation in large scale settings.
no code implementations • NeurIPS 2013 • Brian McWilliams, David Balduzzi, Joachim M. Buhmann
Random views are justified by recent theoretical and empirical work showing that regression with random features closely approximates kernel regression, implying that random views can be expected to contain accurate estimators.
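The random-features approximation referenced here can be sketched with random Fourier features for the RBF kernel: inner products of the random feature maps concentrate around the exact kernel values, so regression on the features approximates kernel regression.

```python
import numpy as np

def random_fourier_features(X, n_features, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, z) = exp(-gamma ||x - z||^2): E[phi(x) @ phi(z)] ~= k(x, z)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, 2*gamma I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
phi = random_fourier_features(X, n_features=20000)
K_approx = phi @ phi.T
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
```

The approximation error shrinks at rate O(1/sqrt(n_features)), which is what makes random views reliable building blocks for the ensemble.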