Search Results for author: Mario Lucic

Found 48 papers, 23 papers with code

Audiovisual Masked Autoencoders

2 code implementations ICCV 2023 Mariana-Iuliana Georgescu, Eduardo Fonseca, Radu Tudor Ionescu, Mario Lucic, Cordelia Schmid, Anurag Arnab

Can we leverage the audiovisual information already present in video to improve self-supervised representation learning?

Ranked #1 on Audio Classification on EPIC-KITCHENS-100 (using extra training data)

Audio Classification · Representation Learning

RUST: Latent Neural Scene Representations from Unposed Imagery

no code implementations CVPR 2023 Mehdi S. M. Sajjadi, Aravindh Mahendran, Thomas Kipf, Etienne Pot, Daniel Duckworth, Mario Lucic, Klaus Greff

Our main insight is that one can train a Pose Encoder that peeks at the target image and learns a latent pose embedding which is used by the decoder for view synthesis.

Novel View Synthesis
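
A minimal sketch of the "peek" idea, assuming a simple convolutional pose encoder and an MLP decoder (all module names and sizes below are illustrative, not the RUST architecture): the pose encoder sees only a crop of the target image and emits a low-dimensional latent pose that conditions view synthesis, so no explicit camera poses are needed.

```python
# Sketch of a pose encoder that "peeks" at the target view, NOT the
# RUST implementation: the latent pose bottleneck is kept low-dim so
# the encoder cannot leak image content into the decoder.
import torch
import torch.nn as nn

class PoseEncoder(nn.Module):
    def __init__(self, feat_dim=128, pose_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, pose_dim)  # low-dim bottleneck

    def forward(self, target_peek):  # (B, 3, H, W) crop of the target view
        h = self.conv(target_peek).flatten(1)
        return self.head(h)          # latent pose embedding

class Decoder(nn.Module):
    def __init__(self, scene_dim=256, pose_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(scene_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, scene_repr, latent_pose):
        return self.mlp(torch.cat([scene_repr, latent_pose], dim=-1))

scene = torch.randn(2, 256)       # stand-in for the latent scene repr.
peek = torch.randn(2, 3, 64, 64)  # stand-in for a target-image crop
rgb = Decoder()(scene, PoseEncoder()(peek))
print(rgb.shape)  # torch.Size([2, 3])
```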

VCT: A Video Compression Transformer

1 code implementation 15 Jun 2022 Fabian Mentzer, George Toderici, David Minnen, Sung-Jin Hwang, Sergi Caelles, Mario Lucic, Eirikur Agustsson

The resulting video compression transformer outperforms previous methods on standard video compression data sets.

Motion Prediction · Video Compression

PolyViT: Co-training Vision Transformers on Images, Videos and Audio

no code implementations 25 Nov 2021 Valerii Likhosherstov, Anurag Arnab, Krzysztof Choromanski, Mario Lucic, Yi Tay, Adrian Weller, Mostafa Dehghani

Can we train a single transformer model capable of processing multiple modalities and datasets, whilst sharing almost all of its learnable parameters?

Audio Classification
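
The co-training pattern behind this question can be sketched as a shared transformer trunk with one lightweight head per task, alternating task batches so almost all parameters are shared. The class counts, round-robin schedule, and module names below are illustrative assumptions, not the PolyViT code.

```python
# Hedged sketch of multi-task co-training with a shared trunk; the
# paper studies better task-sampling schedules than this round-robin.
import torch
import torch.nn as nn

class CoTrainedModel(nn.Module):
    def __init__(self, dim=256, tasks={"image": 1000, "video": 400, "audio": 527}):
        super().__init__()
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )  # shared across all tasks
        self.heads = nn.ModuleDict({t: nn.Linear(dim, c) for t, c in tasks.items()})

    def forward(self, tokens, task):
        h = self.trunk(tokens).mean(dim=1)  # pooled token representation
        return self.heads[task](h)

model = CoTrainedModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for step in range(3):
    for task, n_cls in [("image", 1000), ("video", 400), ("audio", 527)]:
        tokens = torch.randn(4, 16, 256)      # stand-in tokenized batch
        labels = torch.randint(n_cls, (4,))
        loss = loss_fn(model(tokens, task), labels)
        opt.zero_grad(); loss.backward(); opt.step()
```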

Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations

1 code implementation CVPR 2022 Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea Tagliasacchi

In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area, infers a "set-latent scene representation", and synthesises novel views, all in a single feed-forward pass.

Novel View Synthesis · Semantic Segmentation
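
A hedged sketch of the set-latent idea: an encoder maps input-view patches to an unstructured set of latent tokens, and a decoder cross-attends into that set with a query ray to predict a color, all in one feed-forward pass with no per-scene optimization. Every module name and size below is an assumption for illustration, not the SRT implementation.

```python
# Toy set-latent scene model: encode patches into a token set, then
# decode a novel view by attending into that set per query ray.
import torch
import torch.nn as nn

class SetLatentSceneModel(nn.Module):
    def __init__(self, dim=192):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 8 * 8, dim)  # 8x8 RGB patches
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
        self.ray_embed = nn.Linear(6, dim)            # origin + direction
        self.cross_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.to_rgb = nn.Linear(dim, 3)

    def forward(self, patches, rays):
        # patches: (B, N, 192) flattened input-view patches
        # rays:    (B, R, 6) query rays for the novel view
        z = self.encoder(self.patch_embed(patches))   # set-latent scene
        q = self.ray_embed(rays)
        h, _ = self.cross_attn(q, z, z)               # attend into the set
        return self.to_rgb(h)                         # (B, R, 3)

model = SetLatentSceneModel()
rgb = model(torch.randn(1, 64, 3 * 8 * 8), torch.randn(1, 128, 6))
print(rgb.shape)  # torch.Size([1, 128, 3])
```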

A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models

1 code implementation NeurIPS 2021 Ibrahim Alabdulmohsin, Mario Lucic

We present a scalable post-processing algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk.

BIG-bench Machine Learning
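
As a deliberately simplified stand-in for post-processing debiasing (not the paper's near-optimal algorithm), one can choose per-group decision thresholds on the trained model's scores so that positive prediction rates match across groups, leaving the trained network itself untouched:

```python
# Toy post-processing for demographic parity: per-group thresholds on
# frozen model scores. Data and the 0.5 reference cutoff are made up.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # trained-model scores
group = rng.integers(0, 2, size=1000)    # sensitive attribute

target_rate = (scores > 0.5).mean()      # overall positive rate to match
thresholds = {}
for g in (0, 1):
    s = np.sort(scores[group == g])
    k = int((1 - target_rate) * len(s))  # index putting ~target_rate above
    thresholds[g] = s[min(k, len(s) - 1)]

preds = scores > np.vectorize(thresholds.get)(group)
for g in (0, 1):
    print(g, preds[group == g].mean())   # per-group rates now ~equal
```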

Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck

no code implementations 1 Jan 2021 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes.

Image Generation · Segmentation
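
The two-stage pipeline can be sketched with toy stand-in networks: sample a semantic layout unconditionally, then translate it to an image. The real model uses a progressive layout GAN and a high-fidelity conditional synthesis network; everything below is illustrative.

```python
# Semantic bottleneck as a pipeline: noise -> layout -> image.
import torch
import torch.nn as nn

N_CLASSES, H, W = 10, 32, 32

layout_gen = nn.Sequential(             # stage 1: noise -> semantic layout
    nn.Linear(64, N_CLASSES * H * W),
)
image_gen = nn.Sequential(              # stage 2: layout -> RGB image
    nn.Conv2d(N_CLASSES, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

z = torch.randn(4, 64)
logits = layout_gen(z).view(4, N_CLASSES, H, W)
layout = torch.softmax(logits, dim=1)   # soft one-hot segmentation map
image = image_gen(layout)               # unconditional scene sample
print(image.shape)  # torch.Size([4, 3, 32, 32])
```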

A Near-Optimal Recipe for Debiasing Trained Machine Learning Models

no code implementations 1 Jan 2021 Ibrahim Alabdulmohsin, Mario Lucic

We present an efficient and scalable algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk.

BIG-bench Machine Learning · Classification +1

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations 27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Which Model to Transfer? Finding the Needle in the Growing Haystack

no code implementations CVPR 2022 Cedric Renggli, André Susano Pinto, Luka Rimanic, Joan Puigcerver, Carlos Riquelme, Ce Zhang, Mario Lucic

Transfer learning has recently been popularized as a data-efficient alternative to training models from scratch, in particular for computer vision tasks, where it provides a remarkably solid baseline.

Transfer Learning

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations 28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

Self-Supervised Learning of Video-Induced Visual Invariances

no code implementations CVPR 2020 Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Xiaohua Zhai, Neil Houlsby, Sylvain Gelly, Mario Lucic

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI).

Ranked #15 on Image Classification on VTAB-1k (using extra training data)

Image Classification · Self-Supervised Learning +1

Semantic Bottleneck Scene Generation

2 code implementations 26 Nov 2019 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts.

Conditional Image Generation · Image-to-Image Translation +2

On Mutual Information Maximization for Representation Learning

2 code implementations ICLR 2020 Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

Inductive Bias · Representation Learning +1
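
One of the MI lower-bound estimators analyzed in this line of work is InfoNCE: two "views" of the data are encoded, and matching pairs must be distinguished from in-batch negatives via a softmax over similarities. A minimal sketch, with the temperature as an arbitrary choice:

```python
# InfoNCE: cross-entropy over a similarity matrix whose diagonal holds
# the positive (matching) pairs.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) similarity matrix
    labels = torch.arange(len(z1))          # positives on the diagonal
    return F.cross_entropy(logits, labels)  # -E[log p(positive)]

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z1, z2))
```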

Precision-Recall Curves Using Information Divergence Frontiers

no code implementations 26 May 2019 Josip Djolonga, Mario Lucic, Marco Cuturi, Olivier Bachem, Olivier Bousquet, Sylvain Gelly

Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace.

Image Generation · Information Retrieval +1

The GAN Landscape: Losses, Architectures, Regularization, and Normalization

no code implementations ICLR 2019 Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.

Recent Advances in Autoencoder-Based Representation Learning

no code implementations 12 Dec 2018 Michael Tschannen, Olivier Bachem, Mario Lucic

Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.

Disentanglement
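
The rate-distortion view can be made concrete as a beta-weighted VAE objective, where "distortion" is reconstruction error and "rate" is the KL term, and sweeping beta traces out the tradeoff. The Gaussian posterior and shapes below are standard assumptions, not taken from the paper.

```python
# Rate-distortion Lagrangian for an autoencoder with a diagonal
# Gaussian posterior: loss = distortion + beta * rate.
import torch

def rate_distortion_loss(x, x_hat, mu, logvar, beta=1.0):
    distortion = ((x - x_hat) ** 2).sum(dim=1)                   # recon error
    rate = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(dim=1)  # KL to N(0,I)
    return (distortion + beta * rate).mean()

x = torch.randn(16, 784)
print(rate_distortion_loss(x, x * 0.9, torch.zeros(16, 8), torch.zeros(16, 8)))
```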

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

8 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Self-Supervised GANs via Auxiliary Rotation Loss

4 code implementations CVPR 2019 Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby

In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and unconditional GANs.

Image Generation · Representation Learning
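
A hedged sketch of the auxiliary rotation loss: the discriminator gains a 4-way head that must identify which of {0°, 90°, 180°, 270°} an image was rotated by. The architecture below is illustrative, not the paper's.

```python
# Self-supervised rotation head on a GAN discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(32, 1)  # usual GAN head
        self.rotation = nn.Linear(32, 4)   # auxiliary self-supervised head

    def forward(self, x):
        h = self.features(x)
        return self.real_fake(h), self.rotation(h)

def rotation_loss(disc, images):
    # build a batch containing all four rotations with their labels
    rots = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rots)
    y = torch.arange(4).repeat_interleave(len(images))
    _, rot_logits = disc(x)
    return F.cross_entropy(rot_logits, y)

disc = Discriminator()
print(rotation_loss(disc, torch.randn(8, 3, 32, 32)))
```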

A Large-Scale Study on Regularization and Normalization in GANs

5 code implementations ICLR 2019 Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.

Assessing Generative Models via Precision and Recall

4 code implementations NeurIPS 2018 Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly

Recent advances in generative modeling have led to an increased interest in the study of statistical divergences as means of model comparison.
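
Given histograms of real and generated samples over a shared binning (the paper clusters feature embeddings with k-means to obtain the bins), the precision-recall curve reduces to a one-parameter family: for each slope lambda, alpha is a precision-like and beta a recall-like quantity, with beta(lambda) = alpha(lambda) / lambda. A compact sketch, with the grid resolution as an arbitrary choice:

```python
# PRD curve from two histograms over the same bins.
import numpy as np

def prd_curve(p_real, q_model, n_angles=201):
    # p_real, q_model: histograms over shared bins, each summing to 1
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, n_angles)
    lambdas = np.tan(angles)
    alpha = np.array([np.minimum(l * p_real, q_model).sum() for l in lambdas])
    beta = alpha / lambdas
    return alpha, beta  # precision / recall pairs tracing the curve

p = np.array([0.5, 0.3, 0.2, 0.0])  # real-data histogram over 4 bins
q = np.array([0.4, 0.1, 0.0, 0.5])  # model histogram: misses bin 2,
alpha, beta = prd_curve(p, q)       # wastes mass on bin 3
print(alpha.max(), beta.max())      # ~0.5 precision, ~0.8 recall
```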

Stochastic Submodular Maximization: The Case of Coverage Functions

no code implementations NeurIPS 2017 Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause

By exploiting that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions.

Clustering · Stochastic Optimization
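
A toy instance of the recipe in the abstract, with all names and sizes made up for illustration: lift a weighted coverage function to its multilinear extension, run projected gradient ascent under a cardinality constraint, then round. The paper uses stochastic gradients and principled rounding; here the gradient is exact and rounding is a naive top-k.

```python
# Continuous maximization of a weighted coverage function.
import numpy as np

rng = np.random.default_rng(1)
n_sets, n_elems, k = 12, 40, 3
A = rng.random((n_sets, n_elems)) < 0.15  # A[i, u]: set i covers element u
w = rng.random(n_elems)                   # element weights

def grad_F(x):
    # d/dx_i of F(x) = sum_u w_u * (1 - prod_{j: u in S_j} (1 - x_j))
    one_minus = np.where(A, (1.0 - x)[:, None], 1.0)
    P = one_minus.prod(axis=0)                    # product over all sets
    P_excl = P / np.clip(one_minus, 1e-12, None)  # leave-one-out products
    return (A * w * P_excl).sum(axis=1)

def project(x):
    # Euclidean projection onto {0 <= x <= 1, sum(x) <= k} via bisection
    if np.clip(x, 0, 1).sum() <= k:
        return np.clip(x, 0, 1)
    lo, hi = 0.0, x.max()
    for _ in range(50):
        tau = (lo + hi) / 2
        if np.clip(x - tau, 0, 1).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(x - hi, 0, 1)

x = np.full(n_sets, k / n_sets)
for _ in range(200):
    x = project(x + 0.1 * grad_F(x))

S = np.argsort(-x)[:k]                    # naive rounding: top-k coordinates
print("covered weight:", w[A[S].any(axis=0)].sum())
```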

Uniform Deviation Bounds for k-Means Clustering

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are unbounded.

Clustering

Distributed and Provably Good Seedings for k-Means in Constant Rounds

no code implementations ICML 2017 Olivier Bachem, Mario Lucic, Andreas Krause

The k-Means++ algorithm is the state-of-the-art algorithm for solving k-Means clustering problems, as the computed clusterings are O(log k)-competitive in expectation.

Clustering
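
For reference, the classical D²-sampling step of k-Means++ that this paper distributes into constant rounds: each new center is drawn with probability proportional to its squared distance to the nearest center chosen so far.

```python
# k-means++ seeding via D^2 sampling.
import numpy as np

def kmeanspp_seeding(X, k, rng=np.random.default_rng(0)):
    centers = [X[rng.integers(len(X))]]  # first center: uniform at random
    for _ in range(k - 1):
        d2 = np.min(
            ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()            # the D^2 distribution
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

X = np.random.default_rng(1).normal(size=(500, 2))
print(kmeanspp_seeding(X, 5))
```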

Training Gaussian Mixture Models at Scale via Coresets

no code implementations 23 Mar 2017 Mario Lucic, Matthew Faulkner, Andreas Krause, Dan Feldman

In this work we show how to construct coresets for mixtures of Gaussians.

Practical Coreset Constructions for Machine Learning

2 code implementations 19 Mar 2017 Olivier Bachem, Mario Lucic, Andreas Krause

We investigate coresets - succinct, small summaries of large data sets - so that solutions found on the summary are provably competitive with solutions found on the full data set.

BIG-bench Machine Learning · Clustering +1

Scalable k-Means Clustering via Lightweight Coresets

1 code implementation 27 Feb 2017 Olivier Bachem, Mario Lucic, Andreas Krause

As such, coresets have been successfully used to scale up clustering models to massive data sets.

Clustering · Data Summarization
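
A sketch of the lightweight-coreset construction as I understand it from this line of work (treat the exact mixture weights as assumptions): sample points from a distribution mixing a uniform term with a term proportional to squared distance from the data mean, then reweight by inverse sampling probability so coreset costs are unbiased.

```python
# Lightweight coreset via importance sampling around the data mean.
import numpy as np

def lightweight_coreset(X, m, rng=np.random.default_rng(0)):
    mu = X.mean(axis=0)
    d2 = ((X - mu) ** 2).sum(axis=1)
    q = 0.5 / len(X) + 0.5 * d2 / d2.sum()  # sampling distribution
    idx = rng.choice(len(X), size=m, p=q)
    weights = 1.0 / (m * q[idx])            # importance weights
    return X[idx], weights

X = np.random.default_rng(1).normal(size=(10000, 3))
C, w = lightweight_coreset(X, 200)
print(C.shape, w.sum())  # weight sum ~10000: total mass preserved
```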

Uniform Deviation Bounds for Unbounded Loss Functions like k-Means

no code implementations 27 Feb 2017 Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause

In this paper, we provide a novel framework to obtain uniform deviation bounds for loss functions which are *unbounded*.

Clustering

Fast and Provably Good Seedings for k-Means

1 code implementation NeurIPS 2016 Olivier Bachem, Mario Lucic, Hamed Hassani, Andreas Krause

Seeding - the task of finding initial cluster centers - is critical in obtaining high-quality clusterings for k-Means.

Clustering

Horizontally Scalable Submodular Maximization

no code implementations 31 May 2016 Mario Lucic, Olivier Bachem, Morteza Zadimoghaddam, Andreas Krause

A variety of large-scale machine learning problems can be cast as instances of constrained submodular maximization.

Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning

no code implementations 2 May 2016 Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause

Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model.

Clustering

Strong Coresets for Hard and Soft Bregman Clustering with Applications to Exponential Family Mixtures

no code implementations 21 Aug 2015 Mario Lucic, Olivier Bachem, Andreas Krause

We propose a single, practical algorithm to construct strong coresets for a large class of hard and soft clustering problems based on Bregman divergences.

Clustering
