Search Results for author: Josip Djolonga

Found 28 papers, 10 papers with code

End-to-End Spatio-Temporal Action Localisation with Video Transformers

no code implementations24 Apr 2023 Alexey Gritsenko, Xuehan Xiong, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lučić, Cordelia Schmid, Anurag Arnab

The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks.

Ranked #1 on Action Recognition on AVA v2.1 (using extra training data)

Action Detection Action Recognition +1

You Only Train Once: Loss-Conditional Training of Deep Networks

no code implementations ICLR 2020 Alexey Dosovitskiy, Josip Djolonga

At test time a model trained this way can be conditioned to generate outputs corresponding to any loss from the training distribution of losses.

Image Compression Style Transfer
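The loss-conditional mechanism can be sketched on a toy problem: sample a loss weight per batch, feed it to the model as an extra input, and train on the correspondingly weighted loss. This is a minimal illustration, not the paper's implementation; the two regression targets, the linear `features` model, and all constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: linear in [1, x, lam, x*lam], where lam is the
# sampled loss weight fed to the model as an extra input.
def features(x, lam):
    return np.stack([np.ones_like(x), x, lam, x * lam], axis=1)

w = np.zeros(4)
for step in range(5000):
    x = rng.uniform(0, 1, 64)
    lam = rng.uniform(0, 1, 64)          # sample a loss from the family
    pred = features(x, lam) @ w
    y1, y2 = 2 * x, -x                   # two conflicting objectives
    # Gradient of the lam-weighted loss lam*(pred-y1)^2 + (1-lam)*(pred-y2)^2
    resid = 2 * (lam * (pred - y1) + (1 - lam) * (pred - y2))
    w -= 0.1 * features(x, lam).T @ resid / len(x)

# At test time the single trained model is conditioned on any lam:
print((features(np.array([0.5]), np.array([1.0])) @ w)[0])  # ≈ y1(0.5) = 1.0
print((features(np.array([0.5]), np.array([0.0])) @ w)[0])  # ≈ y2(0.5) = -0.5
```

One network trained once thus stands in for a whole family of models, one per loss weight.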

Fast Differentiable Sorting and Ranking

2 code implementations ICML 2020 Mathieu Blondel, Olivier Teboul, Quentin Berthet, Josip Djolonga

While numerous works have proposed differentiable proxies to sorting and ranking, they do not achieve the $O(n \log n)$ time complexity one would expect from sorting and ranking operations.
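As a point of contrast, the classic differentiable proxy replaces every hard pairwise comparison with a sigmoid, which costs O(n^2); the paper's projection-based operators reach O(n log n) and are not reproduced here. A minimal sketch of the pairwise relaxation (temperature value chosen arbitrarily):

```python
import numpy as np

def soft_rank(x, tau=0.01):
    """O(n^2) differentiable rank proxy: each pairwise comparison
    1[x_i > x_j] is relaxed to sigmoid((x_i - x_j) / tau)."""
    diff = (x[:, None] - x[None, :]) / tau
    sig = 1.0 / (1.0 + np.exp(-diff))
    # The self-comparison contributes 0.5, so adding 0.5 yields 1-indexed ranks.
    return sig.sum(axis=1) + 0.5

print(soft_rank(np.array([0.3, 0.1, 0.7])))  # ≈ [2. 1. 3.]
```

As tau goes to 0 the output approaches the hard ranks, at the price of vanishing gradients; the quadratic cost is what the paper's fast operators remove.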

Self-Supervised Learning of Video-Induced Visual Invariances

no code implementations CVPR 2020 Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Xiaohua Zhai, Neil Houlsby, Sylvain Gelly, Mario Lucic

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI).

Ranked #15 on Image Classification on VTAB-1k (using extra training data)

Image Classification Self-Supervised Learning +1

On Mutual Information Maximization for Representation Learning

2 code implementations ICLR 2020 Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

Inductive Bias Representation Learning +1
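InfoNCE is one of the MI lower-bound estimators in this family; a minimal NumPy version of the batch objective is below. The embedding normalization and temperature value are conventional choices for illustration, not prescribed by the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE objective on a batch of paired views: z1[i] and z2[i]
    are embeddings of two views of sample i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau            # pairwise similarities
    # Cross-entropy with the matching view as the positive class
    # (small-scale sketch; use a stable log-sum-exp in practice).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When each view is most similar to its own pair, the loss is near zero; the paper's point is that success of such objectives is not explained by the MI estimate alone.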

Practical and Consistent Estimation of f-Divergences

1 code implementation NeurIPS 2019 Paul K. Rubenstein, Olivier Bousquet, Josip Djolonga, Carlos Riquelme, Ilya Tolstikhin

The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning.

Mutual Information Estimation +1
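When the density ratio is known in closed form, a Monte Carlo plug-in estimate is straightforward; the paper's setting is the harder one where only samples are available and the ratio itself must be estimated. A sketch of the easy case for KL, the f-divergence with f(t) = t log t (the Gaussian parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# KL(P || Q) for P = N(mu_p, 1), Q = N(mu_q, 1), estimated by Monte Carlo
# as E_P[log p(x)/q(x)], assuming the log density ratio is available.
mu_p, mu_q = 0.0, 1.0
x = rng.normal(mu_p, 1.0, 200_000)                 # samples from P
log_ratio = -0.5 * (x - mu_p) ** 2 + 0.5 * (x - mu_q) ** 2
kl_mc = log_ratio.mean()                           # ≈ closed form below
kl_true = 0.5 * (mu_p - mu_q) ** 2                 # KL between unit-variance Gaussians
```

Replacing the exact `log_ratio` with an estimated one is where consistency becomes the delicate question the paper addresses.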

Precision-Recall Curves Using Information Divergence Frontiers

no code implementations26 May 2019 Josip Djolonga, Mario Lucic, Marco Cuturi, Olivier Bachem, Olivier Bousquet, Sylvain Gelly

Despite the tremendous progress in the estimation of generative models, the development of tools for diagnosing their failures and assessing their performance has advanced at a much slower pace.

Image Generation Information Retrieval +1

Provable Variational Inference for Constrained Log-Submodular Models

no code implementations NeurIPS 2018 Josip Djolonga, Stefanie Jegelka, Andreas Krause

Submodular maximization problems appear in several areas of machine learning and data science, as many useful modelling concepts such as diversity and coverage satisfy this natural diminishing returns property.

Variational Inference
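A coverage function makes the diminishing-returns property concrete (the sets below are made up for illustration): the marginal value of adding an element can only shrink as the base set grows.

```python
# Coverage f(S) = |union of elements covered by S| is submodular.
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def f(S):
    return len(set().union(*(cover[s] for s in S))) if S else 0

# Marginal gain of "b" shrinks as the base set grows: 1 vs 0.
gain_small = f({"a", "b"}) - f({"a"})            # b adds element 4
gain_large = f({"a", "c", "b"}) - f({"a", "c"})  # 4 already covered by c
```

Diversity and coverage objectives in machine learning behave exactly like this, which is what makes the submodular framework apply.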

Differentiable Learning of Submodular Models

no code implementations NeurIPS 2017 Josip Djolonga, Andreas Krause

In this paper we focus on the problem of submodular minimization, for which we show that differentiable optimization layers are indeed possible.

Variational Inference

Learning Implicit Generative Models Using Differentiable Graph Tests

no code implementations4 Sep 2017 Josip Djolonga, Andreas Krause

Recently, there has been a growing interest in the problem of learning rich implicit models: those from which we can sample, but cannot evaluate their density.

Stochastic Optimization

Cooperative Graphical Models

no code implementations NeurIPS 2016 Josip Djolonga, Stefanie Jegelka, Sebastian Tschiatschek, Andreas Krause

We study a rich family of distributions that capture variable interactions significantly more expressive than those representable with low-treewidth or pairwise graphical models, or log-supermodular models.

Variational Inference

Variational Inference in Mixed Probabilistic Submodular Models

no code implementations NeurIPS 2016 Josip Djolonga, Sebastian Tschiatschek, Andreas Krause

We consider the problem of variational inference in probabilistic models with both log-submodular and log-supermodular higher-order potentials.

Variational Inference

High-Dimensional Gaussian Process Bandits

no code implementations NeurIPS 2013 Josip Djolonga, Andreas Krause, Volkan Cevher

Many applications in machine learning require optimizing unknown functions defined over a high-dimensional space from noisy samples that are expensive to obtain.

Bayesian Optimization
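The standard GP bandit loop (GP-UCB) that this line of work builds on can be sketched in one dimension; the paper's actual contribution, handling high-dimensional domains by exploiting low effective dimensionality, is beyond this toy. The objective, kernel lengthscale, and constants below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # "unknown" objective, only queried
    return np.sin(3 * x)

X = np.linspace(0, 2 * np.pi, 200)          # candidate arms

def k(a, b):                                # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.5)

xs, ys = [], []
mu = np.zeros_like(X)
for t in range(30):
    if xs:
        Xo, yo = np.array(xs), np.array(ys)
        K = k(Xo, Xo) + 1e-2 * np.eye(len(xs))       # observation noise
        Ks = k(X, Xo)
        mu = Ks @ np.linalg.solve(K, yo)             # posterior mean
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    else:
        var = np.ones_like(X)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))   # optimism bonus
    x_next = X[np.argmax(ucb)]                       # most promising arm
    xs.append(x_next)
    ys.append(f(x_next) + 0.05 * rng.normal())       # expensive, noisy query
best = X[np.argmax(mu)]                              # exploit final posterior
```

Each query trades off exploration (high posterior variance) against exploitation (high posterior mean); in high dimensions the naive version breaks down, which motivates the subspace-learning approach of the paper.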
