Search Results for author: Christos Louizos

Found 26 papers, 11 papers with code

InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning

no code implementations26 Feb 2024 Babak Ehteshami Bejnordi, Gaurav Kumar, Amelie Royer, Christos Louizos, Tijmen Blankevoort, Mohsen Ghafoorian

In this work, we propose InterroGate, a novel multi-task learning (MTL) architecture designed to mitigate task interference while optimizing the computational efficiency of inference.

Computational Efficiency, Multi-Task Learning

Protect Your Score: Contact Tracing With Differential Privacy Guarantees

no code implementations18 Dec 2023 Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling

The pandemic in 2020 and 2021 had enormous economic and societal consequences, and studies show that contact tracing algorithms can be key in the early containment of the virus.

Quantization Robust Federated Learning for Efficient Inference on Heterogeneous Devices

no code implementations22 Jun 2022 Kartik Gupta, Marios Fournarakis, Matthias Reisser, Christos Louizos, Markus Nagel

We perform extensive experiments on standard FL benchmarks to evaluate our proposed FedAvg variants for quantization robustness and provide a convergence analysis for our Quantization-Aware variants in FL.

BIG-bench Machine Learning, Federated Learning, +1

An Expectation-Maximization Perspective on Federated Learning

no code implementations19 Nov 2021 Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling

Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.

Federated Learning

DP-REC: Private & Communication-Efficient Federated Learning

no code implementations9 Nov 2021 Aleksei Triastcyn, Matthias Reisser, Christos Louizos

Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them is still an open problem.

Federated Learning

Federated Mixture of Experts

no code implementations14 Jul 2021 Matthias Reisser, Christos Louizos, Efstratios Gavves, Max Welling

Federated learning (FL) has emerged as the predominant approach for collaborative training of neural network models across multiple users, without the need to gather the data at a central location.

Federated Learning

Federated Learning of User Verification Models Without Sharing Embeddings

no code implementations18 Apr 2021 Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling

We consider the problem of training User Verification (UV) models in a federated setting, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.

Federated Learning

Secure Federated Learning of User Verification Models

no code implementations1 Jan 2021 Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling

We consider the problem of training User Verification (UV) models in a federated setting, where conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and that user embeddings cannot be shared with the server or other users.

Federated Learning

Federated Averaging as Expectation Maximization

no code implementations1 Jan 2021 Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling

Federated averaging (FedAvg), despite its simplicity, has been the main approach in training neural networks in the federated learning setting.

Federated Learning
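The FedAvg server step mentioned above is, in its standard formulation, a dataset-size-weighted average of the client models. A minimal sketch of that averaging step only (function name and toy numbers are illustrative; this does not reproduce the paper's EM interpretation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg server round: average client parameter vectors,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (num_clients, num_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Two toy clients: one with 1 sample, one with 3.
w = fedavg([np.array([1.0, 0.0]), np.array([3.0, 2.0])], [1, 3])
print(w)  # prints [2.5 1.5]
```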

Federated Learning of User Authentication Models

no code implementations9 Jul 2020 Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga, Max Welling

In this paper, we propose Federated User Authentication (FedUA), a framework for privacy-preserving training of UA models.

Federated Learning, Privacy Preserving, +1

Bayesian Bits: Unifying Quantization and Pruning

1 code implementation NeurIPS 2020 Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling

We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.

Quantization

Up or Down? Adaptive Rounding for Post-Training Quantization

no code implementations ICML 2020 Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, Tijmen Blankevoort

In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss.

Quantization
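The up-or-down decision can be illustrated with a toy, brute-force stand-in: per weight, pick floor or ceil on the quantization grid so as to minimize the layer's output error on calibration data. AdaRound itself learns a continuous rounding variable by gradient descent; the data, step size, and greedy search here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))          # toy calibration activations
w = rng.normal(size=8)                 # toy layer weights
s = 0.25                               # quantization step size

def output_mse(w_q):
    """Error of the quantized layer's output on the calibration data."""
    return np.mean((X @ w - X @ w_q) ** 2)

w_floor = np.floor(w / s) * s          # round every weight down
nearest = np.round(w / s) * s          # standard round-to-nearest baseline

# Greedy per-weight up/down choice: keep whichever of {floor, floor + s}
# lowers the calibration output error, starting from round-to-nearest.
best = nearest.copy()
for i in range(len(w)):
    for cand in (w_floor[i], w_floor[i] + s):
        trial = best.copy()
        trial[i] = cand
        if output_mse(trial) <= output_mse(best):
            best = trial

print(output_mse(nearest), output_mse(best))
```

By construction the greedy choice can only match or beat round-to-nearest on the calibration objective, which is the core observation behind adapting the rounding direction to the data.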

Gradient $\ell_1$ Regularization for Quantization Robustness

no code implementations ICLR 2020 Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling

We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization.

Quantization
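The motivation for penalizing the gradient's ℓ1 norm can be checked numerically: to first order, a weight perturbation bounded in magnitude by eps (a stand-in for quantization noise) changes the loss by at most about eps times the ℓ1 norm of the loss gradient. A toy check of that bound on a linear model with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(128, 4)), rng.normal(size=128)
w = rng.normal(size=4)

def loss(w):
    return np.mean((X @ w - y) ** 2)

# Analytic gradient of the mean-squared-error loss w.r.t. the weights.
g = 2 * X.T @ (X @ w - y) / len(y)

# First-order bound: |L(w + delta) - L(w)| <~ eps * ||g||_1 when
# each entry of delta is at most eps in magnitude, so penalizing
# ||g||_1 during training buys robustness to post-training quantization.
eps = 1e-4
delta = rng.uniform(-eps, eps, size=4)  # stand-in for quantization error
lhs = abs(loss(w + delta) - loss(w))
rhs = eps * np.abs(g).sum()
print(lhs, rhs)
```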

The Functional Neural Process

1 code implementation NeurIPS 2019 Christos Louizos, Xiahan Shi, Klamer Schutte, Max Welling

We present a new family of exchangeable stochastic processes, the Functional Neural Processes (FNPs).

Image Classification

DIVA: Domain Invariant Variational Autoencoders

3 code implementations24 May 2019 Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling

We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.

Domain Generalization, Rotated MNIST

DIVA: Domain Invariant Variational Autoencoder

no code implementations ICLR Workshop DeepGenStruct 2019 Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling

We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.

Domain Generalization, Rotated MNIST

Relaxed Quantization for Discretized Neural Networks

1 code implementation ICLR 2019 Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling

Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices.

General Classification, Quantization

Learning Sparse Neural Networks through $L_0$ Regularization

no code implementations ICLR 2018 Christos Louizos, Max Welling, Diederik P. Kingma

We further propose the "hard concrete" distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.

Model Selection

Learning Sparse Neural Networks through $L_0$ Regularization

4 code implementations4 Dec 2017 Christos Louizos, Max Welling, Diederik P. Kingma

We further propose the "hard concrete" distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.

Model Selection
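The hard concrete construction quoted above can be sketched directly: sample a binary concrete variable, stretch it onto an interval slightly wider than [0, 1], and clamp with a hard-sigmoid so the gates hit exact 0 and exact 1 with nonzero probability. A minimal sketch, where the stretch limits and temperature (gamma = -0.1, zeta = 1.1, beta = 2/3) are commonly used defaults rather than values stated in this listing:

```python
import numpy as np

def hard_concrete_sample(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample hard concrete gates: draw a binary concrete sample in (0, 1),
    stretch it to (gamma, zeta), then clamp to [0, 1] with a hard-sigmoid."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    # binary concrete sample: sigmoid of reparameterized logistic noise
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma      # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)        # hard-sigmoid: exact 0s and 1s

gates = hard_concrete_sample(np.zeros(10000))
print((gates == 0).mean(), (gates == 1).mean())
```

Because the stretched interval overhangs [0, 1] on both sides, a sizeable fraction of samples clamp to exactly 0 (gate pruned) or exactly 1 (gate fully on), which is what makes the relaxation usable for sparsification.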

Bayesian Compression for Deep Learning

3 code implementations NeurIPS 2017 Christos Louizos, Karen Ullrich, Max Welling

Compression and computational efficiency in deep learning have become a problem of great significance.

Computational Efficiency

Causal Effect Inference with Deep Latent-Variable Models

6 code implementations NeurIPS 2017 Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, Max Welling

Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers.

Causal Inference

Multiplicative Normalizing Flows for Variational Bayesian Neural Networks

7 code implementations ICML 2017 Christos Louizos, Max Welling

We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks.

Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors

2 code implementations15 Mar 2016 Christos Louizos, Max Welling

We introduce a variational Bayesian neural network where the parameters are governed via a probability distribution on random matrices.

Gaussian Processes

The Variational Fair Autoencoder

2 code implementations3 Nov 2015 Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard Zemel

We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible.

General Classification, Sentiment Analysis
