no code implementations • 31 Oct 2024 • Denis Korzhenkov, Christos Louizos
The problem of heterogeneous clients in federated learning has recently attracted considerable attention.
no code implementations • 23 Oct 2024 • Ashish Khisti, M. Reza Ebrahimi, Hassan Dbouk, Arash Behboodi, Roland Memisevic, Christos Louizos
In this work we show that the optimal scheme can be decomposed into a two-step solution: in the first step an importance sampling (IS) type scheme is used to select one intermediate token; in the second step (single-draft) speculative sampling is applied to generate the output token.
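The second step referred to above is the standard single-draft speculative sampling rule. A minimal sketch of that step follows; the multi-draft importance-sampling selection is not shown, and the function and argument names are illustrative only, not the paper's API:

```python
import numpy as np

def speculative_accept(draft_token, draft_probs, target_probs, rng=None):
    """Standard single-draft speculative sampling step: accept the drafted token
    with probability min(1, p(x)/q(x)); on rejection, resample from the
    normalized residual max(0, p - q)."""
    rng = rng or np.random.default_rng()
    p, q = target_probs[draft_token], draft_probs[draft_token]
    if rng.random() < min(1.0, p / q):
        return draft_token
    residual = np.maximum(target_probs - draft_probs, 0.0)
    return rng.choice(len(target_probs), p=residual / residual.sum())
```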
no code implementations • 9 Jul 2024 • Fabio Valerio Massoli, Christos Louizos, Arash Behboodi
We propose to learn a distribution over dictionaries via a variational approach, dubbed Variational Learning ISTA (VLISTA).
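For background, plain ISTA with a fixed dictionary is sketched below; VLISTA replaces the fixed dictionary with a learned variational distribution over dictionaries, which is not shown here. Names and defaults are illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(y, D, lam=0.1, step=None, iters=100):
    """Plain ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1 with a fixed dictionary D."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * D.T @ (D @ x - y), step * lam)
    return x
```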
no code implementations • 13 May 2024 • Mahdi Morafah, Matthias Reisser, Bill Lin, Christos Louizos
The proliferation of edge devices has brought Federated Learning (FL) to the forefront as a promising paradigm for decentralized and collaborative model training while preserving the privacy of clients' data.
no code implementations • 3 May 2024 • Christos Louizos, Matthias Reisser, Denis Korzhenkov
Along with the proposed SimCLR extensions, we also study how different sources of non-i.i.d.-ness can impact the performance of federated unsupervised learning through global mutual information maximization; we find that a global objective is beneficial for some sources of non-i.i.d.-ness but can be detrimental for others.
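For reference, the centralized SimCLR objective that the federated extensions build on is the NT-Xent contrastive loss; a minimal PyTorch sketch is below. This is only the standard starting loss, not the proposed federated objective:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Standard SimCLR NT-Xent loss for a batch of paired embeddings z1, z2 of shape (N, d)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```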
no code implementations • 3 May 2024 • Alvaro H. C. Correia, Fabio Valerio Massoli, Christos Louizos, Arash Behboodi
Conformal Prediction (CP) is a distribution-free uncertainty estimation framework that constructs prediction sets guaranteed to contain the true answer with a user-specified probability.
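A minimal sketch of split conformal prediction for classification, illustrating the coverage guarantee described above; the score choice and names are generic textbook defaults, not the paper's specific method:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the score 1 - p(true label).
    Returns label sets that contain the true label with probability >= 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]        # nonconformity scores
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)    # finite-sample correction
    q_hat = np.quantile(scores, q_level, method="higher")
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]  # one label set per test point
```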
no code implementations • 20 Apr 2024 • Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling
The COVID-19 pandemic had enormous economic and societal consequences.
no code implementations • 26 Feb 2024 • Babak Ehteshami Bejnordi, Gaurav Kumar, Amelie Royer, Christos Louizos, Tijmen Blankevoort, Mohsen Ghafoorian
In this work, we propose InterroGate, a novel multi-task learning (MTL) architecture designed to mitigate task interference while optimizing inference computational efficiency.
no code implementations • 18 Dec 2023 • Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling
The pandemic in 2020 and 2021 had enormous economic and societal consequences, and studies show that contact tracing algorithms can be key in the early containment of the virus.
no code implementations • 28 Apr 2023 • Bruno Mlodozeniec, Matthias Reisser, Christos Louizos
Well-tuned hyperparameters are crucial for obtaining good generalization behavior in neural networks.
no code implementations • 22 Jun 2022 • Kartik Gupta, Marios Fournarakis, Matthias Reisser, Christos Louizos, Markus Nagel
We perform extensive experiments on standard FL benchmarks to evaluate our proposed FedAvg variants for quantization robustness and provide a convergence analysis for our Quantization-Aware variants in FL.
no code implementations • 19 Nov 2021 • Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
no code implementations • 9 Nov 2021 • Aleksei Triastcyn, Matthias Reisser, Christos Louizos
Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them is still an open problem.
no code implementations • 14 Jul 2021 • Matthias Reisser, Christos Louizos, Efstratios Gavves, Max Welling
Federated learning (FL) has emerged as the predominant approach for collaborative training of neural network models across multiple users, without the need to gather the data at a central location.
no code implementations • 18 Apr 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setting, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
no code implementations • 1 Jan 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
no code implementations • 1 Jan 2021 • Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
Federated averaging (FedAvg), despite its simplicity, has been the main approach in training neural networks in the federated learning setting.
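For reference, one FedAvg round as described above: each client trains locally from the shared weights and the server averages the results weighted by dataset size. A minimal sketch with illustrative placeholder names (`local_update` is assumed to run a few epochs of local SGD):

```python
import copy
import numpy as np

def fedavg_round(global_weights, clients, local_update):
    """One FedAvg round: local training on each client, then a weighted average on the server."""
    total = sum(c["num_examples"] for c in clients)
    new_weights = [np.zeros_like(w) for w in global_weights]
    for c in clients:
        local = local_update(copy.deepcopy(global_weights), c["data"])  # local SGD epochs
        for acc, w in zip(new_weights, local):
            acc += (c["num_examples"] / total) * w
    return new_weights
```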
1 code implementation • 25 Aug 2020 • Rik Helwegen, Christos Louizos, Patrick Forré
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
no code implementations • 9 Jul 2020 • Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga, Max Welling
In this paper, we propose Federated User Authentication (FedUA), a framework for privacy-preserving training of UA models.
1 code implementation • NeurIPS 2020 • Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling
We introduce Bayesian Bits, a practical method for joint mixed-precision quantization and pruning through gradient-based optimization.
no code implementations • ICML 2020 • Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, Tijmen Blankevoort
In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss.
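A simplified, hedged sketch of a learned-rounding mechanism in the spirit of AdaRound: each weight chooses between rounding down and up via a continuous variable that a regularizer pushes towards a binary decision while a local reconstruction loss is minimized. The specific parameterization below (rectified sigmoid, stretch constants, regularizer form) is illustrative rather than the paper's exact formulation:

```python
import torch

GAMMA, ZETA = -0.1, 1.1  # stretch limits for the rectified sigmoid (assumed values)

def rectified_sigmoid(V):
    return torch.clamp(torch.sigmoid(V) * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def soft_round(w, scale, V):
    """Quantize w with a learned rounding offset h(V) in [0, 1] instead of round-to-nearest."""
    return scale * (torch.floor(w / scale) + rectified_sigmoid(V))

def rounding_regularizer(V, beta=2.0):
    """Penalty that drives h(V) towards 0 or 1, so rounding becomes a hard choice."""
    h = rectified_sigmoid(V)
    return torch.sum(1.0 - torch.abs(2.0 * h - 1.0) ** beta)
```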
no code implementations • ICLR 2020 • Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization.
1 code implementation • NeurIPS 2019 • Christos Louizos, Xiahan Shi, Klamer Schutte, Max Welling
We present a new family of exchangeable stochastic processes, the Functional Neural Processes (FNPs).
4 code implementations • 24 May 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
1 code implementation • ICLR 2019 • Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling
Neural network quantization has become an important research area due to its great impact on the deployment of large models on resource-constrained devices.
no code implementations • ICLR 2018 • Christos Louizos, Max Welling, Diederik P. Kingma
We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
4 code implementations • 4 Dec 2017 • Christos Louizos, Max Welling, Diederik P. Kingma
We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
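A minimal sketch of the stretch-and-clamp construction described above: draw a binary concrete sample, stretch it beyond [0, 1], then pass it through a hard-sigmoid. The default temperature and stretch limits below are assumptions chosen for illustration:

```python
import torch

def sample_hard_concrete(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """Sample hard concrete gates: binary concrete sample -> stretch to (gamma, zeta) -> clamp."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)  # binary concrete
    s_bar = s * (zeta - gamma) + gamma                                        # stretch
    return torch.clamp(s_bar, 0.0, 1.0)                                       # hard-sigmoid
```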
6 code implementations • NeurIPS 2017 • Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, Max Welling
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers.
Ranked #9 on Causal Inference on IHDP
3 code implementations • NeurIPS 2017 • Christos Louizos, Karen Ullrich, Max Welling
Compression and computational efficiency in deep learning have become a problem of great significance.
7 code implementations • ICML 2017 • Christos Louizos, Max Welling
We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks.
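For illustration, the kind of multiplicative noise the sentence above refers to is Gaussian-dropout-style noise on a layer's inputs; the paper's contribution is to treat such noise variables as auxiliary latents and model them with richer distributions, which is not shown in this sketch:

```python
import torch

def multiplicative_noise_forward(x, weight, log_alpha):
    """Linear layer with Gaussian multiplicative noise z ~ N(1, alpha) on the inputs."""
    alpha = torch.exp(log_alpha)                       # per-input noise variance
    z = 1.0 + alpha.sqrt() * torch.randn_like(x)       # multiplicative noise sample
    return (x * z) @ weight.t()
```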
2 code implementations • 15 Mar 2016 • Christos Louizos, Max Welling
We introduce a variational Bayesian neural network where the parameters are governed via a probability distribution on random matrices.
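One standard way to place a distribution over a weight matrix is the matrix-variate Gaussian, which factorizes the covariance over rows and columns; a minimal sampling sketch under that assumption (not necessarily the paper's exact parameterization):

```python
import numpy as np

def sample_matrix_normal(M, row_cov, col_cov, rng=None):
    """Sample W ~ MN(M, U, V) as W = M + A Z B^T, where U = A A^T, V = B B^T and
    Z has i.i.d. standard normal entries. Row/column covariances keep the parameter
    count far below a full covariance over all weights."""
    rng = rng or np.random.default_rng()
    A = np.linalg.cholesky(row_cov)
    B = np.linalg.cholesky(col_cov)
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T
```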
2 code implementations • 3 Nov 2015 • Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard Zemel
We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible.
Ranked #4 on Sentiment Analysis on Multi-Domain Sentiment Dataset