1 code implementation • 14 Mar 2024 • Zhuo Zhi, Ziquan Liu, Moe Elbadawi, Adam Daneshmend, Mine Orlu, Abdul Basit, Andreas Demosthenous, Miguel Rodrigues
The proposed data-dependent framework exhibits higher sample efficiency and is empirically shown to improve the classification model's performance on both full- and missing-modality data in the low-data regime, across various multimodal learning tasks.
no code implementations • 22 Feb 2024 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
1 code implementation • 9 Feb 2024 • Martin Ferianc, Miguel Rodrigues
YAMLE: Yet Another Machine Learning Environment is an open-source framework that facilitates rapid prototyping and experimentation with machine learning (ML) models and methods.
no code implementations • 9 Feb 2024 • Martin Ferianc, Hongxiang Fan, Miguel Rodrigues
To improve the hardware efficiency of ensembles of separate NNs, recent methods create ensembles within a single network by adding early exits or adopting multi-input multi-output (MIMO) approaches.
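A minimal PyTorch sketch of the early-exit flavour of a single-network ensemble is given below; the layer sizes, exit placement and the averaging of exit probabilities are illustrative assumptions, not the architecture studied in the paper.

# Toy "ensemble within a single network" via early exits (illustrative only).
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(256, 256), nn.ReLU())
        self.exit1 = nn.Linear(256, num_classes)   # early classifier
        self.exit2 = nn.Linear(256, num_classes)   # final classifier

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        # Each exit acts as one ensemble member; sharing the backbone keeps
        # the parameter and compute cost close to a single network.
        return [self.exit1(h1), self.exit2(h2)]

model = EarlyExitNet()
x = torch.randn(4, 784)
logits_per_exit = model(x)
# Ensemble prediction: average the per-exit class probabilities.
probs = torch.stack([l.softmax(dim=-1) for l in logits_per_exit]).mean(dim=0)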
no code implementations • 4 Feb 2024 • Ziquan Liu, Zhuo Zhi, Ilija Bogunovic, Carsten Gerner-Beuerle, Miguel Rodrigues
Our paper offers a new approach to certifying the performance of machine learning models in the presence of adversarial attacks, with population-level risk guarantees.
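For orientation only, and not as the certification procedure proposed in the paper: one standard route to a population-level guarantee of this kind is a concentration bound on the adversarial error measured on n held-out i.i.d. samples. With probability at least 1 - \delta,

\hat{R}_{\mathrm{adv},n}(f) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\bigl\{\exists\, \tilde{x} \in \mathcal{B}(x_i):\, f(\tilde{x}) \neq y_i\bigr\},
\qquad
R_{\mathrm{adv}}(f) \;\le\; \hat{R}_{\mathrm{adv},n}(f) + \sqrt{\frac{\log(1/\delta)}{2n}},

where \mathcal{B}(x_i) denotes the attacker's perturbation set around x_i.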
no code implementations • 22 Jan 2024 • Zhuo Zhi, Moe Elbadawi, Adam Daneshmend, Mine Orlu, Abdul Basit, Andreas Demosthenous, Miguel Rodrigues
EHR-based hemoglobin level/anemia degree prediction is non-invasive and rapid, but it remains challenging because EHR data typically form an irregular multivariate time series with a significant number of missing values and irregular time intervals.
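As a small illustration of what "irregular with missing values" means in practice, the sketch below converts a value matrix with NaNs into (values, observation mask, time-since-last-observation) tensors; the representation and variable names are assumptions made for clarity, not the paper's model.

# Illustrative preprocessing of an irregular multivariate time series (e.g. EHR labs).
import numpy as np

def prepare_irregular_series(values: np.ndarray, timestamps: np.ndarray):
    """values: (T, D) with NaN for missing entries; timestamps: (T,) in hours."""
    mask = ~np.isnan(values)                      # 1 where a variable was observed
    filled = np.where(mask, values, 0.0)          # simple zero-fill for missing entries
    delta = np.zeros_like(values, dtype=float)    # time since each variable was last seen
    for t in range(1, len(timestamps)):
        gap = timestamps[t] - timestamps[t - 1]
        delta[t] = np.where(mask[t - 1], gap, delta[t - 1] + gap)
    return filled, mask.astype(float), delta

vals = np.array([[13.1, np.nan], [np.nan, 4.2], [12.5, np.nan]])
ts = np.array([0.0, 6.0, 30.0])
x, m, d = prepare_irregular_series(vals, ts)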
1 code implementation • 25 Aug 2023 • Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip Treleaven, Miguel Rodrigues
Our results quantify the cultural alignment of LLMs and reveal differences between LLMs along explanatory cultural dimensions.
1 code implementation • 30 Jun 2023 • Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues
Enhancing the generalisation abilities of neural networks (NNs) by integrating noise, such as MixUp or Dropout, during training has emerged as a powerful and adaptable technique.
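For concreteness, a minimal MixUp sketch in PyTorch follows; the Beta parameter and the use of soft targets are standard MixUp choices, not specifics taken from the paper.

# Minimal MixUp-style noise injection during training (illustrative).
import torch

def mixup(x, y_onehot, alpha: float = 0.2):
    """Convexly combine a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed

x = torch.randn(8, 3, 32, 32)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
x_m, y_m = mixup(x, y)  # train with a soft-target cross-entropy on (x_m, y_m)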
no code implementations • 11 Jun 2023 • Daniel Jakubovitz, David Uliel, Miguel Rodrigues, Raja Giryes
We focus on the task of semi-supervised transfer learning, in which unlabeled samples from the target dataset are available during network training on the source dataset.
no code implementations • 27 Apr 2023 • Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell
We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs algorithm.
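For context, the Gibbs algorithm is the randomised learner that tilts a prior over hypotheses by the exponentiated empirical risk; the standard single-task statement below is given for orientation rather than as the paper's exact joint-training construction:

P_{W \mid S}(w) \;=\; \frac{\pi(w)\, e^{-\gamma \hat{L}(w, S)}}{\int \pi(w')\, e^{-\gamma \hat{L}(w', S)}\, dw'},
\qquad
\hat{L}(w, S) = \frac{1}{n}\sum_{i=1}^{n} \ell(w, z_i),

where \pi is a prior over hypotheses and \gamma > 0 is the inverse temperature.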
no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan
Our findings offer the new insight that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and the input training data but also by the information shared between the labeled and pseudo-labeled data samples.
no code implementations • 19 May 2022 • Martin Ferianc, Miguel Rodrigues
We demonstrate the generality of the approach on combinations of toy data, SVHN/CIFAR-10, NN architectures ranging from simple to complex, and different tasks.
no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues
Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.
no code implementations • 20 Jan 2022 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell
Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.
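A representative bound from this literature, the mutual-information bound of Xu and Raginsky (2017), is shown here for orientation and is not the bound derived in the paper: if the loss is \sigma-sub-Gaussian under the data distribution, then

\bigl|\mathbb{E}\bigl[L(W) - \hat{L}(W, S)\bigr]\bigr| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},

where I(W; S) is the mutual information between the learned hypothesis W and the training set S of n samples.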
no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell
We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.
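As a point of reference, the \alpha-weighted empirical risk combines the source sample S and the target sample T through a convex combination (the weighting convention below is an assumption of this note, stated for clarity):

\hat{L}_\alpha(w) \;=\; (1-\alpha)\,\frac{1}{m}\sum_{i=1}^{m} \ell(w, s_i) \;+\; \alpha\,\frac{1}{n}\sum_{j=1}^{n} \ell(w, t_j),
\qquad \alpha \in [0, 1],

and the corresponding Gibbs-based learner samples w with probability proportional to \pi(w)\, e^{-\gamma \hat{L}_\alpha(w)}. Two-stage-ERM, roughly speaking, first fits on the source data and then adapts part of the model on the target data.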
no code implementations • 5 Oct 2021 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • 4 Jun 2021 • Martin Ferianc, Zhiqiang Que, Hongxiang Fan, Wayne Luk, Miguel Rodrigues
To further improve the overall algorithmic-hardware performance, a co-design framework is proposed to explore the most fitting algorithmic-hardware configurations for Bayesian RNNs.
no code implementations • 12 May 2021 • Hongxiang Fan, Martin Ferianc, Miguel Rodrigues, HongYu Zhou, Xinyu Niu, Wayne Luk
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision-making, and recommendation systems.
1 code implementation • 14 Apr 2021 • Martin Ferianc, Divyansh Manocha, Hongxiang Fan, Miguel Rodrigues
Fully convolutional U-shaped neural networks have largely been the dominant approach for pixel-wise image segmentation.
1 code implementation • 22 Feb 2021 • Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues
Bayesian neural networks (BNNs) are making significant progress in many research areas where decision-making needs to be accompanied by uncertainty estimation.
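The sketch below shows how a BNN-style model turns repeated stochastic forward passes into an uncertainty estimate; MC Dropout stands in for the weight posterior here, which is a common approximation and an assumption of this example rather than the paper's method.

# Predictive uncertainty from repeated stochastic forward passes (illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 3))
model.train()  # keep dropout active so each forward pass acts as a posterior-like sample

x = torch.randn(16, 20)
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=-1) for _ in range(32)])  # (32, 16, 3)

mean_probs = samples.mean(dim=0)  # predictive distribution per input
predictive_entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
# High predictive entropy flags inputs whose predictions should be treated cautiously.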
no code implementations • 12 Jul 2020 • Martin Ferianc, Hongxiang Fan, Miguel Rodrigues
In recent years, neural architecture search (NAS) has received intense scientific and industrial interest due to its ability to find high-accuracy neural architectures for various artificial intelligence tasks such as image classification or object detection.
no code implementations • ICLR 2019 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms.
no code implementations • 18 May 2018 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
As such, users and utility providers should collaborate in data privacy, a paradigm that has not yet been developed in the privacy research community.
no code implementations • NeurIPS 2013 • Liming Wang, David E. Carlson, Miguel Rodrigues, David Wilcox, Robert Calderbank, Lawrence Carin
We consider design of linear projection measurements for a vector Poisson signal model.
no code implementations • 28 Jan 2013 • Liming Wang, Miguel Rodrigues, Lawrence Carin
We investigate connections between information-theoretic and estimation-theoretic quantities in vector Poisson channel models.
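Both of the entries above build on the vector Poisson channel; for orientation, the standard form of that model, with the dark-current term written explicitly, is

Y_j \mid X \;\sim\; \mathrm{Pois}\bigl((\Phi X)_j + \lambda_j\bigr), \qquad j = 1, \dots, m,

with the components conditionally independent given X, where \Phi is the (entrywise non-negative) linear projection matrix being designed and \lambda_j \ge 0 models dark current.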