no code implementations • 27 Apr 2023 • Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell
We analyze the generalization ability of joint-training meta-learning algorithms via the Gibbs algorithm.
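For readers unfamiliar with the Gibbs algorithm referenced here, a minimal LaTeX sketch of the standard Gibbs (Boltzmann) posterior follows; the symbols $\pi$, $\gamma$, and $L_E$ are conventional notation chosen for illustration, not taken from the paper.

```latex
% Gibbs posterior over hypotheses w given a training set S, with prior \pi,
% inverse temperature \gamma, and empirical risk L_E(w, S):
P_{W \mid S}(w \mid S) \;=\;
  \frac{\pi(w)\, e^{-\gamma L_E(w, S)}}{\int \pi(w')\, e^{-\gamma L_E(w', S)}\,\mathrm{d}w'}
```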
no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan
Our findings offer the new insight that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and the input training data but also by the information *shared* between the *labeled* and *pseudo-labeled* data samples.
no code implementations • 15 Sep 2022 • Gholamali Aminian, Roberto Vega, Omar Rivasplata, Laura Toni, Miguel Rodrigues
Counterfactual risk minimization is a framework for offline policy optimization with logged data, which consists of the context, action, propensity score, and reward for each sample point.
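A minimal sketch of how such logged data and a plain inverse-propensity-score (IPS) risk estimate look in code; this is a generic illustration under assumed field names, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LoggedSample:
    context: list      # features x observed by the logging policy
    action: int        # action a chosen by the logging policy
    propensity: float  # probability p = pi_0(a | x) of that choice
    reward: float      # observed reward r for (x, a)

def ips_risk(samples: List[LoggedSample],
             new_policy: Callable[[list, int], float]) -> float:
    """IPS estimate of the risk (negative reward) of a new policy.

    new_policy(context, action) returns the probability that the new
    policy would pick `action` in `context`.
    """
    total = 0.0
    for s in samples:
        weight = new_policy(s.context, s.action) / s.propensity
        total += -s.reward * weight  # risk = negative reward, reweighted
    return total / len(samples)
```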
no code implementations • 19 May 2022 • Martin Ferianc, Miguel Rodrigues
We demonstrate the generality of the approach on combinations of toy data, SVHN/CIFAR-10, NN architectures ranging from simple to complex, and different tasks.
no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues
Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.
no code implementations • 20 Jan 2022 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell
Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.
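As background for this line of work, the quantity being bounded is the expected generalization error, i.e., the expected gap between population and empirical risks; the notation below is the standard convention, stated here only for context.

```latex
% Expected generalization error of a learning algorithm P_{W|S}, with
% population risk L_P(w) = E_{Z ~ P_Z}[ l(w, Z) ] and empirical risk
% L_E(w, S) = (1/n) \sum_{i=1}^{n} l(w, Z_i):
\overline{\mathrm{gen}}(P_{W \mid S}, P_Z)
  \;=\; \mathbb{E}_{W, S}\!\left[\, L_P(W) - L_E(W, S) \,\right]
```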
no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell
We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.
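A hedged sketch of the two training objectives being compared, written in the usual transfer-learning form; the exact weighting convention for $\alpha$ is an assumption, since the excerpt does not spell it out.

```latex
% alpha-weighted ERM: one fit on a convex combination of the source-task
% empirical risk L_E^{src} and the target-task empirical risk L_E^{tgt}:
\hat{w}_{\alpha} \;=\; \arg\min_{w}\;
  (1-\alpha)\, L_E^{\mathrm{src}}(w) \;+\; \alpha\, L_E^{\mathrm{tgt}}(w)

% two-stage ERM: first minimize L_E^{src}(w), then refit on the target
% data starting from (or partially freezing) the source solution.
```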
no code implementations • 5 Oct 2021 • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro, Miguel Rodrigues
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
no code implementations • 4 Jun 2021 • Martin Ferianc, Zhiqiang Que, Hongxiang Fan, Wayne Luk, Miguel Rodrigues
To further improve the overall algorithm-hardware performance, a co-design framework is proposed to explore the most suitable algorithm-hardware configurations for Bayesian RNNs.
no code implementations • 12 May 2021 • Hongxiang Fan, Martin Ferianc, Miguel Rodrigues, HongYu Zhou, Xinyu Niu, Wayne Luk
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision making, and recommendation systems.
1 code implementation • 14 Apr 2021 • Martin Ferianc, Divyansh Manocha, Hongxiang Fan, Miguel Rodrigues
Fully convolutional U-shaped neural networks have largely been the dominant approach for pixel-wise image segmentation.
1 code implementation • 22 Feb 2021 • Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues
Bayesian neural networks (BNNs) are making significant progress in many research areas where decision-making needs to be accompanied by uncertainty estimation.
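As a generic illustration of how BNN decisions come with uncertainty estimates, the common approach is to average several stochastic forward passes (e.g., Monte Carlo dropout); the sketch below is a minimal PyTorch-style example under assumed model and sample-count choices, not the paper's accelerator or method.

```python
import torch

def mc_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 32):
    """Monte Carlo estimate of the predictive mean and uncertainty of a
    stochastic (e.g., dropout-based) Bayesian neural network.

    Leaving the model in train() mode keeps stochastic layers sampling,
    so each forward pass draws a different set of weights/masks.
    """
    model.train()  # keep dropout (weight sampling) active at test time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, classes)
    mean = probs.mean(dim=0)  # predictive distribution
    std = probs.std(dim=0)    # per-class disagreement across samples
    return mean, std
```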
no code implementations • 12 Jul 2020 • Martin Ferianc, Hongxiang Fan, Miguel Rodrigues
In recent years, neural architecture search (NAS) has received intensive scientific and industrial interest due to its ability to find high-accuracy neural architectures for various artificial intelligence tasks such as image classification and object detection.
no code implementations • ICLR 2019 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms.
no code implementations • 18 May 2018 • Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
As such, users and utility providers should collaborate in data privacy, a paradigm that has not yet been developed in the privacy research community.
no code implementations • NeurIPS 2013 • Liming Wang, David E. Carlson, Miguel Rodrigues, David Wilcox, Robert Calderbank, Lawrence Carin
We consider design of linear projection measurements for a vector Poisson signal model.
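For context, the standard vector Poisson measurement model takes the following form; the symbols $\Phi$ (projection matrix) and $\lambda$ (dark-current offset) are conventional notation used here only for illustration.

```latex
% Each of the m measurements is a Poisson count whose rate is a linear
% projection of the nonnegative signal x plus a dark-current offset lambda_i:
Y_i \,\big|\, x \;\sim\; \mathrm{Poisson}\!\left( (\Phi x)_i + \lambda_i \right),
  \qquad i = 1, \dots, m
```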
no code implementations • 28 Jan 2013 • Liming Wang, Miguel Rodrigues, Lawrence Carin
We investigate connections between information-theoretic and estimation-theoretic quantities in vector Poisson channel models.