Search Results for author: Gholamali Aminian

Found 19 papers, 1 paper with code

Semi-pessimistic Reinforcement Learning

no code implementations • 25 May 2025 • Jin Zhu, Xin Zhou, Jiaang Yao, Gholamali Aminian, Omar Rivasplata, Simon Little, Lexin Li, Chengchun Shi

Offline reinforcement learning, however, faces the challenge of distributional shift, where the learned policy may encounter unseen scenarios not covered in the offline data.

Reinforcement Learning +1

Generalization Error of $f$-Divergence Stabilized Algorithms via Duality

no code implementations • 20 Feb 2025 • Francisco Daunas, Iñaki Esnaola, Samir M. Perlaza, Gholamali Aminian

The solution to empirical risk minimization with $f$-divergence regularization (ERM-$f$DR) is extended to constrained optimization problems, establishing conditions for equivalence between the solution and constraints.
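
For orientation, a rough sketch of the setup (notation assumed here, not taken from the paper): ERM-$f$DR selects a distribution $P$ over models by penalizing the empirical risk $\widehat{R}_n(P)$ with an $f$-divergence to a reference measure $Q$, while the constrained formulation bounds that divergence instead,

$$\min_{P}\; \widehat{R}_n(P) + \lambda\, D_f(P\,\|\,Q) \qquad\text{vs.}\qquad \min_{P:\; D_f(P\,\|\,Q)\le c}\; \widehat{R}_n(P), \qquad D_f(P\,\|\,Q)=\int f\!\Big(\frac{dP}{dQ}\Big)\, dQ,$$

with $f$ convex and $f(1)=0$; the two problems coincide under a suitable correspondence between $\lambda$ and $c$, which is the kind of equivalence condition the paper establishes.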

Private Synthetic Graph Generation and Fused Gromov-Wasserstein Distance

no code implementations • 17 Feb 2025 • Leoni Carla Wirth, Gholamali Aminian, Gesine Reinert

The network generator should be easy to implement and should come with theoretical guarantees.

Graph Generation

Understanding Transfer Learning via Mean-field Analysis

no code implementations • 22 Oct 2024 • Gholamali Aminian, Łukasz Szpruch, Samuel N. Cohen

We propose a novel framework for exploring generalization errors of transfer learning through the lens of differential calculus on the space of probability measures.

Transfer Learning

Generalization Error of the Tilted Empirical Risk

no code implementations • 28 Sep 2024 • Gholamali Aminian, Amir R. Asadi, Tian Li, Ahmad Beirami, Gesine Reinert, Samuel N. Cohen

The generalization error (risk) of a supervised statistical learning algorithm quantifies its prediction ability on previously unseen data.
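
For context, the tilted empirical risk replaces the usual sample average of losses with an exponentially tilted average; a standard way to write it (notation assumed) is

$$\widehat{R}_\gamma(w) \;=\; \frac{1}{\gamma}\,\log\!\Big(\frac{1}{n}\sum_{i=1}^{n} e^{\gamma\,\ell(w, z_i)}\Big),$$

which recovers the ordinary empirical risk as $\gamma \to 0$ and interpolates toward the maximum (respectively minimum) per-sample loss as $\gamma \to +\infty$ (respectively $-\infty$).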

Robust Semi-supervised Learning via $f$-Divergence and $α$-Rényi Divergence

no code implementations • 1 May 2024 • Gholamali Aminian, Amirhossien Bagheri, Mahyar JafariNodeh, Radmehr Karimian, Mohammad-Hossein Yassaee

This paper investigates a range of empirical risk functions and regularization methods suitable for self-training methods in semi-supervised learning.

Pseudo Label
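
To make the self-training setting concrete, below is a minimal generic pseudo-labeling loop in Python; it is illustrative only and does not implement the paper's $f$-divergence or $\alpha$-Rényi empirical risks, and the choice of model, confidence threshold, and number of rounds are arbitrary assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    """Iteratively add high-confidence pseudo-labeled points to the training set."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_train, y_train)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)        # class probabilities on the unlabeled pool
        conf = proba.max(axis=1)                 # confidence of the predicted class
        keep = conf >= threshold                 # only trust confident predictions
        if not keep.any():
            break
        pseudo_y = model.classes_[proba[keep].argmax(axis=1)]  # pseudo-labels for confident points
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, pseudo_y])
        pool = pool[~keep]                       # drop used points from the pool
    return model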

Generalization Error of Graph Neural Networks in the Mean-field Regime

1 code implementation • 10 Feb 2024 • Gholamali Aminian, Yixuan He, Gesine Reinert, Łukasz Szpruch, Samuel N. Cohen

This work provides a theoretical framework for assessing the generalization error of graph neural networks in the over-parameterized regime, where the number of parameters surpasses the quantity of data points.

Graph Classification

Mean-field Analysis of Generalization Errors

no code implementations • 20 Jun 2023 • Gholamali Aminian, Samuel N. Cohen, Łukasz Szpruch

We propose a novel framework for exploring weak and $L_2$ generalization errors of algorithms through the lens of differential calculus on the space of probability measures.
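
For reference, one common way to read these two quantities (notation assumed): for a learned parameter or measure $W$, population risk $R(\cdot)$, and empirical risk $\widehat{R}_n(\cdot)$,

$$\text{weak error: } \big|\,\mathbb{E}\big[R(W) - \widehat{R}_n(W)\big]\,\big|, \qquad L_2 \text{ error: } \Big(\mathbb{E}\big[(R(W) - \widehat{R}_n(W))^2\big]\Big)^{1/2},$$

i.e. the generalization gap in expectation versus in mean square; the mean-field viewpoint treats $W$ as a measure over parameters and differentiates such functionals on the space of probability measures.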

On the Generalization Error of Meta Learning for the Gibbs Algorithm

no code implementations • 27 Apr 2023 • Yuheng Bu, Harsha Vardhan Tetali, Gholamali Aminian, Miguel Rodrigues, Gregory Wornell

We analyze the generalization ability of joint-training meta learning algorithms via the Gibbs algorithm.

Meta-Learning
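
The Gibbs algorithm recurs in several entries below; as a reminder of the standard definition (notation assumed), it samples the hypothesis from the Gibbs posterior

$$P_{W\mid S}(w) \;\propto\; \pi(w)\,\exp\!\big(-\gamma\,\widehat{R}_S(w)\big),$$

where $\pi$ is a prior over hypotheses, $\widehat{R}_S$ is the empirical risk on the training set $S$, and $\gamma \ge 0$ is the inverse temperature; $\gamma \to \infty$ recovers empirical risk minimization.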

How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm?

no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan

Our findings offer new insights that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and input training data but also by the information shared between the labeled and pseudo-labeled data samples.

Regression

Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

no code implementations • 2 Oct 2022 • Gholamali Aminian, Saeed Masiha, Laura Toni, Miguel R. D. Rodrigues

Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, as well as on the generalization error under the distribution mismatch scenario, where the mismatch between the test and training data distributions is modeled as an $\alpha$-Jensen-Shannon or $\alpha$-Rényi divergence.
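
For completeness, the Rényi divergence of order $\alpha$ is, in its standard form,

$$D_\alpha(P\,\|\,Q) \;=\; \frac{1}{\alpha-1}\,\log \mathbb{E}_{Q}\!\Big[\Big(\frac{dP}{dQ}\Big)^{\alpha}\Big], \qquad \alpha \in (0,1)\cup(1,\infty),$$

which recovers the KL divergence as $\alpha \to 1$; the $\alpha$-Jensen-Shannon divergence is a skewed variant of the Jensen-Shannon divergence built from the mixture $\alpha P + (1-\alpha)Q$ (exact convention as in the paper).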

Semi-supervised Batch Learning From Logged Data

no code implementations • 15 Sep 2022 • Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues

We propose learning methods for problems where feedback is missing for some samples, so the logged data contain both samples with feedback and samples with missing feedback.

Counterfactual

An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

no code implementations • 24 Feb 2022 • Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues

A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.

Tighter Expected Generalization Error Bounds via Convexity of Information Measures

no code implementations • 24 Feb 2022 • Gholamali Aminian, Yuheng Bu, Gregory Wornell, Miguel Rodrigues

Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature.

An Exact Characterization of the Generalization Error for the Gibbs Algorithm

no code implementations • NeurIPS 2021 • Gholamali Aminian, Yuheng Bu, Laura Toni, Miguel Rodrigues, Gregory Wornell

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm.

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

no code implementations • 2 Nov 2021 • Yuheng Bu, Gholamali Aminian, Laura Toni, Miguel Rodrigues, Gregory Wornell

We provide an information-theoretic analysis of the generalization ability of Gibbs-based transfer learning algorithms by focusing on two popular transfer learning approaches, $\alpha$-weighted-ERM and two-stage-ERM.

Transfer Learning
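
As a rough sketch (weighting convention assumed here, not taken from the paper), $\alpha$-weighted-ERM minimizes a convex combination of the source- and target-domain empirical risks,

$$\widehat{R}_\alpha(w) \;=\; \alpha\,\widehat{R}_{\text{source}}(w) + (1-\alpha)\,\widehat{R}_{\text{target}}(w), \qquad \alpha \in [0,1],$$

whereas two-stage-ERM first trains on the source data and then refines the model on the target data; the paper analyzes the Gibbs counterparts of both procedures.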
