Search Results for author: Andrea Agazzi

Found 8 papers, 0 papers with code

Scalable Bayesian inference for the generalized linear mixed model

no code implementations • 5 Mar 2024 Samuel I. Berchuck, Felipe A. Medeiros, Sayan Mukherjee, Andrea Agazzi

The generalized linear mixed model (GLMM) is a popular statistical approach for handling correlated data, and is used extensively in application areas where big data is common, including biomedical settings.

Bayesian Inference, Uncertainty Quantification
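For context, here is a hedged NumPy sketch of the model class (the simulated data and all names are illustrative, not from the paper): it writes down the unnormalized log posterior of a random-intercept logistic GLMM, the density that any Bayesian inference scheme for such models must target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random-intercept logistic GLMM:
#   y_ij ~ Bernoulli(sigmoid(x_ij @ beta + b_i)),  b_i ~ N(0, sigma_b^2)
n_groups, n_per = 20, 30
X = rng.normal(size=(n_groups * n_per, 2))
group = np.repeat(np.arange(n_groups), n_per)
beta_true, sigma_b_true = np.array([1.0, -0.5]), 0.8
b_true = rng.normal(0.0, sigma_b_true, size=n_groups)
logits = X @ beta_true + b_true[group]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

def log_joint(beta, b, log_sigma_b):
    """Unnormalized log posterior (flat priors on beta and log_sigma_b, for brevity)."""
    sigma_b = np.exp(log_sigma_b)
    eta = X @ beta + b[group]
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # stable Bernoulli log-likelihood
    logprior_b = -0.5 * np.sum((b / sigma_b) ** 2) - n_groups * log_sigma_b
    return loglik + logprior_b

print(log_joint(beta_true, b_true, np.log(sigma_b_true)))
```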

Global Optimality of Elman-type RNN in the Mean-Field Regime

no code implementations • 12 Mar 2023 Andrea Agazzi, Jianfeng Lu, Sayan Mukherjee

We analyze Elman-type Recurrent Neural Networks (RNNs) and their training in the mean-field regime.

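As a hedged illustration of the object studied (assumed architecture and scaling, not code from the paper), the sketch below implements an Elman-type RNN whose scalar output averages its N hidden units with a 1/N factor, the normalization typical of mean-field analyses.

```python
import numpy as np

rng = np.random.default_rng(1)

class MeanFieldElmanRNN:
    """Elman RNN h_t = tanh(W h_{t-1} + U x_t) with output (1/N) * sum_i a_i h_t[i].

    The 1/N output scaling is the mean-field normalization: as N grows,
    the output behaves like an average over the hidden units.
    """
    def __init__(self, n_hidden, n_in):
        self.W = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, n_hidden))
        self.U = rng.normal(0, 1, (n_hidden, n_in))
        self.a = rng.normal(0, 1, n_hidden)
        self.n = n_hidden

    def forward(self, xs):
        h = np.zeros(self.n)
        for x in xs:                       # unroll over the input sequence
            h = np.tanh(self.W @ h + self.U @ x)
        return self.a @ h / self.n         # mean-field 1/N scaling

net = MeanFieldElmanRNN(n_hidden=512, n_in=3)
xs = rng.normal(size=(10, 3))              # a length-10 input sequence
print(net.forward(xs))
```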

Large deviations for Markov jump processes with uniformly diminishing rates

no code implementations • 25 Feb 2021 Andrea Agazzi, Luisa Andreis, Robert I. A. Patterson, D. R. Michiel Renger

We prove a large-deviation principle (LDP) for the sample paths of jump Markov processes in the small noise limit when, possibly, all the jump rates vanish uniformly, but slowly enough, in a region of the state space.

Probability, Molecular Networks (MSC: 60F10, 60J75, 80A30)
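For orientation, the classical sample-path LDP for jump Markov processes with jump directions \nu_j/n and rates n\,\lambda_j(x) takes the Shwartz-Weiss form below; the paper extends results of this type to rates that may vanish uniformly. This is the textbook statement, not a quotation from the paper.

```latex
% As n -> infinity, P(X^n \approx \varphi) \asymp \exp(-n\, I(\varphi)),
% with rate functional
\[
  I(\varphi) \;=\; \int_0^T \sup_{\theta \in \mathbb{R}^d}
  \Big\{ \big\langle \theta, \dot{\varphi}(t) \big\rangle
  \;-\; \sum_j \lambda_j\big(\varphi(t)\big)\,
  \big( e^{\langle \theta, \nu_j \rangle} - 1 \big) \Big\}\, dt .
\]
```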

Urgency-aware Optimal Routing in Repeated Games through Artificial Currencies

no code implementations • 23 Nov 2020 Mauro Salazar, Dario Paccagnan, Andrea Agazzi, W. P. M. H. Heemels

In this paper we study the routing of selfish agents within a repeated-game framework and propose a fair incentive mechanism based on artificial currencies that achieves system-optimal routing while accounting for the agents' temporal preferences.
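A toy sketch of the artificial-currency idea follows (a simplified, assumed mechanism for illustration; the paper's scheme differs): agents repeatedly choose between a fast and a slow road, paying currency to use the fast one, with payments redistributed so the system stays budget-balanced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy artificial-currency routing (illustrative only, not the paper's mechanism):
# each round, agents draw an urgency; taking the fast road costs `price`
# units of currency, which are redistributed evenly afterwards.
n_agents, n_rounds, price = 100, 50, 3.0
currency = np.full(n_agents, 10.0)
total_cost = np.zeros(n_agents)

for _ in range(n_rounds):
    urgency = rng.choice([1.0, 5.0], size=n_agents, p=[0.7, 0.3])
    fast = (urgency > 1.0) & (currency >= price)   # pay only if urgent and solvent
    currency[fast] -= price
    currency += price * fast.sum() / n_agents      # budget-balanced redistribution
    slow = ~fast
    # Fast road: fixed low delay; slow road: congestion grows with usage.
    delay = np.where(fast, 1.0, 1.0 + 2.0 * slow.sum() / n_agents)
    total_cost += urgency * delay                  # urgency-weighted travel cost

print("avg urgency-weighted cost:", total_cost.mean())
```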

Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime

no code implementations • ICLR 2021 Andrea Agazzi, Jianfeng Lu

We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms.
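To make the setting concrete, here is a hedged NumPy sketch under strong simplifying assumptions (a two-armed bandit in place of the MDP, plain REINFORCE as the policy-gradient method): a softmax policy whose logits come from a single-hidden-layer network with the mean-field 1/N output scaling.

```python
import numpy as np

rng = np.random.default_rng(3)

# Softmax policy over 2 actions; logits from a single-hidden-layer net
# with mean-field 1/N output scaling. Toy 2-armed bandit + REINFORCE.
N = 256
lr = 50.0                            # scaled up to offset the 1/N factor
W = rng.normal(size=(N, 2))          # output weights, one row per hidden unit
c = rng.normal(size=N)               # hidden-unit parameters
reward = np.array([1.0, 0.0])        # action 0 is the better arm

def policy(c, W):
    z = np.tanh(c) @ W / N           # mean-field scaling: average over units
    p = np.exp(z - z.max())
    return p / p.sum()

for step in range(2000):
    p = policy(c, W)
    a = rng.choice(2, p=p)
    r = reward[a]
    g = -p.copy()                    # grad of log pi(a) w.r.t. the logits
    g[a] += 1.0
    W += lr * r * np.outer(np.tanh(c), g) / N
    c += lr * r * (W @ g) * (1 - np.tanh(c) ** 2) / N

print("final policy:", np.round(policy(c, W), 3))   # should favor action 0
```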

Temporal-difference learning for nonlinear value function approximation in the lazy training regime

no code implementations • 25 Sep 2019 Andrea Agazzi, Jianfeng Lu

We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.
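A hedged sketch of the lazy regime on a toy Markov chain follows (assumed scaling conventions, not the paper's construction): TD(0) with a nonlinear value model multiplied by a large factor alpha, so the parameters drift only O(1/alpha) from initialization while the values still converge.

```python
import numpy as np

rng = np.random.default_rng(4)

# TD(0) with a nonlinear value model in the "lazy" scaling V(s) = alpha * f(s; theta):
# for large alpha (and learning rate ~ 1/alpha^2), theta stays near its
# initialization, so training behaves like its linearization.
n_states, gamma, alpha = 5, 0.9, 100.0
lr = 0.01 / alpha ** 2
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random ergodic chain
r = rng.normal(size=n_states)                          # per-state rewards
theta = 0.01 * rng.normal(size=n_states)
theta0 = theta.copy()

def V(theta):
    return alpha * np.tanh(theta)                      # lazily scaled nonlinear model

s = 0
for step in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    v = V(theta)
    delta = r[s] + gamma * v[s_next] - v[s]                        # TD error
    theta[s] += lr * delta * alpha * (1 - np.tanh(theta[s]) ** 2)  # semi-gradient step
    s = s_next

V_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)  # Bellman solution
print("parameter drift:", np.linalg.norm(theta - theta0))  # stays O(1/alpha)
print("max value error:", np.abs(V(theta) - V_true).max())
```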

Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes

no code implementations • 27 May 2019 Andrea Agazzi, Jianfeng Lu

We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks.

Diffusion Fingerprints

no code implementations • 21 Aug 2014 Jimmy Dubuisson, Jean-Pierre Eckmann, Andrea Agazzi

We introduce, test and discuss a method for classifying and clustering data modeled as directed graphs.

Clustering, Dimensionality Reduction +1
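The sketch below is only a guess at the flavor of the method, inferred from the title rather than the paper: each node's fingerprint is taken to be the distribution of a t-step random walk started there, and the fingerprints are clustered with a bare-bones k-means.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hedged toy "diffusion fingerprint" (assumed construction, not necessarily
# the authors' method): the fingerprint of node i is the distribution of a
# t-step random walk started at i on a directed graph.
n, t, k = 60, 5, 3
A = (rng.random((n, n)) < 0.08).astype(float)   # random directed graph
np.fill_diagonal(A, 1.0)                         # self-loops: walk can stay put
P = A / A.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
fingerprints = np.linalg.matrix_power(P, t)      # row i = diffusion from node i

# Cluster fingerprints with a bare-bones k-means (illustrative only).
centers = fingerprints[rng.choice(n, k, replace=False)]
for _ in range(20):
    d = ((fingerprints[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    for j in range(k):
        if (labels == j).any():
            centers[j] = fingerprints[labels == j].mean(axis=0)

print(np.bincount(labels, minlength=k))          # cluster sizes
```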
