Search Results for author: Xinjie Lan

Found 8 papers, 1 paper with code

A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization

1 code implementation · 18 Jun 2021 · Xinjie Lan, Kenneth Barner

However, it is intractable to accurately estimate the MI in DNNs, thus most previous works have to relax the MI bound, which in turn weakens the information theoretic explanation for generalization.

A Probabilistic Representation for Deep Learning: Delving into The Information Bottleneck Principle

no code implementations · NeurIPS 2021 · Xinjie Lan, Kenneth Barner

The Information Bottleneck (IB) principle has recently attracted great attention as a way to explain Deep Neural Networks (DNNs), and the key is to accurately estimate the mutual information between a hidden layer and the dataset.
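The mutual-information estimate at the heart of the IB analysis is often computed by discretizing hidden-layer activations and applying the plug-in estimator to the resulting joint histogram. A minimal sketch of that plug-in estimator (the binning scheme and toy data below are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def mutual_information(xs, ts):
    """Plug-in estimate of I(X;T) in bits from paired discrete samples."""
    n = len(xs)
    px = Counter(xs)           # marginal counts of X
    pt = Counter(ts)           # marginal counts of T
    pxt = Counter(zip(xs, ts)) # joint counts of (X, T)
    mi = 0.0
    for (x, t), c in pxt.items():
        p_joint = c / n
        # p_joint * n * n / (px * pt) = p(x,t) / (p(x) p(t))
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * pt[t]))
    return mi

# Toy example: T is a deterministic 2-level binning of X, so I(X;T) = H(T) = 1 bit
xs = [0, 1, 2, 3] * 25
ts = [x // 2 for x in xs]
print(round(mutual_information(xs, ts), 3))  # → 1.0
```

With a deterministic binning T = f(X), the estimate collapses to the entropy of the bins, which is exactly the quantity whose looseness motivates the relaxed MI bounds criticized above.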

A Probabilistic Representation of Deep Learning for Improving The Information Theoretic Interpretability

no code implementations · 27 Oct 2020 · Xinjie Lan, Kenneth E. Barner

Based on the probabilistic explanations for MLPs, we improve the information-theoretic interpretability of MLPs in three aspects: (i) the random variable of f is discrete and the corresponding entropy is finite; (ii) the information bottleneck theory cannot correctly explain the information flow in MLPs if we take into account the back-propagation; and (iii) we propose novel information-theoretic explanations for the generalization of MLPs.

PAC-Bayesian Generalization Bounds for MultiLayer Perceptrons

no code implementations · 16 Jun 2020 · Xinjie Lan, Xin Guo, Kenneth E. Barner

We study PAC-Bayesian generalization bounds for Multilayer Perceptrons (MLPs) with the cross entropy loss.

Generalization Bounds · Variational Inference

Probabilistic modeling the hidden layers of deep neural networks

no code implementations · 25 Sep 2019 · Xinjie Lan, Kenneth E. Barner

Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model.


A Probabilistic Representation of Deep Learning

no code implementations · 26 Aug 2019 · Xinjie Lan, Kenneth E. Barner

In this work, we introduce a novel probabilistic representation of deep learning, which provides an explicit explanation for Deep Neural Networks (DNNs) in three aspects: (i) neurons define the energy of a Gibbs distribution; (ii) the hidden layers of DNNs formulate Gibbs distributions; and (iii) the whole architecture of DNNs can be interpreted as a Bayesian neural network.
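The first two aspects can be illustrated with a tiny sketch: treat each neuron's pre-activation as a (negative) energy and normalize the layer with a partition function, which yields a Gibbs (softmax) distribution. The weights, bias, and input below are arbitrary toy values, not parameters from the paper:

```python
import math

def gibbs_layer(weights, bias, x):
    """Map one layer's pre-activations to a Gibbs distribution.

    Each neuron contributes an energy E_i = -(w_i . x + b_i); the layer's
    output is p_i = exp(-E_i) / Z, with Z the partition function.
    """
    energies = [-(sum(w * xi for w, xi in zip(row, x)) + b)
                for row, b in zip(weights, bias)]
    z = sum(math.exp(-e) for e in energies)  # partition function Z
    return [math.exp(-e) / z for e in energies]

# Toy 2-neuron layer on a 2-dimensional input
p = gibbs_layer([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], [0.2, 0.3])
print(sum(p))  # probabilities sum to 1
```

Stacking such layers, where each layer's distribution conditions the next, is what makes a hierarchical (Bayesian) reading of the whole architecture possible.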

A synthetic dataset for deep learning

no code implementations · 1 Jun 2019 · Xinjie Lan

In this paper, we propose a novel method for generating a synthetic dataset obeying a Gaussian distribution.
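A minimal sketch of drawing such a Gaussian synthetic dataset with the standard library; the function name, parameters, and target moments are illustrative assumptions, not the paper's actual generator:

```python
import random

def synthetic_gaussian_dataset(n, mu=0.0, sigma=1.0, seed=0):
    """Draw n scalar samples from N(mu, sigma^2) with a fixed seed
    so the dataset is reproducible."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

data = synthetic_gaussian_dataset(10_000, mu=2.0, sigma=0.5)
mean = sum(data) / len(data)
print(round(mean, 1))  # sample mean is close to mu = 2.0
```

Fixing the seed matters here: a synthetic benchmark is only useful for controlled experiments if every run sees the same samples.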
