Search Results for author: Kenneth E. Barner

Found 10 papers, 2 papers with code

MMASD: A Multimodal Dataset for Autism Intervention Analysis

1 code implementation · 14 Jun 2023 · Jicheng Li, Vuthea Chheang, Pinar Kullu, Eli Brignac, Zhang Guo, Kenneth E. Barner, Anjana Bhat, Roghayeh Leila Barmaki

This work presents MMASD, a novel privacy-preserving open-source MultiModal ASD benchmark dataset, collected from play therapy interventions for children with autism.

Action Quality Assessment · Optical Flow Estimation · +1

A Probabilistic Representation of Deep Learning for Improving The Information Theoretic Interpretability

no code implementations · 27 Oct 2020 · Xinjie Lan, Kenneth E. Barner

Based on the probabilistic explanations for MLPs, we improve the information-theoretic interpretability of MLPs in three aspects: (i) the random variable of f is discrete and the corresponding entropy is finite; (ii) the information bottleneck theory cannot correctly explain the information flow in MLPs if we take into account the back-propagation; and (iii) we propose novel information-theoretic explanations for the generalization of MLPs.
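Point (i) of the abstract rests on a standard fact: a discrete random variable with K outcomes has entropy at most log K, hence finite. A minimal numerical illustration (not code from the paper; the distributions below are made up):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

K = 8
uniform = np.full(K, 1.0 / K)                          # maximizes entropy
peaked = np.array([0.9] + [0.1 / (K - 1)] * (K - 1))   # concentrated mass

# The entropy of any K-outcome variable is bounded by log K, so it is finite.
assert np.isclose(entropy(uniform), np.log(K))
assert entropy(peaked) < np.log(K)
```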

PAC-Bayesian Generalization Bounds for MultiLayer Perceptrons

no code implementations · 16 Jun 2020 · Xinjie Lan, Xin Guo, Kenneth E. Barner

We study PAC-Bayesian generalization bounds for Multilayer Perceptrons (MLPs) with the cross entropy loss.

Generalization Bounds · Variational Inference
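For context, PAC-Bayesian analyses of this kind typically build on the classical McAllester bound; the paper's specific bounds for MLPs with the cross-entropy loss may differ in form. With prior \(P\), posterior \(Q\), sample size \(n\), and confidence \(1-\delta\), the classical bound reads:

```latex
% Classical McAllester PAC-Bayesian bound (background, not the paper's result):
% with probability at least 1 - \delta over an i.i.d. sample of size n,
L(Q) \;\le\; \hat{L}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

Here \(L(Q)\) is the expected risk and \(\hat{L}(Q)\) the empirical risk of the Gibbs classifier drawn from \(Q\); tightening the KL term is what motivates the variational-inference machinery tagged above.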

Probabilistic modeling the hidden layers of deep neural networks

no code implementations · 25 Sep 2019 · Xinjie Lan, Kenneth E. Barner

Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model.


A Probabilistic Representation of Deep Learning

no code implementations · 26 Aug 2019 · Xinjie Lan, Kenneth E. Barner

In this work, we introduce a novel probabilistic representation of deep learning, which provides an explicit explanation for the Deep Neural Networks (DNNs) in three aspects: (i) neurons define the energy of a Gibbs distribution; (ii) the hidden layers of DNNs formulate Gibbs distributions; and (iii) the whole architecture of DNNs can be interpreted as a Bayesian neural network.
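Point (ii) can be made concrete with a standard identity: a softmax over a layer's pre-activations is exactly a Gibbs (Boltzmann) distribution whose energies are the negated pre-activations. A minimal sketch (illustrative values; not the paper's code):

```python
import numpy as np

def gibbs(energies, temperature=1.0):
    """Gibbs distribution p(k) proportional to exp(-E_k / T)."""
    logits = -np.asarray(energies, dtype=float) / temperature
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def softmax(z):
    z = np.asarray(z, dtype=float) - np.max(z)
    e = np.exp(z)
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5])        # hypothetical pre-activations
p_softmax = softmax(z)
p_gibbs = gibbs(-z)                   # energies E_k = -z_k at temperature 1

# The two distributions coincide: softmax is a Gibbs distribution.
assert np.allclose(p_softmax, p_gibbs)
assert np.isclose(p_softmax.sum(), 1.0)
```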

Smile detection in the wild based on transfer learning

no code implementations · 17 Jan 2018 · Xin Guo, Luisa F. Polanía, Kenneth E. Barner

Compared with the large databases available for face recognition, far less labeled data is available for training smile detection systems.

4k · Face Recognition · +1

Exploiting Restricted Boltzmann Machines and Deep Belief Networks in Compressed Sensing

no code implementations · 30 May 2017 · Luisa F. Polanía, Kenneth E. Barner

This paper proposes a CS scheme that exploits the representational power of restricted Boltzmann machines and deep learning architectures to model the prior distribution of the sparsity pattern of signals belonging to the same class.
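The payoff of a good prior over the sparsity pattern is easy to demonstrate: once the support of a sparse signal is known (here we simply hand it to the solver; in the paper an RBM/deep-belief prior infers it), compressed-sensing recovery collapses to an overdetermined least-squares problem on a few columns of the sensing matrix. An illustrative sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                    # signal length, measurements, sparsity

# Build a k-sparse signal x and take m << n random Gaussian measurements.
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressed measurements

# With the sparsity pattern given (the role a learned prior would play),
# recovery is least squares restricted to the supported columns of A.
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]

assert np.allclose(x_hat, x, atol=1e-6)    # exact recovery from m < n samples
```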
