Search Results for author: Joseph JaJa

Found 7 papers, 0 papers with code

ProtoVAE: Prototypical Networks for Unsupervised Disentanglement

no code implementations · 16 May 2023 · Vaishnavi Patil, Matthew Evanusa, Joseph JaJa

Generative modeling and self-supervised learning have in recent years made great strides towards learning from data in a completely unsupervised way.

Disentanglement · Metric Learning · +1

DOT-VAE: Disentangling One Factor at a Time

no code implementations · 19 Oct 2022 · Vaishnavi Patil, Matthew Evanusa, Joseph JaJa

One promising approach to this endeavour is Disentanglement, which aims to learn the underlying generative latent factors of the data, called the factors of variation, and to encode them in disjoint latent representations.

Disentanglement

FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing

no code implementations · 19 Jul 2022 · Amit Kumar Kundu, Joseph JaJa

Federated learning (FL) is a recently developed area of machine learning, in which the private data of a large number of distributed clients is used to develop a global model under the coordination of a central server without explicitly exposing the data.

Federated Learning
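The abstract describes the standard FL coordination pattern: clients train locally on private data and a central server aggregates their updates. A minimal FedAvg-style sketch of that pattern (not FedNet2Net's model-growing scheme; the least-squares task and all names here are illustrative):

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """Illustrative client step: one gradient step on a least-squares loss.
    Stands in for each client's private training; raw data never leaves."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_w, client_data, rounds=100):
    """Server loop: broadcast the global model, collect client updates,
    and average them into the next global model."""
    for _ in range(rounds):
        updates = [local_update(global_w.copy(), d) for d in client_data]
        global_w = np.mean(updates, axis=0)
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in
           (rng.normal(size=(50, 2)) for _ in range(3))]
w = fed_avg(np.zeros(2), clients)
print(w)  # converges toward true_w without pooling the clients' data
```

FedNet2Net's contribution is orthogonal to this loop: it grows the model during training to cut the communication and computation each round costs.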

Disentangling One Factor at a Time

no code implementations · 29 Sep 2021 · Vaishnavi S Patil, Matthew S Evanusa, Joseph JaJa

While GANs perform well, they are difficult to train and prone to mode collapse; VAEs, in contrast, are stable to train but do not match GANs in terms of interpretability.

Disentanglement · Generative Adversarial Network

Class-Similarity Based Label Smoothing for Confidence Calibration

no code implementations · 24 Jun 2020 · Chihuang Liu, Joseph JaJa

The output of a neural network is a probability distribution whose scores are estimated confidences that the input belongs to the corresponding classes; hence they represent a complete estimate of the output likelihood relative to all classes.

Decision Making · Semantic Similarity · +1
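Class-similarity based smoothing replaces the uniform off-target mass of standard label smoothing with mass distributed by how similar each class is to the true one. A hedged sketch, assuming the smoothing mass is spread in proportion to a given similarity matrix (illustrative, not the paper's exact formulation):

```python
import numpy as np

def similarity_smoothed_targets(labels, sim, eps=0.1):
    """Soft targets: 1 - eps on the true class, with the remaining eps
    spread over the other classes in proportion to their similarity to it.
    `sim` is an (n_classes, n_classes) similarity matrix (assumed given)."""
    n_classes = sim.shape[0]
    targets = np.zeros((len(labels), n_classes))
    for i, y in enumerate(labels):
        w = sim[y].copy()
        w[y] = 0.0                      # smoothing mass goes to other classes
        targets[i] = eps * w / w.sum()  # proportional to similarity
        targets[i, y] = 1.0 - eps
    return targets

# Toy 3-class similarity: class 0 is closer to class 1 than to class 2.
sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
t = similarity_smoothed_targets([0], sim)
print(t)  # [[0.9, 0.08, 0.02]] -- similar classes get more smoothing mass
```

Compared with uniform smoothing, the similar class (1) receives more of the off-target probability than the dissimilar one (2), which is the calibration intuition the title points at.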

Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness

no code implementations · 4 Oct 2018 · Chihuang Liu, Joseph JaJa

We propose a model that employs feature prioritization by a nonlinear attention module and $L_2$ feature regularization to improve the adversarial robustness and the standard accuracy relative to adversarial training.

Adversarial Robustness · Denoising
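The abstract names two ingredients: a nonlinear attention module that prioritizes features, and an $L_2$ penalty on the features themselves. A minimal sketch of both; the score function, shapes, and coefficient are assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prioritize_features(feats, attn_w):
    """Nonlinear attention over feature channels: score each channel with a
    tanh nonlinearity, normalize the scores with softmax, and reweight the
    features. Illustrative stand-in for the paper's attention module."""
    scores = np.tanh(feats @ attn_w)        # nonlinear channel scores
    return feats * softmax(scores, axis=-1)

def l2_feature_penalty(feats, lam=1e-3):
    """L2 regularization on the features (not the weights), added to the
    task loss during (adversarial) training."""
    return lam * np.sum(feats ** 2)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))             # batch of 4, 8 feature channels
attn_w = rng.normal(size=(8, 8))
out = prioritize_features(feats, attn_w)
loss_term = l2_feature_penalty(out)
print(out.shape, loss_term)
```

The attention reweighting concentrates the representation on a few channels, while the feature penalty keeps activations small, the combination the abstract credits for improving both robustness and standard accuracy.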

From Maxout to Channel-Out: Encoding Information on Sparse Pathways

no code implementations · 18 Nov 2013 · Qi Wang, Joseph JaJa

Motivated by an important insight from neuroscience, we propose a new framework for understanding the success of the recently proposed "maxout" networks.

General Classification · Image Classification
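Both activations are simple to state: maxout keeps the maximum over each group of channels, while channel-out keeps that maximum in place and zeroes the rest, so the winning channel's position survives as a sparse pathway. A sketch under those definitions (group size and shapes are illustrative):

```python
import numpy as np

def maxout(x, k):
    """Maxout: split the channel dimension into groups of size k and keep
    the maximum in each group, halving-or-more the channel count."""
    n, c = x.shape
    assert c % k == 0, "channels must divide evenly into groups"
    return x.reshape(n, c // k, k).max(axis=2)

def channel_out(x, k):
    """Channel-out: within each group of k channels, keep the maximal
    activation in place and zero the others, preserving which channel won."""
    n, c = x.shape
    g = x.reshape(n, c // k, k)
    mask = g == g.max(axis=2, keepdims=True)
    return (g * mask).reshape(n, c)

x = np.array([[1.0, -2.0, 3.0,  0.5],
              [0.0,  4.0, -1.0, -3.0]])
print(maxout(x, 2))       # [[1. 3.] [4. -1.]]
print(channel_out(x, 2))  # [[1. 0. 3. 0.] [0. 4. -1. 0.]]
```

The channel-out output has the same width as the input but only one nonzero entry per group, which is the "encoding information on sparse pathways" the title refers to.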
