no code implementations • 16 May 2023 • Vaishnavi Patil, Matthew Evanusa, Joseph JaJa
Generative modeling and self-supervised learning have made great strides in recent years toward learning from data in a completely unsupervised way.
no code implementations • 19 Oct 2022 • Vaishnavi Patil, Matthew Evanusa, Joseph JaJa
One promising direction within this endeavour is disentanglement, which aims to learn the data's underlying generative latent factors, called the factors of variation, and to encode them in disjoint latent representations.
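To make the idea concrete, here is a minimal sketch of one standard disentanglement objective, the beta-VAE loss, in which a KL term weighted by beta > 1 pressures each latent dimension to capture a distinct factor of variation. This is a common baseline, not necessarily the method of this paper, and all names are illustrative.

```python
# Hypothetical sketch: the beta-VAE objective, a standard disentanglement
# baseline (not necessarily this paper's method). Scaling the KL term by
# beta > 1 encourages each latent dimension to encode a distinct factor
# of variation.
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```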
no code implementations • 19 Jul 2022 • Amit Kumar Kundu, Joseph JaJa
Federated learning (FL) is a recently developed area of machine learning in which the private data of a large number of distributed clients is used to train a global model, under the coordination of a central server, without explicitly exposing the data.
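The canonical aggregation rule in this setting is federated averaging (FedAvg): clients train locally and the server averages their model weights, never seeing the raw data. A minimal sketch follows; the paper's actual aggregation scheme may differ, and the names are illustrative.

```python
# Hypothetical sketch of FedAvg, the canonical FL aggregation rule: the
# server averages client model weights (weighted by local dataset size)
# without ever accessing the raw client data.
import torch

def federated_average(client_states, client_sizes):
    """Average client state_dicts into a new global state_dict."""
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```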
no code implementations • 29 Sep 2021 • Vaishnavi S Patil, Matthew S Evanusa, Joseph JaJa
While GANs perform well, they are difficult to train and prone to mode collapse; VAEs, by contrast, are stable to train but do not match GANs in terms of interpretability.
no code implementations • 24 Jun 2020 • Chihuang Liu, Joseph JaJa
The output of a neural network is a probability distribution whose scores are estimated confidences that the input belongs to the corresponding classes; together, they constitute a complete estimate of the output likelihood over all classes.
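A minimal illustration of this view, using a numerically stabilized softmax over made-up logits for three classes: every class receives an explicit confidence score, not just the argmax.

```python
# Softmax turns raw logits into a full probability distribution over
# classes, so each class gets an explicit confidence. Values are made up.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])      # scores for 3 classes
probs = softmax(logits)
print(probs, probs.sum())                # approx. [0.786 0.175 0.039] 1.0
```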
no code implementations • 4 Oct 2018 • Chihuang Liu, Joseph JaJa
We propose a model that employs feature prioritization by a nonlinear attention module and $L_2$ feature regularization to improve the adversarial robustness and the standard accuracy relative to adversarial training.
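A hedged sketch of how such an objective might combine adversarial training with an $L_2$ penalty tying the attention-weighted features of clean and adversarial inputs together. The attention gate and the model interface assumed here are illustrative stand-ins, not the paper's exact architecture.

```python
# Hypothetical sketch: adversarial training plus an L2 penalty on the
# gap between clean and adversarial features. `model` is assumed to
# return (features, logits); the sigmoid gate is a toy stand-in for the
# paper's nonlinear attention module.
import torch
import torch.nn.functional as F

def robust_loss(model, x_clean, x_adv, y, lam=0.1):
    feat_clean, logits_clean = model(x_clean)
    feat_adv, logits_adv = model(x_adv)
    attn = torch.sigmoid(feat_clean)             # toy attention weights
    ce = F.cross_entropy(logits_adv, y)          # adversarial training term
    reg = ((attn * (feat_adv - feat_clean)) ** 2).mean()  # L2 feature reg.
    return ce + lam * reg
```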
no code implementations • 18 Nov 2013 • Qi Wang, Joseph JaJa
Motivated by an important insight from neuroscience, we propose a new framework for understanding the success of the recently proposed "maxout" networks.
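For reference, the maxout activation (Goodfellow et al., 2013) takes the maximum over k learned affine "pieces" per output unit, letting the network learn a piecewise-linear activation function. A minimal sketch, with illustrative names:

```python
# Minimal sketch of the maxout activation: each output unit is the max
# over k affine pieces, grouped contiguously along the output dimension.
import torch

def maxout(x, weight, bias, k):
    """x: (batch, d_in); weight: (d_in, d_out * k); bias: (d_out * k,)."""
    z = x @ weight + bias                 # all affine pieces at once
    batch, dk = z.shape
    z = z.view(batch, dk // k, k)         # group the k pieces per unit
    return z.max(dim=2).values            # max over the k pieces
```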