Search Results for author: Ali Geisa

Found 6 papers, 2 papers with code

Deep discriminative to kernel generative modeling

1 code implementation • 31 Jan 2022 • Jayanta Dey, Will LeVine, Ashwin De Silva, Ali Geisa, Jong M. Shin, Haoyin Xu, Tiffany Chu, Leyla Isik, Joshua T. Vogelstein

Via human experiments, we illustrate that our kernel generative networks (Kragen) behave more like humans than deep discriminative networks.

Out-of-Distribution Detection

Towards a theory of out-of-distribution learning

no code implementations • 29 Sep 2021 • Ali Geisa, Ronak Mehta, Hayden S. Helm, Jayanta Dey, Eric Eaton, Jeffery Dick, Carey E. Priebe, Joshua T. Vogelstein

This assumption renders these theories inadequate for characterizing 21st-century real-world data problems, which are typically characterized by evaluation distributions that differ from the training data distributions (referred to as out-of-distribution learning).

Learning Theory

Inducing a hierarchy for multi-class classification problems

no code implementations • 20 Feb 2021 • Hayden S. Helm, Weiwei Yang, Sujeeth Bharadwaj, Kate Lytvynets, Oriana Riva, Christopher White, Ali Geisa, Carey E. Priebe

In applications where categorical labels follow a natural hierarchy, classification methods that exploit the label structure often outperform those that do not.

Classification · General Classification · +1

A partition-based similarity for classification distributions

no code implementations • 12 Nov 2020 • Hayden S. Helm, Ronak D. Mehta, Brandon Duderstadt, Weiwei Yang, Christopher M. White, Ali Geisa, Joshua T. Vogelstein, Carey E. Priebe

Herein we define a measure of similarity between classification distributions that is both principled from the perspective of statistical pattern recognition and useful from the perspective of machine learning practitioners.

Classification · General Classification · +2

Omnidirectional Transfer for Quasilinear Lifelong Learning

1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe

But striving to avoid forgetting sets the goal unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.

Federated Learning · Transfer Learning
