1 code implementation • 29 Mar 2023 • Qingyang Wang, Michael A. Powell, Ali Geisa, Eric W. Bridgeford, Joshua T. Vogelstein
Natural intelligences (NIs) thrive in a dynamic world: they learn quickly, sometimes with only a few samples.
no code implementations • 5 Aug 2022 • Qingyang Wang, Michael A. Powell, Ali Geisa, Eric W. Bridgeford, Carey E. Priebe, Joshua T. Vogelstein
Why do deep networks have negative weights?
1 code implementation • 31 Jan 2022 • Jayanta Dey, Haoyin Xu, Ashwin De Silva, Will LeVine, Tyler M. Tomita, Ali Geisa, Tiffany Chu, Jacob Desman, Joshua T. Vogelstein
We leveraged the fact that deep models, including both random forests and deep networks, learn internal representations that are unions of polytopes with affine activation functions, allowing us to conceptualize both as generalized partitioning rules.
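To make the polytope picture concrete, here is a minimal sketch (not the paper's code; the toy network, its weights, and the helper names are all made up for illustration) showing that in a ReLU network the pattern of active hidden units indexes a polytope of input space, and that on each such polytope the network reduces to a single affine map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network with arbitrary weights (purely illustrative).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def activation_pattern(x):
    # Which hidden units fire: this binary code identifies the polytope
    # (linear region) of input space that x falls in.
    return tuple((W1 @ x + b1 > 0).astype(int))

def local_affine_map(x):
    # On x's polytope the ReLU gating is fixed, so the network computes
    # a single affine function f(z) = A @ z + c there.
    D = np.diag((W1 @ x + b1 > 0).astype(float))
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = rng.normal(size=2)
A, c = local_affine_map(x)
print("polytope id:   ", activation_pattern(x))
print("network output:", W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)
print("affine output: ", A @ x + c)  # identical within this polytope
```

Enumerating these activation patterns over a dataset is one way to view the network as a partitioning rule: each pattern names a cell of the partition, and the network applies one affine function per cell.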
no code implementations • 29 Sep 2021 • Ali Geisa, Ronak Mehta, Hayden S. Helm, Jayanta Dey, Eric Eaton, Jeffery Dick, Carey E. Priebe, Joshua T. Vogelstein
The assumption that evaluation and training data follow the same distribution renders these theories inadequate for 21st-century real-world data problems, where evaluation distributions typically differ from the training data distributions (a setting referred to as out-of-distribution learning).
no code implementations • 20 Feb 2021 • Hayden S. Helm, Weiwei Yang, Sujeeth Bharadwaj, Kate Lytvynets, Oriana Riva, Christopher White, Ali Geisa, Carey E. Priebe
In applications where categorical labels follow a natural hierarchy, classification methods that exploit the label structure often outperform those that do not.
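As one generic illustration of exploiting label structure (a top-down scheme, not necessarily the method proposed in the paper above; the data and labels here are synthetic), a classifier can first predict a coarse group and then route to a fine-grained classifier trained only within that group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# Hypothetical two-level labels: a coarse group and a fine class within it,
# e.g. "animal" vs "vehicle", then dog/cat vs car/truck.
coarse = (X[:, 0] > 0).astype(int)
fine = 2 * coarse + (X[:, 1] > 0).astype(int)

# Stage 1: predict the coarse group.
top = LogisticRegression().fit(X, coarse)

# Stage 2: one fine-grained classifier per coarse group.
leaves = {g: LogisticRegression().fit(X[coarse == g], fine[coarse == g])
          for g in np.unique(coarse)}

def predict(x):
    g = top.predict(x.reshape(1, -1))[0]  # route by the coarse prediction
    return leaves[g].predict(x.reshape(1, -1))[0]

print(predict(X[0]), fine[0])
```

Each leaf classifier only discriminates among siblings under its coarse group, which is one way the hierarchy can make the fine-grained problem easier than flat multi-class classification.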
no code implementations • 12 Nov 2020 • Hayden S. Helm, Ronak D. Mehta, Brandon Duderstadt, Weiwei Yang, Christopher M. White, Ali Geisa, Joshua T. Vogelstein, Carey E. Priebe
Herein we define a measure of similarity between classification distributions that is both principled from the perspective of statistical pattern recognition and useful from the perspective of machine learning practitioners.
1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe
But striving to avoid forgetting sets the bar unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.