1 code implementation • 31 Jan 2022 • Jayanta Dey, Haoyin Xu, Ashwin De Silva, Will LeVine, Tyler M. Tomita, Ali Geisa, Tiffany Chu, Jacob Desman, Joshua T. Vogelstein
We leveraged the fact that deep models, including both random forests and deep networks, learn internal representations that are unions of polytopes with affine activation functions, and thereby conceptualized both as generalized partitioning rules.
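To make the polytope picture concrete, the sketch below (our own illustration on a toy randomly weighted ReLU network, not a construction from the paper) shows that fixing the on/off pattern of the ReLUs fixes one polytope of the input partition, on which the whole network collapses to a single affine map:

```python
# Toy 2-layer ReLU network with random weights: within each activation
# region (polytope), the network is exactly one affine function of the input.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

def pattern(x):
    # The on/off pattern of the ReLUs indexes one polytope of the partition.
    return (W1 @ x + b1) > 0

x = np.array([0.3, -0.2])
mask = pattern(x)

# With the pattern fixed, the network reduces to a single affine map.
W_eff = W2 @ (W1 * mask[:, None])
b_eff = W2 @ (b1 * mask) + b2

# A nearby point with the same pattern obeys the same affine map.
x2 = x + 1e-6
assert np.all(pattern(x2) == mask)
assert np.allclose(forward(x2), W_eff @ x2 + b_eff)
```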
no code implementations • 19 Jan 2022 • Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler, Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns, Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White, Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik Chaudhari, Joshua T. Vogelstein
We conjecture that certain sequences of tasks are not retrospectively learnable (where the data distribution is assumed fixed) but are prospectively learnable (where distributions may be dynamic), suggesting that prospective learning differs in kind from, and is more difficult than, retrospective learning.
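As a toy illustration of the intuition (ours, not the paper's formalism), consider a task sequence whose labeling rule flips with the task index: no single fixed rule learned from pooled past data can beat chance on average, yet a learner that models the dynamics predicts perfectly.

```python
# Label rule flips each task: a fixed-distribution ("retrospective") rule
# averages to chance, while conditioning on the task index t is perfect.
import numpy as np

rng = np.random.default_rng(0)

def task_data(t, n=200):
    X = rng.normal(size=n)
    y = (X > 0).astype(int) if t % 2 == 0 else (X <= 0).astype(int)
    return X, y

retro_acc, pro_acc = [], []
for t in range(10):
    X, y = task_data(t)
    retro_pred = (X > 0).astype(int)                  # best single fixed rule
    pro_pred = retro_pred if t % 2 == 0 else 1 - retro_pred
    retro_acc.append((retro_pred == y).mean())
    pro_acc.append((pro_pred == y).mean())

print(f"retrospective (pooled rule): {np.mean(retro_acc):.2f}")  # ~0.50
print(f"prospective (models t):      {np.mean(pro_acc):.2f}")    # 1.00
```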
2 code implementations • 16 Oct 2021 • Haoyin Xu, Jayanta Dey, Sambit Panda, Joshua T. Vogelstein
On three standard datasets, we illustrate that our approach, Stream Decision Forest (SDF), does not suffer from either of the aforementioned limitations.
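The published SDF implementation extends existing trees in place as new batches arrive; as a simplified stand-in for that streaming-ensemble mechanic, the sketch below just fits one fresh tree per incoming batch and predicts by majority vote over all trees so far.

```python
# Simplified stand-in for a streaming forest: one new tree per data batch,
# majority vote at prediction time. Assumes integer class labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ToyStreamingForest:
    def __init__(self):
        self.trees = []

    def partial_fit(self, X, y):
        # Each incoming batch grows the ensemble by one tree.
        self.trees.append(DecisionTreeClassifier().fit(X, y))

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Usage: feed batches as they arrive from the stream.
rng = np.random.default_rng(0)
forest = ToyStreamingForest()
for _ in range(5):
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    forest.partial_fit(X, y)
print(forest.predict(rng.normal(size=(3, 4))))
```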
no code implementations • 29 Sep 2021 • Ali Geisa, Ronak Mehta, Hayden S. Helm, Jayanta Dey, Eric Eaton, Jeffery Dick, Carey E. Priebe, Joshua T. Vogelstein
This assumption renders these theories inadequate for 21$^{st}$-century real-world data problems, in which the evaluation distribution typically differs from the training distribution (referred to as out-of-distribution learning).
2 code implementations • 31 Aug 2021 • Haoyin Xu, Kaleab A. Kinfu, Will LeVine, Sambit Panda, Jayanta Dey, Michael Ainsworth, Yu-Chung Peng, Madi Kusmanov, Florian Engert, Christopher M. White, Joshua T. Vogelstein, Carey E. Priebe
Empirically, we compare these two strategies on hundreds of tabular data settings, as well as several vision and auditory settings.
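A minimal sketch of the mechanics of one such tabular comparison, assuming the two strategies are decision forests and deep networks (the paper's benchmark spans hundreds of settings; this runs a single illustrative one):

```python
# Fit a random forest and a small MLP on one tabular dataset and compare
# held-out accuracy; scaling is applied only to the network, as is customary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 64),
                                  max_iter=1000, random_state=0))

for name, model in [("forest", forest), ("network", net)]:
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```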
1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe
But striving merely to avoid forgetting sets the bar unnecessarily low: lifelong learning, whether biological or artificial, should aim to improve performance on all tasks, past and future alike, with any new data.
no code implementations • 2 Dec 2019 • Jayanta Dey, Md. Kamrul Hasan
In this paper, we propose a multiframe-based adaptive despeckling (MADS) algorithm, based on a multiple-input single-output (MISO) model, that reconstructs a high-resolution B-mode image from raw radio-frequency (RF) data.
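For orientation only, the sketch below is a naive multiframe compounding baseline, not the proposed MADS algorithm: it envelope-detects each RF frame, averages across frames to suppress speckle, then log-compresses for B-mode display.

```python
# Naive multiframe speckle reduction: envelope detection per frame,
# averaging across frames, then log compression to a fixed dynamic range.
import numpy as np
from scipy.signal import hilbert

def naive_bmode(rf_frames, dynamic_range_db=60):
    """rf_frames: array of shape (n_frames, n_samples, n_lines)."""
    envelopes = np.abs(hilbert(rf_frames, axis=1))   # per-frame envelope
    compounded = envelopes.mean(axis=0)              # multiframe averaging
    compounded /= compounded.max()
    bmode = 20 * np.log10(compounded + 1e-12)        # log compression
    return np.clip(bmode, -dynamic_range_db, 0)

# Usage with synthetic noise standing in for real RF frames.
rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 1024, 128))
print(naive_bmode(frames).shape)
```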
no code implementations • 1 Dec 2019 • Jayanta Dey, Mohammad Ariful Haque
Thus, we start from the problem of classifying grids with different nominal frequencies and simplify the classification into three stages: first by nominal frequency, then with an SVM, and finally with a PM classifier.
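A hedged sketch of that staged structure, with a hypothetical feature representation and the final PM stage omitted since the excerpt does not specify it: recordings are first routed by nominal grid frequency (50 vs. 60 Hz), then classified within each group by a separate SVM.

```python
# Two-stage cascade: a simple nominal-frequency rule routes each trace to
# one of two per-frequency SVMs. Feature extraction here is a stand-in.
import numpy as np
from sklearn.svm import SVC

def nominal_frequency(trace):
    # Stage 1: which nominal value is the trace's mean frequency nearer to?
    return 50 if abs(trace.mean() - 50) < abs(trace.mean() - 60) else 60

class StagedGridClassifier:
    def __init__(self):
        self.svms = {50: SVC(), 60: SVC()}  # Stage 2: one SVM per group

    def fit(self, traces, grids):
        for f in (50, 60):
            idx = [i for i, t in enumerate(traces) if nominal_frequency(t) == f]
            if idx:
                self.svms[f].fit([traces[i] for i in idx],
                                 [grids[i] for i in idx])
        return self

    def predict(self, traces):
        return [self.svms[nominal_frequency(t)].predict([t])[0] for t in traces]

# Usage with synthetic frequency traces around the two nominal values.
rng = np.random.default_rng(0)
traces = [rng.normal(f, 0.02, size=100) for f in (50, 50, 60, 60)]
grids = ["EU-A", "EU-B", "US-A", "US-B"]
model = StagedGridClassifier().fit(traces, grids)
print(model.predict(traces[:2]))
```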
no code implementations • 5 Feb 2019 • Jayanta Dey, Md Sanzid Bin Hossain, Mohammad Ariful Haque
Therefore, a voice activity detector (VAD) with low complexity and high accuracy is highly desirable for speech processing applications.
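As a reference point for the low-complexity end of that trade-off, here is a textbook short-term-energy VAD (a classic baseline, not the detector proposed in the paper):

```python
# Classic energy-based VAD: frame the signal, compute per-frame energy,
# and mark a frame as speech if it exceeds a fraction of the peak energy.
import numpy as np

def energy_vad(signal, frame_len=400, hop=160, threshold_ratio=0.1):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    energies = np.array([np.mean(f ** 2) for f in frames])
    return energies > threshold_ratio * energies.max()

# Usage: a burst of "speech" (high amplitude) embedded in low-level noise.
rng = np.random.default_rng(0)
audio = np.concatenate([0.01 * rng.normal(size=8000),
                        rng.normal(size=8000),
                        0.01 * rng.normal(size=8000)])
print(energy_vad(audio).astype(int))
```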