no code implementations • 3 Aug 2023 • Zachary A. Daniels, Jun Hu, Michael Lomnitz, Phil Miller, Aswin Raghavan, Joe Zhang, Michael Piacentino, David Zhang
This paper presents the Encoder-Adaptor-Reconfigurator (EAR) framework for efficient continual learning under domain shifts.
no code implementations • 17 Aug 2022 • Michael Lomnitz, Zachary Daniels, David Zhang, Michael Piacentino
To enable learning on edge devices with fast convergence and low memory, we present a novel backpropagation-free optimization algorithm dubbed Target Projection Stochastic Gradient Descent (tpSGD).
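The entry does not spell out how tpSGD works, but the general family of target-projection methods replaces backpropagated errors with local signals derived from a fixed random projection of the label. Below is a minimal NumPy sketch of that general idea (layer sizes, learning rate, and the exact update rule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
B1 = rng.normal(0, 0.5, (n_out, n_hid))  # fixed random target projection (never trained)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def step(x, y_onehot, lr=0.05):
    """One update using only local signals -- no gradient flows back through W2."""
    global W1, W2
    h = relu(x @ W1)
    p = softmax(h @ W2)
    e = p - y_onehot                      # output-layer error
    # Hidden layer gets its teaching signal by projecting the TARGET through
    # the fixed random matrix B1, instead of backpropagating through W2.
    delta_h = (y_onehot @ B1) * (h > 0)
    W1 += lr * x.T @ delta_h / len(x)     # local, backprop-free update
    W2 -= lr * h.T @ e / len(x)           # standard delta rule at the output
    return -np.mean(np.log(p[np.arange(len(x)), y_onehot.argmax(1)] + 1e-12))

# Toy separable 3-class problem
means = rng.normal(0, 2.0, (n_out, n_in))
labels = rng.integers(0, n_out, 120)
X = means[labels] + rng.normal(0, 0.5, (120, n_in))
Y = np.eye(n_out)[labels]
losses = [step(X, Y) for _ in range(300)]
print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because no error is propagated through the network, each layer's update needs only its own activations and the projected target, which is what makes this style of training attractive for low-memory edge devices.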
no code implementations • 10 Jun 2022 • Indhumathi Kandaswamy, Saurabh Farkya, Zachary Daniels, Gooitzen van der Wal, Aswin Raghavan, Yuzheng Zhang, Jun Hu, Michael Lomnitz, Michael Isnardi, David Zhang, Michael Piacentino
In this paper we present Hyper-Dimensional Reconfigurable Analytics at the Tactical Edge (HyDRATE), which uses low-SWaP embedded hardware to perform real-time reconfiguration at the edge by combining non-MAC deep neural networks (DNNs, free of floating-point multiply-accumulate operations) with hyperdimensional (HD) computing accelerators.
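The abstract assumes familiarity with HD computing. A minimal sketch of the standard HD-classification recipe (not HyDRATE's hardware design): encode inputs into high-dimensional bipolar hypervectors with a fixed random projection, bundle (sum) them into per-class prototypes, and classify by similarity; all sizes and distributions here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                 # hypervector dimensionality
n_feat, n_cls = 16, 3

# Fixed random projection used as the HD encoder (produces bipolar codes)
proj = rng.choice([-1.0, 1.0], size=(n_feat, D))

def encode(x):
    return np.sign(x @ proj)            # {-1, +1} hypervector

# Toy separable data
means = rng.normal(0, 1.0, (n_cls, n_feat))
labels = np.repeat(np.arange(n_cls), 50)
X = means[labels] + rng.normal(0, 0.3, (150, n_feat))

# "Training" = bundle encoded samples into one prototype per class
protos = np.zeros((n_cls, D))
for xi, yi in zip(X, labels):
    protos[yi] += encode(xi)

def classify(x):
    sims = protos @ encode(x)           # similarity to each class prototype
    return int(np.argmax(sims))

acc = np.mean([classify(xi) == yi for xi, yi in zip(X, labels)])
print(f"train accuracy: {acc:.2f}")
```

Note that both encoding and classification reduce to sign flips, additions, and comparisons, which is why HD computing pairs naturally with the non-MAC hardware the entry describes.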
no code implementations • 3 Sep 2020 • Michael Lomnitz, Zigfried Hampel-Arias, Nina Lopatina, Felipe A. Mejia
Employing machine learning models in the real world requires large amounts of data, which is both time-consuming and costly to collect.
no code implementations • 31 Oct 2019 • Michael Lomnitz, Nina Lopatina, Paul Gamble, Zigfried Hampel-Arias, Lucas Tindall, Felipe A. Mejia, Maria Alejandra Barrios
It is critical to understand the privacy and robustness vulnerabilities of machine learning models, as their implementation expands in scope.
no code implementations • 15 Jun 2019 • Felipe A. Mejia, Paul Gamble, Zigfried Hampel-Arias, Michael Lomnitz, Nina Lopatina, Lucas Tindall, Maria Alejandra Barrios
Adversarial training was introduced as a way to improve the robustness of deep learning models to adversarial attacks.
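For readers unfamiliar with the technique, adversarial training in its simplest form perturbs each training input by a small signed-gradient step (FGSM, Goodfellow et al.) and trains on the perturbed batch. A minimal logistic-regression sketch of that general recipe (not this paper's specific models or threat model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 10, 0.1, 0.5

# Linearly separable toy data
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # per-sample d(loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)    # FGSM: one signed-gradient step
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / n  # descend the loss on the adversarial batch

# Evaluate on clean inputs and on FGSM-perturbed inputs
grad_x = np.outer(sigmoid(X @ w) - y, w)
X_adv = X + eps * np.sign(grad_x)
clean_acc = np.mean((X @ w > 0) == (y == 1))
adv_acc = np.mean((X_adv @ w > 0) == (y == 1))
print(f"clean acc {clean_acc:.2f}, adversarial acc {adv_acc:.2f}")
```

The gap between clean and adversarial accuracy is exactly the robustness trade-off these papers study: training on perturbed inputs widens the decision margin at some cost on clean data.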