no code implementations • 12 Sep 2024 • Le Zhang, Onat Gungor, Flavio Ponzina, Tajana Rosing
Ensemble learning is a meta-learning approach that combines the predictions of multiple learners, demonstrating improved accuracy and robustness.
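A minimal sketch of the general idea, soft-voting over several base learners; the `predict_proba` interface is an illustrative assumption (scikit-learn style), not the paper's API:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class-probability outputs of several base learners.

    `models` is any iterable of objects exposing a scikit-learn style
    predict_proba(x); combining predictions this way typically reduces
    variance relative to any single learner.
    """
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return probs.argmax(axis=1)  # decision via averaged probabilities
```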
no code implementations • 24 Mar 2024 • Flavio Ponzina, Tajana Rosing
Hyperdimensional computing (HDC) is emerging as a promising AI approach that can effectively target TinyML applications thanks to its lightweight computing and memory requirements.
1 code implementation • 7 Mar 2024 • Xiaofan Yu, Anthony Thomas, Ivannia Gomez Moreno, Louis Gutierrez, Tajana Rosing
On-device learning has emerged as a prevailing trend that avoids the slow response time and costly communication of cloud-based learning.
no code implementations • 26 Dec 2023 • Kazim Ergun, Rishikanth Chandrasekaran, Tajana Rosing
The strategies we propose to improve communication efficiency enable our design to reduce communication costs by 66$\times$ compared to DNNs, and local client compute and energy consumption by ~1.5-6$\times$, while remaining highly robust to network errors.
no code implementations • 20 Nov 2023 • Sumukh Pinge, Weihong Xu, Jaeyoung Kang, Tianqi Zhang, Neima Moshiri, Wout Bittremieux, Tajana Rosing
This approach markedly improves clustering speed and efficiency, serving as a catalyst for real-time, high-throughput data analysis in future healthcare applications.
no code implementations • 12 May 2023 • Gopi Krishna Jha, Anthony Thomas, Nilesh Jain, Sameh Gobriel, Tajana Rosing, Ravi Iyer
Deep learning-based recommendation systems (e.g., DLRMs) are widely used AI models to provide high-quality personalized recommendations.
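For context, a hedged, toy sketch of the DLRM-style architecture the abstract refers to: embedding tables for sparse (categorical) features, an MLP for dense features, and a top MLP over their concatenation. All sizes and layer choices here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Minimal DLRM-style model; sizes are illustrative placeholders."""
    def __init__(self, num_embeddings=1000, dim=16, num_sparse=3, num_dense=4):
        super().__init__()
        self.tables = nn.ModuleList(
            nn.Embedding(num_embeddings, dim) for _ in range(num_sparse))
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        self.top = nn.Linear(dim * (num_sparse + 1), 1)

    def forward(self, dense, sparse):  # sparse: (batch, num_sparse) int ids
        feats = [t(sparse[:, i]) for i, t in enumerate(self.tables)]
        feats.append(self.bottom(dense))           # (batch, dim) each
        return torch.sigmoid(self.top(torch.cat(feats, dim=1)))
```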
no code implementations • 23 Jan 2023 • Onat Gungor, Tajana Rosing, Baris Aksanli
The results show that our double defense strategy is highly effective, improving model robustness by up to 64.6% and 52% compared to standard and adversarial retraining, respectively.
1 code implementation • 17 Jan 2023 • Xiaofan Yu, Ludmila Cherkasova, Harsh Vardhan, Quanling Zhao, Emily Ekaireb, Xiyuan Zhang, Arya Mazumdar, Tajana Rosing
To fully unleash the potential of Async-HFL in converging speed under system heterogeneities and stragglers, we design device selection at the gateway level and device-gateway association at the cloud level.
no code implementations • 20 Sep 2022 • Anthony Thomas, Behnam Khaleghi, Gopi Krishna Jha, Sanjoy Dasgupta, Nageen Himayat, Ravi Iyer, Nilesh Jain, Tajana Rosing
Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.
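A hedged sketch of the standard HDC recipe (random bipolar hypervectors, binding by elementwise multiplication, bundling by summation); the codebook setup below is illustrative, not the paper's exact encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                    # hypervector dimensionality

def random_hv():
    return rng.choice([-1, 1], size=D)        # random bipolar hypervector

# Codebooks for feature positions and quantized values (illustrative).
pos_hvs = [random_hv() for _ in range(8)]
val_hvs = {v: random_hv() for v in range(16)}

def encode(sample):
    """Bind each feature value to its position, then bundle by summation."""
    bound = [pos_hvs[i] * val_hvs[v] for i, v in enumerate(sample)]
    return np.sign(np.sum(bound, axis=0))     # bundled, re-binarized encoding

def similarity(a, b):
    return a @ b / D                          # normalized dot product
```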
1 code implementation • 24 Aug 2022 • Xiaofan Yu, Yunhui Guo, Sicun Gao, Tajana Rosing
To address the challenges, we propose Self-Supervised ContrAstive Lifelong LEarning without Prior Knowledge (SCALE) which can extract and memorize representations on the fly purely from the data continuum.
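SCALE's precise objective is not given in this excerpt; as a point of reference, a standard NT-Xent contrastive loss over two augmented views of the same batch looks like this:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Generic NT-Xent contrastive loss; SCALE's exact loss may differ.

    z1, z2: (batch, dim) embeddings of two views of the same samples.
    """
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, dim)
    sim = z @ z.t() / tau                          # pairwise similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)           # pull views together
```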
no code implementations • 14 Mar 2022 • Onat Gungor, Tajana Rosing, Baris Aksanli
Hyper-dimensional computing (HDC) is a brain-inspired machine learning method that has been shown to be sufficiently accurate while being extremely robust, fast, and energy-efficient.
no code implementations • 14 Oct 2020 • Anthony Thomas, Sanjoy Dasgupta, Tajana Rosing
Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data.
no code implementations • 20 Jul 2020 • Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad, Yeseong Kim, Tajana Rosing
In this paper, we propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
no code implementations • 14 May 2020 • Behnam Khaleghi, Mohsen Imani, Tajana Rosing
In this paper, we target privacy-preserving training and inference of brain-inspired Hyperdimensional (HD) computing, a new learning algorithm that is gaining traction due to its lightweight computation and robustness, which are particularly appealing for edge devices with tight constraints.
no code implementations • 5 Feb 2020 • Sahand Salamat, Tajana Rosing
In this survey, we introduce three main DNA alignment algorithms and FPGA-based implementations of these algorithms that accelerate DNA alignment.
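The excerpt does not name the three algorithms; as one canonical example of the dynamic-programming alignment kernels such surveys cover, here is a minimal Smith-Waterman local-alignment score:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Classic Smith-Waterman local alignment score via dynamic programming.
    One canonical DNA alignment algorithm, shown for illustration."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCA"))  # local-alignment score
```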
no code implementations • ICLR 2020 • Yunhui Guo, Mingrui Liu, Yandong Li, Liqiang Wang, Tianbao Yang, Tajana Rosing
We evaluate the effectiveness of traditional attack methods such as FGSM and PGD. The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory, and that simple defense techniques such as label smoothing can further alleviate the adversarial effects.
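A minimal sketch of the two ingredients named above, a one-step FGSM attack and label smoothing as the defense; this is generic illustration code, not the paper's evaluation harness:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: perturb inputs one step along the
    sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Label smoothing, the simple defense mentioned above, is built into
# modern PyTorch's cross-entropy:
# loss = F.cross_entropy(logits, y, label_smoothing=0.1)
```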
2 code implementations • ECCV 2020 • Yunhui Guo, Noel C. Codella, Leonid Karlinsky, James V. Codella, John R. Smith, Kate Saenko, Tajana Rosing, Rogerio Feris
Extensive experiments on the proposed benchmark are performed to evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.
no code implementations • 21 Nov 2019 • Yunhui Guo, Yandong Li, Liqiang Wang, Tajana Rosing
Fine-tuning is a popular transfer learning technique for deep neural networks where a few rounds of training are applied to the parameters of a pre-trained model to adapt them to a new task.
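A common concrete instance of this recipe is to freeze a pre-trained backbone and retrain a new head; the paper studies fine-tuning more broadly, so treat this as one illustrative variant:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights, replace the classification head,
# and train only the new parameters (one common fine-tuning recipe).
net = models.resnet18(weights="IMAGENET1K_V1")
for p in net.parameters():
    p.requires_grad = False                  # freeze pre-trained backbone
net.fc = nn.Linear(net.fc.in_features, 10)   # new head for the target task
optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)
```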
no code implementations • 25 Sep 2019 • Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing
In this paper, we introduce a novel and effective lifelong learning algorithm, called MixEd stochastic GrAdient (MEGA), which allows deep neural networks to acquire the ability of retaining performance on old tasks while learning new tasks.
1 code implementation • NeurIPS 2020 • Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing
This view leads to two improved schemes for episodic memory based lifelong learning, called MEGA-I and MEGA-II.
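A heavily hedged sketch of the underlying idea, mixing the current-task gradient with a gradient computed on episodic-memory samples; MEGA-I and MEGA-II set the mixing weights adaptively from the two losses, whereas the fixed `alpha`/`beta` below are placeholders, not the paper's rule:

```python
import torch

def mixed_gradient_step(model, loss_new, loss_ref, alpha=1.0, beta=1.0):
    """Combine the current-task gradient with the episodic-memory
    gradient before the optimizer step. Fixed weights are placeholders;
    MEGA derives them adaptively."""
    g_new = torch.autograd.grad(loss_new, model.parameters(),
                                retain_graph=True)
    g_ref = torch.autograd.grad(loss_ref, model.parameters())
    for p, gn, gr in zip(model.parameters(), g_new, g_ref):
        p.grad = alpha * gn + beta * gr   # mixed update direction
    # ...followed by optimizer.step() in the training loop.
```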
1 code implementation • 3 Feb 2019 • Yunhui Guo, Yandong Li, Rogerio Feris, Liqiang Wang, Tajana Rosing
A model aware of the relationships between different domains can also be trained to work on new domains with less resources.
3 code implementations • CVPR 2019 • Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, Rogerio Feris
Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision.
no code implementations • 15 Jun 2018 • Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing
To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling.
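To make the decomposition concrete, here is a software sketch composing the four primitive operations the abstract lists; the actual RAPIDNN design realizes these as in-memory lookup blocks in hardware, which this NumPy illustration does not model:

```python
import numpy as np

# The four primitive operations named in the abstract, composed in
# software to show how a neuron reduces to these kernels.
multiply = np.multiply
add      = np.add
relu     = lambda x: np.maximum(x, 0)              # activation function
pool     = lambda x, k=2: x.reshape(-1, k).max(1)  # 1-D max pooling

def neuron(weights, inputs):
    # multiply-accumulate followed by an activation
    return relu(add.reduce(multiply(weights, inputs)))

x = np.array([0.5, -1.0, 2.0, 0.25])
w = np.array([1.0, 0.5, -0.25, 2.0])
print(neuron(w, x), pool(np.array([1.0, 3.0, 2.0, 0.0])))
```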