Search Results for author: Tajana Rosing

Found 18 papers, 6 papers with code

Mem-Rec: Memory Efficient Recommendation System using Alternative Representation

no code implementations 12 May 2023 Gopi Krishna Jha, Anthony Thomas, Nilesh Jain, Sameh Gobriel, Tajana Rosing, Ravi Iyer

Deep learning-based recommendation systems (e.g., DLRMs) are AI models widely used to provide high-quality personalized recommendations.

Recommendation Systems

DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics

no code implementations 23 Jan 2023 Onat Gungor, Tajana Rosing, Baris Aksanli

The results show that our double defense strategy is highly effective, improving model robustness by up to 64.6% and 52% compared to standard and adversarial retraining, respectively.

Adversarial Attack

Async-HFL: Efficient and Robust Asynchronous Federated Learning in Hierarchical IoT Networks

1 code implementation 17 Jan 2023 Xiaofan Yu, Ludmila Cherkasova, Harsh Vardhan, Quanling Zhao, Emily Ekaireb, Xiyuan Zhang, Arya Mazumdar, Tajana Rosing

To fully unleash the potential of Async-HFL in convergence speed under system heterogeneities and stragglers, we design device selection at the gateway level and device-gateway association at the cloud level.

Federated Learning

Streaming Encoding Algorithms for Scalable Hyperdimensional Computing

no code implementations 20 Sep 2022 Anthony Thomas, Behnam Khaleghi, Gopi Krishna Jha, Sanjoy Dasgupta, Nageen Himayat, Ravi Iyer, Nilesh Jain, Tajana Rosing

Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.

SCALE: Online Self-Supervised Lifelong Learning without Prior Knowledge

1 code implementation 24 Aug 2022 Xiaofan Yu, Yunhui Guo, Sicun Gao, Tajana Rosing

To address the challenges, we propose Self-Supervised ContrAstive Lifelong LEarning without Prior Knowledge (SCALE) which can extract and memorize representations on the fly purely from the data continuum.
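SCALE's exact objective is not given in this excerpt; as a hedged illustration of the contrastive objectives that self-supervised methods of this kind build on, here is a generic NT-Xent-style loss in NumPy (the function name, temperature, and details are illustrative, not SCALE's implementation):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Generic NT-Xent contrastive loss over two batches of embeddings.

    z1, z2: (N, d) arrays of embeddings; z1[i] and z2[i] are two augmented
    views of the same sample (a positive pair), all other rows are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # index of the positive partner for each row: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Aligned view pairs should score a lower loss than randomly paired batches, which is the signal such methods learn from.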

RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial Attacks Using Hyper-Dimensional Computing

no code implementations 14 Mar 2022 Onat Gungor, Tajana Rosing, Baris Aksanli

Hyper-dimensional computing (HDC) is a brain-inspired machine learning method that has been shown to be sufficiently accurate while being extremely robust, fast, and energy-efficient.

BIG-bench Machine Learning

A Theoretical Perspective on Hyperdimensional Computing

no code implementations 14 Oct 2020 Anthony Thomas, Sanjoy Dasgupta, Tajana Rosing

Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data.
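The core idea — high-dimensional, low-precision, distributed representations — can be sketched with bipolar hypervectors, bundling by elementwise majority, and dot-product similarity. This is a minimal illustration of generic HD computing, not the paper's formal framework; the prototype/noise setup is invented for the demo:

```python
import numpy as np

D = 10_000  # hypervector dimensionality
rng = np.random.default_rng(42)

def random_hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Superpose hypervectors by elementwise majority (sum, then sign)."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product in [-1, 1]."""
    return a @ b / D

def noisy(hv, flips=1000):
    """Corrupt a hypervector by flipping `flips` random coordinates."""
    idx = rng.choice(D, size=flips, replace=False)
    out = hv.copy()
    out[idx] *= -1
    return out

# Build two class prototypes as bundles of noisy samples, then classify a
# noisy query by nearest prototype -- HD learning in miniature.
proto_a, proto_b = random_hv(), random_hv()
class_a = bundle([noisy(proto_a) for _ in range(5)])
class_b = bundle([noisy(proto_b) for _ in range(5)])
query = noisy(proto_a)
pred = "A" if similarity(query, class_a) > similarity(query, class_b) else "B"
```

Because random hypervectors are nearly orthogonal in high dimensions, the query stays far closer to its own class bundle than to the other, even after heavy noise — the robustness the abstract alludes to.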

Prive-HD: Privacy-Preserved Hyperdimensional Computing

no code implementations 14 May 2020 Behnam Khaleghi, Mohsen Imani, Tajana Rosing

In this paper, we target privacy-preserving training and inference for brain-inspired Hyperdimensional (HD) computing, a new learning algorithm gaining traction for its lightweight computation and robustness, which are particularly appealing for edge devices with tight constraints.

Privacy Preserving · Quantization

FPGA Acceleration of Sequence Alignment: A Survey

no code implementations 5 Feb 2020 Sahand Salamat, Tajana Rosing

In this survey, we introduce three main DNA alignment algorithms and the FPGA-based implementations that accelerate them.
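The excerpt does not name the three algorithms, but Smith-Waterman local alignment is a canonical dynamic-programming kernel that FPGA aligners accelerate, so it serves as a hedged illustration. A scoring-only sketch (no traceback; the match/mismatch/gap parameters are illustrative):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment: best local alignment score of a vs b.

    Fills an (len(a)+1) x (len(b)+1) scoring matrix H where H[i][j] is the
    best score of any local alignment ending at a[i-1], b[j-1]; scores are
    clamped at 0 so alignments can restart anywhere (that is what makes the
    alignment 'local').
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The inner max over three predecessors per cell is exactly the systolic recurrence that FPGA designs parallelize along the matrix anti-diagonals.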

Attacking Lifelong Learning Models with Gradient Reversion

no code implementations ICLR 2020 Yunhui Guo, Mingrui Liu, Yandong Li, Liqiang Wang, Tianbao Yang, Tajana Rosing

We evaluate the effectiveness of traditional attack methods such as FGSM and PGD. The results show that A-GEM still possesses strong continual learning ability in the presence of adversarial examples in the memory and simple defense techniques such as label smoothing can further alleviate the adversarial effects.

Continual Learning
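Label smoothing, cited above as a simple defense, replaces hard one-hot targets with softened ones so the model is never pushed to fully saturated confidence. A minimal NumPy sketch (function names and the eps value are illustrative, not the paper's code):

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Convert integer labels to smoothed one-hot targets: the true class
    keeps 1 - eps of the mass, and eps is spread uniformly over all classes."""
    onehot = np.eye(num_classes)[y]
    return onehot * (1.0 - eps) + eps / num_classes

def cross_entropy(probs, targets):
    """Mean cross-entropy between predicted probabilities and (soft) targets."""
    return -(targets * np.log(probs + 1e-12)).sum(axis=1).mean()
```

Training against `smooth_labels(y, K)` instead of the raw one-hot `y` is the entire change; the softer targets are what dampen the adversarial effect the abstract reports.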

A Broader Study of Cross-Domain Few-Shot Learning

2 code implementations ECCV 2020 Yunhui Guo, Noel C. Codella, Leonid Karlinsky, James V. Codella, John R. Smith, Kate Saenko, Tajana Rosing, Rogerio Feris

Extensive experiments on the proposed benchmark are performed to evaluate state-of-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.

Cross-Domain Few-Shot Learning · Few-Shot Image Classification +1

AdaFilter: Adaptive Filter Fine-tuning for Deep Transfer Learning

no code implementations 21 Nov 2019 Yunhui Guo, Yandong Li, Liqiang Wang, Tajana Rosing

Fine-tuning is a popular transfer learning technique for deep neural networks where a few rounds of training are applied to the parameters of a pre-trained model to adapt them to a new task.

General Classification · Image Classification +1

Learning with Long-term Remembering: Following the Lead of Mixed Stochastic Gradient

no code implementations 25 Sep 2019 Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing

In this paper, we introduce a novel and effective lifelong learning algorithm, called MixEd stochastic GrAdient (MEGA), which allows deep neural networks to retain performance on old tasks while learning new ones.

Improved Schemes for Episodic Memory-based Lifelong Learning

1 code implementation NeurIPS 2020 Yunhui Guo, Mingrui Liu, Tianbao Yang, Tajana Rosing

This view leads to two improved schemes for episodic memory based lifelong learning, called MEGA-I and MEGA-II.

Depthwise Convolution is All You Need for Learning Multiple Visual Domains

1 code implementation 3 Feb 2019 Yunhui Guo, Yandong Li, Rogerio Feris, Liqiang Wang, Tajana Rosing

A model aware of the relationships between different domains can also be trained to work on new domains with fewer resources.

Continual Learning
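A depthwise convolution applies one spatial filter per channel and never mixes channels, which is what makes it cheap enough to share across visual domains. A naive NumPy sketch for intuition (valid padding, stride 1; loop-based for clarity, not speed, and not the paper's implementation):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise 2-D convolution (valid padding, stride 1).

    x:       (C, H, W) input feature map
    kernels: (C, k, k) one spatial filter per channel; channel c of the
             output depends only on channel c of the input.
    """
    C, H, W = x.shape
    _, k, _ = kernels.shape
    out = np.zeros((C, H - k + 1, W - k + 1))
    for c in range(C):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * kernels[c])
    return out
```

Compared with a standard convolution's C_in x C_out x k x k weights, a depthwise layer needs only C x k x k, which is why it suits multi-domain sharing with a small per-domain overhead.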

SpotTune: Transfer Learning through Adaptive Fine-tuning

3 code implementations CVPR 2019 Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, Rogerio Feris

Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision.

Inductive Bias Transfer Learning

RAPIDNN: In-Memory Deep Neural Network Acceleration Framework

no code implementations 15 Jun 2018 Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing

To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling.

Speech Recognition +2
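The four primitives the abstract names — multiplication, addition, activation functions, and pooling — can be illustrated functionally in NumPy. This is only a sketch of the operations a layer decomposes into, not the paper's in-memory accelerator design:

```python
import numpy as np

def mac(x, w):
    """Multiplication + addition: a multiply-accumulate (dot product)."""
    return np.sum(x * w)

def relu(x):
    """Activation function (ReLU as a representative example)."""
    return np.maximum(x, 0.0)

def max_pool1d(x, size=2):
    """Pooling: non-overlapping max pooling over a 1-D signal."""
    return x[: len(x) // size * size].reshape(-1, size).max(axis=1)

def dense(x, W, b):
    """A tiny dense layer composed only of the primitives above."""
    return relu(np.array([mac(x, w) for w in W]) + b)
```

Any layer a framework expresses this way can, in principle, be lowered onto hardware blocks that each implement one primitive — the mapping step the abstract describes.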
