Search Results for author: Ramana Rao Kompella

Found 14 papers, 8 papers with code

ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction

no code implementations17 Mar 2025 Tong Zhou, Shijin Duan, Gaowen Liu, Charles Fleming, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu

Pre-trained models are valuable intellectual property, capturing both domain-specific and domain-invariant features within their weight spaces.

Model extraction

Towards Vector Optimization on Low-Dimensional Vector Symbolic Architecture

1 code implementation19 Feb 2025 Shijin Duan, Yejia Liu, Gaowen Liu, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu

Vector Symbolic Architecture (VSA) is emerging in machine learning due to its efficiency, but it is hindered by issues of hyperdimensionality and accuracy.

Knowledge Distillation
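
As background, a minimal sketch of the classical hyperdimensional operations this line of work builds on: random bipolar hypervectors, binding by elementwise multiplication, and bundling by majority vote. The dimension D and the toy key-value record are illustrative assumptions, not the paper's configuration; the paper's point is precisely that such operations become difficult at much lower dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # classical VSA uses very high dimensions; the paper targets much lower ones

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication; associates two symbols."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority vote; superposes several symbols."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product; near 0 for unrelated hypervectors."""
    return float(a @ b) / D

# Encode a tiny record {color: red, shape: square} and query it back.
color, shape = random_hv(), random_hv()
red, square = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with the 'color' key recovers something close to 'red'.
print(similarity(bind(record, color), red))     # high (~0.5)
print(similarity(bind(record, color), square))  # near 0
```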

LightPure: Realtime Adversarial Image Purification for Mobile Devices Using Diffusion Models

1 code implementation31 Aug 2024 Hossein Khalili, Seongbin Park, Vincent Li, Brandan Bright, Ali Payani, Ramana Rao Kompella, Nader Sehatbakhsh

Our results show that LightPure can outperform existing methods by up to 10x in terms of latency while achieving higher accuracy and robustness for various attack scenarios.

Adversarial Robustness · Computational Efficiency +1

Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference

1 code implementation12 Jun 2024 Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang

To achieve both goals, a mainstream class of LLM unlearning methods introduces an optimization framework that combines two objectives: maximizing the prediction loss on the forget documents while minimizing it on the retain documents. This formulation suffers from two challenges, degenerated output and catastrophic forgetting.
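
For reference, a minimal PyTorch sketch of the baseline forget-retain objective described above: gradient ascent on the forget documents combined with the standard next-token loss on the retain documents. The weighting factor lam and the tensor shapes are illustrative assumptions; this is the formulation the paper critiques, not its proposed logit-difference method.

```python
import torch.nn.functional as F

def forget_retain_loss(forget_logits, forget_labels,
                       retain_logits, retain_labels, lam=1.0):
    """Baseline unlearning objective: maximize next-token loss on the
    forget documents (negative sign) while minimizing it on the retain
    documents."""
    # logits: (batch, seq, vocab); labels: (batch, seq) with -100 for padding.
    def lm_loss(logits, labels):
        return F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            labels[:, 1:].reshape(-1),
            ignore_index=-100,
        )
    return -lm_loss(forget_logits, forget_labels) + lam * lm_loss(retain_logits, retain_labels)
```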

Efficient Multitask Dense Predictor via Binarization

no code implementations CVPR 2024 Yuzhang Shang, Dan Xu, Gaowen Liu, Ramana Rao Kompella, Yan Yan

Moreover, we introduce a knowledge distillation mechanism to correct the direction of information flow in backward propagation.

Binarization · Knowledge Distillation +2
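
As an illustration of the distillation component mentioned above, here is a standard logit-distillation loss in PyTorch. The temperature and weighting are illustrative assumptions, and the sketch shows only generic knowledge distillation, not the paper's specific mechanism for correcting the direction of information flow in backward propagation.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard logit distillation: KL divergence between softened
    teacher and student distributions, scaled by T^2."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# In a binarized multitask predictor, a full-precision (teacher) head's
# logits would typically supervise the binarized (student) head's logits:
# loss = task_loss + alpha * distillation_loss(student_logits, teacher_logits)
```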

Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study

no code implementations26 Mar 2024 Gustav A. Baumgart, Jaemin Shin, Ali Payani, Myungjin Lee, Ramana Rao Kompella

However, algorithms such as FedDyn and SCAFFOLD are more prone to catastrophic failures without the support of additional techniques such as gradient clipping.

Federated Learning
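
To make the gradient-clipping remark concrete, here is a minimal PyTorch sketch of a local client step with global-norm clipping. The threshold max_grad_norm=1.0 and the batch format are illustrative assumptions, not the benchmark's actual configuration.

```python
import torch

def local_client_step(model, batch, loss_fn, optimizer, max_grad_norm=1.0):
    """One local training step with gradient clipping, the kind of
    stabilization the study finds necessary for FedDyn/SCAFFOLD."""
    optimizer.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Rescale gradients so their global norm does not exceed max_grad_norm.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```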

UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models

1 code implementation19 Feb 2024 Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Rao Kompella, Xiaoming Liu, Sijia Liu

The technological advancements in diffusion models (DMs) have demonstrated unprecedented capabilities in text-to-image generation and are widely used in diverse applications.

Machine Unlearning · Style Transfer +1

Enhancing Post-training Quantization Calibration through Contrastive Learning

no code implementations CVPR 2024 Yuzhang Shang, Gaowen Liu, Ramana Rao Kompella, Yan Yan

We aim to calibrate the quantized activations by maximizing the mutual information between the pre- and post-quantized activations.

Contrastive Learning · Quantization
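
As a rough illustration of maximizing mutual information between pre- and post-quantized activations, here is an InfoNCE-style contrastive loss in PyTorch, which lower-bounds that mutual information. The temperature and the flattening of activations are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(fp_acts, q_acts, temperature=0.07):
    """InfoNCE between full-precision and quantized activations of the
    same inputs; minimizing it maximizes a lower bound on their mutual
    information. Matching rows are positives, all other rows negatives."""
    z1 = F.normalize(fp_acts.flatten(1), dim=-1)   # (N, D)
    z2 = F.normalize(q_acts.flatten(1), dim=-1)    # (N, D)
    logits = z1 @ z2.t() / temperature             # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```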

Graphene: Infrastructure Security Posture Analysis with AI-generated Attack Graphs

no code implementations20 Dec 2023 Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, Elisa Bertino, Ramana Rao Kompella, Ashish Kundu

In this paper, we propose Graphene, an advanced system designed to provide a detailed analysis of the security posture of computing infrastructures.

Riemannian Multinomial Logistics Regression for SPD Neural Networks

2 code implementations CVPR 2024 Ziheng Chen, Yue Song, Gaowen Liu, Ramana Rao Kompella, XiaoJun Wu, Nicu Sebe

Moreover, our framework offers a novel intrinsic explanation for the LogEig classifier, the most popular classifier in existing SPD networks.

Action Recognition · EEG +2
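
For context, a minimal PyTorch sketch of the baseline LogEig classifier the abstract refers to: map each SPD matrix to the tangent space via the matrix logarithm, flatten, and apply a Euclidean linear layer. The dimensions are illustrative, and this is the classifier the paper reinterprets, not its proposed Riemannian multinomial logistic regression.

```python
import torch
import torch.nn as nn

class LogEigClassifier(nn.Module):
    """Standard LogEig classifier on SPD matrices: matrix logarithm to the
    tangent space, flatten, then a Euclidean linear classifier."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(dim * dim, num_classes)

    def forward(self, spd):                        # spd: (batch, dim, dim), SPD
        eigvals, eigvecs = torch.linalg.eigh(spd)  # symmetric eigendecomposition
        # Matrix log via eigenvalues; clamp guards against numerical zeros.
        log_spd = eigvecs @ torch.diag_embed(eigvals.clamp_min(1e-8).log()) @ eigvecs.transpose(-1, -2)
        return self.fc(log_spd.flatten(1))

# Toy usage: a batch of identity matrices (trivially SPD).
# logits = LogEigClassifier(20, num_classes=4)(torch.eye(20).expand(8, 20, 20))
```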

Flame: Simplifying Topology Extension in Federated Learning

1 code implementation9 May 2023 Harshit Daga, Jaemin Shin, Dhruv Garg, Ada Gavrilovska, Myungjin Lee, Ramana Rao Kompella

We present Flame, a new system that provides the flexibility to configure the topology of distributed FL applications around the specifics of a particular deployment context, and that is easily extensible to support new FL architectures.

Federated Learning
