no code implementations • 14 Oct 2024 • Jayneel Vora, Aditya Krishnan, Nader Bouacida, Prabhu RV Shankar, Prasant Mohapatra
The increasing reliance on diffusion models for generating synthetic images has amplified concerns about the unauthorized use of personal data, particularly facial images, in model training.
no code implementations • 20 Sep 2024 • Jayneel Vora, Aditya Krishnan, Nader Bouacida, Prabhu RV Shankar, Prasant Mohapatra
Yet, direct application of PTQ to diffusion models can degrade synthesis quality due to accumulated quantization noise across multiple denoising steps, particularly in conditional tasks like text-to-audio synthesis.
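To illustrate why quantization noise accumulates across denoising steps, here is a toy sketch (not the paper's quantization scheme): symmetric uniform post-training quantization applied at every step of a simple iterative chain. The lower the bit-width, the further the chain drifts from its floating-point fixed point, mirroring how per-step quantization error compounds in a multi-step sampler.

```python
def quantize(x, scale, bits=8):
    """Symmetric uniform post-training quantization of a single value."""
    qmax = 2 ** (bits - 1) - 1
    q = max(-qmax, min(qmax, round(x / scale)))
    return q * scale

def denoise_chain(x0, steps, bits):
    """Toy iterative 'denoising' chain x <- 0.5*x + 0.1 with the
    intermediate value re-quantized at every step, so quantization
    noise feeds back into later steps."""
    qmax = 2 ** (bits - 1) - 1
    scale = abs(x0) / qmax
    x = x0
    for _ in range(steps):
        x = quantize(0.5 * x + 0.1, scale, bits)
    return x

# The float chain converges to 0.2; the coarser the quantizer,
# the further the quantized chain ends up from that fixed point.
print(denoise_chain(1.0, 20, 3))  # coarse 3-bit chain
print(denoise_chain(1.0, 20, 8))  # finer 8-bit chain
```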
no code implementations • 23 Jul 2024 • Aditya Krishnan, Jayneel Vora, Prasant Mohapatra
We perform rigorous evaluations with the DeepViewAgg model on the complete point cloud as our baseline, measuring Intersection over Union (IoU) accuracy, inference latency, and memory consumption.
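For reference, per-class IoU for semantic segmentation can be sketched as follows (a generic illustration, not the authors' evaluation code); labels are assumed to be integer class ids per point.

```python
def per_class_iou(pred, gt, num_classes):
    """Per-class Intersection over Union for integer label sequences."""
    ious = {}
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious[c] = inter / union
    return ious

pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
print(per_class_iou(pred, gt, 3))
```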
no code implementations • 20 Jul 2024 • Jayneel Vora, Nader Bouacida, Aditya Krishnan, Prasant Mohapatra
We propose a suite of training algorithms that leverage the U-Net architecture as the backbone for our diffusion models.
no code implementations • 6 Jun 2024 • Hadi Askari, Anshuman Chhabra, Muhao Chen, Prasant Mohapatra
To bridge this gap, we propose relevance paraphrasing, a simple strategy that can be used to measure the robustness of LLMs as summarizers.
no code implementations • 5 Jun 2024 • Dom Huh, Prasant Mohapatra
Sample efficiency remains a key challenge in multi-agent reinforcement learning (MARL).
no code implementations • 6 May 2024 • Anshuman Chhabra, Bo Li, Jian Chen, Prasant Mohapatra, Hongfu Liu
In this paper, we establish a bridge between identifying detrimental training samples via influence functions and outlier gradient detection.
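A minimal sketch of the gradient-based view (hypothetical, not the paper's actual detector): flag training samples whose per-sample gradients are outliers, here via a simple z-score on gradient norms, as candidates for detrimental points.

```python
import math

def outlier_gradient_scores(grads):
    """Z-score of per-sample gradient norms; large scores flag
    potential detrimental (high-influence) training samples."""
    norms = [math.sqrt(sum(v * v for v in g)) for g in grads]
    mean = sum(norms) / len(norms)
    std = math.sqrt(sum((n - mean) ** 2 for n in norms) / len(norms)) or 1.0
    return [(n - mean) / std for n in norms]

# One sample with an anomalously large gradient stands out.
grads = [[0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]]
scores = outlier_gradient_scores(grads)
print(scores.index(max(scores)))
```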
no code implementations • 3 May 2024 • Sairamvinay Vijayaraghavan, Prasant Mohapatra
In this work, we present a general framework for feature-aware explainable recommenders that can withstand external attacks and provide robust and generalized explanations.
no code implementations • 3 May 2024 • Sairamvinay Vijayaraghavan, Prasant Mohapatra
Experimental results verify our hypothesis that the ability to explain recommendations decreases as noise levels increase, and that adversarial noise in particular causes a much stronger decrease.
1 code implementation • 3 Jan 2024 • Anshuman Chhabra, Hadi Askari, Prasant Mohapatra
We characterize and study zero-shot abstractive summarization in Large Language Models (LLMs) by measuring position bias, which we propose as a general formulation of the more restrictive lead bias phenomenon studied previously in the literature.
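One simple way to operationalize position bias (a hypothetical sketch, not the paper's protocol): map each summary sentence to its most lexically similar source sentence and inspect the distribution of the resulting normalized positions; lead bias corresponds to positions concentrated near 0.

```python
def position_distribution(source_sents, summary_sents):
    """Map each summary sentence to its most word-overlapping source
    sentence and return the normalized source positions in [0, 1]."""
    positions = []
    for sent in summary_sents:
        words = set(sent.lower().split())
        best = max(range(len(source_sents)),
                   key=lambda i: len(words & set(source_sents[i].lower().split())))
        positions.append(best / max(len(source_sents) - 1, 1))
    return positions

source = ["The cat sat on the mat.",
          "A dog ran across the yard.",
          "Birds flew south for winter."]
summary = ["The cat sat on the mat."]
print(position_distribution(source, summary))  # positions near 0 indicate lead bias
```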
no code implementations • 15 Dec 2023 • Dom Huh, Prasant Mohapatra
Multi-agent systems (MAS) are widely prevalent and crucially important in numerous real-world applications, where multiple agents must make decisions to achieve their objectives in a shared environment.
no code implementations • 27 Apr 2023 • Abhishek Roy, Prasant Mohapatra
We provide an online multiplier bootstrap method to estimate the asymptotic covariance and construct online confidence intervals (CIs).
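As a rough illustration of the idea for a scalar mean (an assumption-laden sketch; the paper targets more general online estimators), each bootstrap replicate maintains a running mean of multiplier-perturbed observations, and the spread across replicates estimates the sampling variability used for the CI.

```python
import random
import statistics

def online_mean_with_bootstrap_ci(stream, B=200, z=1.96, seed=0):
    """Online mean plus a multiplier-bootstrap CI: each replicate keeps a
    running mean of w*x with i.i.d. multipliers w of mean 1 and variance 1."""
    rng = random.Random(seed)
    n, mean = 0, 0.0
    boot = [0.0] * B  # perturbed running means, updated in one pass
    for x in stream:
        n += 1
        mean += (x - mean) / n
        for b in range(B):
            w = 1.0 + rng.gauss(0.0, 1.0)
            boot[b] += (w * x - boot[b]) / n
    spread = statistics.stdev(boot)
    return mean, (mean - z * spread, mean + z * spread)

m, (lo, hi) = online_mean_with_bootstrap_ci([1.0] * 50)
print(m, lo, hi)
```

Everything is computed in a single pass over the stream, so no raw data needs to be stored, which is the point of doing the bootstrap online.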
1 code implementation • 21 Jan 2023 • Dom Huh, Prasant Mohapatra
This paper addresses the considerations that come with adopting decentralized communication for multi-agent localization applications in discrete state spaces.
1 code implementation • 24 Nov 2022 • Huanle Zhang, Lei Fu, Mi Zhang, Pengfei Hu, Xiuzhen Cheng, Prasant Mohapatra, Xin Liu
In this paper, we propose FedTune, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements in FL training.
1 code implementation • 4 Oct 2022 • Anshuman Chhabra, Peizhao Li, Prasant Mohapatra, Hongfu Liu
Experimentally, we observe that CFC is highly robust to the proposed attack and is thus a truly robust fair clustering alternative.
no code implementations • 4 Oct 2022 • Anshuman Chhabra, Ashwin Sekhari, Prasant Mohapatra
Clustering models constitute a class of unsupervised machine learning methods which are used in a number of application pipelines, and play a vital role in modern data science.
no code implementations • 22 Oct 2021 • Anshuman Chhabra, Adish Singla, Prasant Mohapatra
As a first step, we propose a fairness degrading attack algorithm for k-median clustering that operates under a whitebox threat model -- where the clustering algorithm, fairness notion, and the input dataset are known to the adversary.
1 code implementation • 6 Oct 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated learning (FL) hyper-parameters significantly affect the training overheads in terms of computation time, transmission time, computation load, and transmission load.
no code implementations • 29 Sep 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated Learning (FL) is a distributed model training paradigm that preserves clients' data privacy.
no code implementations • 1 Jun 2021 • Anshuman Chhabra, Adish Singla, Prasant Mohapatra
Extensive experiments on different clustering algorithms and fairness notions show that our algorithms can achieve desired levels of fairness on many real-world datasets with a very small percentage of antidote data added.
no code implementations • NeurIPS 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $O(1/\epsilon^{2.5})$.
no code implementations • 28 Sep 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.
no code implementations • 29 Jun 2020 • Tianbo Gu, Allaukik Abhishek, Hao Fu, Huanle Zhang, Debraj Basu, Prasant Mohapatra
These low-rate attacks are challenging to detect and can persist in the networks.
no code implementations • 7 May 2020 • Anshuman Chhabra, Prasant Mohapatra
Hierarchical Agglomerative Clustering (HAC) algorithms are extensively utilized in modern data science, and seek to partition the dataset into clusters while generating a hierarchical relationship between the data samples.
no code implementations • 3 Dec 2019 • Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, Prasant Mohapatra
We establish sub-linear regret bounds on the proposed notions of regret in both the online and bandit setting.
no code implementations • 16 Nov 2019 • Anshuman Chhabra, Abhishek Roy, Prasant Mohapatra
To the best of our knowledge, this is the first work that generates spill-over adversarial samples without knowledge of the true metric while ensuring that the perturbed sample is not an outlier, and that theoretically proves these properties.
no code implementations • 31 Jul 2019 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.
no code implementations • 28 Jan 2019 • Anshuman Chhabra, Abhishek Roy, Prasant Mohapatra
We first provide a strong (iterative) black-box adversarial attack that can craft adversarial samples which will be incorrectly clustered irrespective of the choice of clustering algorithm.
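A toy version of such an iterative black-box attack (hypothetical, one-dimensional, with a minimal 2-means victim; not the authors' algorithm): repeatedly nudge a target point toward the opposite centroid, re-querying the clustering after each step, until its cluster assignment flips.

```python
def two_means_1d(points, iters=25):
    """Minimal 1-D 2-means (Lloyd's algorithm), the black-box 'victim'."""
    c = [min(points), max(points)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            groups[0 if abs(p - c[0]) <= abs(p - c[1]) else 1].append(p)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

def assign(p, c):
    return 0 if abs(p - c[0]) <= abs(p - c[1]) else 1

def iterative_attack(points, idx, step=0.1, max_queries=200):
    """Nudge points[idx] toward the other centroid, re-querying the
    clustering each time, until its cluster assignment flips."""
    pts = list(points)
    c = two_means_1d(pts)
    orig = assign(pts[idx], c)
    direction = 1.0 if c[1 - orig] > pts[idx] else -1.0
    for _ in range(max_queries):
        pts[idx] += direction * step
        if assign(pts[idx], two_means_1d(pts)) != orig:
            return pts[idx]  # adversarially perturbed value
    return None

clean = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(iterative_attack(clean, 0))
```

The attack treats the clustering purely as a query oracle: it never inspects centroids' update rule, only the assignment returned after each perturbation, which is what makes it black-box.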
no code implementations • 1 Dec 2016 • Zizhan Zheng, Ness B. Shroff, Prasant Mohapatra
As these attacks are often designed to disable a system (or a critical resource, e.g., a user account) repeatedly, it is crucial for the defender to keep updating its security measures to strike a balance between the risk of being compromised and the cost of security updates.