1 code implementation • 3 Jan 2024 • Anshuman Chhabra, Hadi Askari, Prasant Mohapatra
We characterize and study zero-shot abstractive summarization in Large Language Models (LLMs) by measuring position bias, which we propose as a general formulation of the more restrictive lead bias phenomenon studied previously in the literature.
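Position bias of this kind can be measured with a simple matching procedure. The sketch below is illustrative only, not the paper's actual metric: it assumes a word-overlap match between each summary sentence and the source sentences, and bins the matched source positions into quartiles (a summary drawn only from the opening sentences would concentrate mass in the first bin, i.e., lead bias).

```python
def overlap(a, b):
    """Jaccard word overlap between two sentences (illustrative matcher)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def position_histogram(source_sents, summary_sents, bins=4):
    """For each summary sentence, find the most-overlapping source
    sentence and record its relative position in the document."""
    n = len(source_sents)
    counts = [0] * bins
    for s in summary_sents:
        best = max(range(n), key=lambda i: overlap(s, source_sents[i]))
        counts[min(bins - 1, best * bins // n)] += 1
    return counts
```

A uniform spread across bins would indicate little position bias; mass piled into the first bin indicates the summarizer favors the document's lead.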
no code implementations • 15 Dec 2023 • Dom Huh, Prasant Mohapatra
Multi-agent applications pervade the various interconnected systems of our everyday lives.
no code implementations • 27 Apr 2023 • Abhishek Roy, Prasant Mohapatra
We provide an online multiplier bootstrap method to estimate the asymptotic covariance and construct online confidence intervals (CIs).
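To illustrate the idea of a multiplier bootstrap in the online setting, the sketch below (an assumption-laden toy, not the paper's method) tracks a streaming mean alongside B bootstrap replicates, each reweighting every incoming observation with an independent mean-one Exp(1) multiplier; the spread of the replicates yields a confidence interval without revisiting the data.

```python
import random

def online_bootstrap_ci(stream, B=200, alpha=0.05, seed=0):
    """Online mean estimate with B multiplier-bootstrap replicates.
    Each replicate reweights every observation by an independent
    mean-one multiplier, so the replicate spread tracks the
    sampling variability of the estimate (illustrative sketch)."""
    rng = random.Random(seed)
    n, mean = 0, 0.0
    boot = [0.0] * B          # replicate running weighted means
    wsum = [0.0] * B          # replicate running weight totals
    for x in stream:
        n += 1
        mean += (x - mean) / n
        for b in range(B):
            w = rng.expovariate(1.0)          # mean-one multiplier
            wsum[b] += w
            boot[b] += w * (x - boot[b]) / wsum[b]
    reps = sorted(boot)
    lo = reps[int((alpha / 2) * B)]
    hi = reps[min(B - 1, int((1 - alpha / 2) * B))]
    return mean, (lo, hi)
```

Because each replicate is updated in O(1) per observation, the whole procedure runs in a single pass, which is what makes the bootstrap "online".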
1 code implementation • 21 Jan 2023 • Dom Huh, Prasant Mohapatra
This paper addresses the considerations that come along with adopting decentralized communication for multi-agent localization applications in discrete state spaces.
1 code implementation • 24 Nov 2022 • Huanle Zhang, Lei Fu, Mi Zhang, Pengfei Hu, Xiuzhen Cheng, Prasant Mohapatra, Xin Liu
In this paper, we propose FedTune, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements in FL training.
1 code implementation • 4 Oct 2022 • Anshuman Chhabra, Peizhao Li, Prasant Mohapatra, Hongfu Liu
Experimentally, we observe that CFC is highly robust to the proposed attack and is thus a truly robust fair clustering alternative.
no code implementations • 4 Oct 2022 • Anshuman Chhabra, Ashwin Sekhari, Prasant Mohapatra
Clustering models constitute a class of unsupervised machine learning methods which are used in a number of application pipelines, and play a vital role in modern data science.
no code implementations • 22 Oct 2021 • Anshuman Chhabra, Adish Singla, Prasant Mohapatra
As a first step, we propose a fairness degrading attack algorithm for k-median clustering that operates under a white-box threat model -- where the clustering algorithm, fairness notion, and the input dataset are known to the adversary.
1 code implementation • 6 Oct 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated learning (FL) hyper-parameters significantly affect the training overheads in terms of computation time, transmission time, computation load, and transmission load.
no code implementations • 29 Sep 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated Learning (FL) is a distributed model training paradigm that preserves clients' data privacy.
no code implementations • 1 Jun 2021 • Anshuman Chhabra, Adish Singla, Prasant Mohapatra
Extensive experiments on different clustering algorithms and fairness notions show that our algorithms can achieve desired levels of fairness on many real-world datasets with a very small percentage of antidote data added.
no code implementations • NeurIPS 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $O(1/\epsilon^{2.5})$.
no code implementations • 28 Sep 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.
no code implementations • 29 Jun 2020 • Tianbo Gu, Allaukik Abhishek, Hao Fu, Huanle Zhang, Debraj Basu, Prasant Mohapatra
These low-rate attacks are challenging to detect and can persist in the networks.
no code implementations • 7 May 2020 • Anshuman Chhabra, Prasant Mohapatra
Hierarchical Agglomerative Clustering (HAC) algorithms are extensively utilized in modern data science, and seek to partition the dataset into clusters while generating a hierarchical relationship between the data samples.
no code implementations • 3 Dec 2019 • Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, Prasant Mohapatra
We establish sub-linear regret bounds on the proposed notions of regret in both the online and bandit setting.
no code implementations • 16 Nov 2019 • Anshuman Chhabra, Abhishek Roy, Prasant Mohapatra
To the best of our knowledge, this is the first work that generates spill-over adversarial samples without knowledge of the true metric while ensuring that the perturbed sample is not an outlier, and that theoretically proves the above.
no code implementations • 31 Jul 2019 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.
no code implementations • 28 Jan 2019 • Anshuman Chhabra, Abhishek Roy, Prasant Mohapatra
We first provide a strong (iterative) black-box adversarial attack that can craft adversarial samples which will be incorrectly clustered irrespective of the choice of clustering algorithm.
no code implementations • 1 Dec 2016 • Zizhan Zheng, Ness B. Shroff, Prasant Mohapatra
As these attacks are often designed to disable a system (or a critical resource, e.g., a user account) repeatedly, it is crucial for the defender to keep updating its security measures to strike a balance between the risk of being compromised and the cost of security updates.