no code implementations • 28 Jun 2023 • Meiyu Zhong, Ravi Tandon
In comparison to several existing approaches for learning fair classifiers (including pre-processing, post-processing, and other regularization methods), we show that the proposed F-divergence-based framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
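As a concrete illustration, here is a minimal sketch of an f-divergence fairness regularizer added to a standard classification loss; it uses KL divergence (one member of the f-divergence family) between group-conditional prediction distributions, and the function names and weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def kl_fairness_penalty(logits, groups):
    # KL divergence between the average prediction distributions of the
    # two protected groups; `groups` is a 0/1 tensor of group labels.
    probs = torch.softmax(logits, dim=1)
    p0 = probs[groups == 0].mean(dim=0)  # average prediction, group 0
    p1 = probs[groups == 1].mean(dim=0)  # average prediction, group 1
    return torch.sum(p0 * torch.log(p0 / p1))

def fair_loss(logits, labels, groups, lam=1.0):
    # accuracy term plus fairness penalty, traded off via lam
    return F.cross_entropy(logits, labels) + lam * kl_fairness_penalty(logits, groups)
```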
no code implementations • 17 May 2023 • Sudarshan Adiga, Xin Xiao, Ravi Tandon, Bane Vasic, Tamal Bose
We present new theoretical results that bound this gap and show its dependence on decoder complexity, in terms of the code parameters (blocklength, message length, variable/check node degrees), the number of decoding iterations, and the training dataset size.
no code implementations • 31 Jan 2022 • Mohamed Seif, Dung Nguyen, Anil Vullikanti, Ravi Tandon
To the best of our knowledge, this is the first work to study the impact of privacy constraints on the fundamental limits for community detection.
no code implementations • 27 Jan 2022 • Sudarshan Adiga, Ravi Tandon
We present a theoretical justification, as well as accuracy guarantees, showing that the proposed statistic can reliably detect statistical changes irrespective of the split point.
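For background, a textbook statistic of the kind this setting involves is a standardized difference of means scanned over candidate split points; the sketch below is illustrative only and is not the paper's proposed statistic, whose point is precisely that detection should not hinge on where the split falls.

```python
import numpy as np

def max_split_statistic(x):
    # Scan candidate split points and return the largest standardized
    # difference of means between the two resulting segments.
    n = len(x)
    best = 0.0
    for k in range(2, n - 1):
        left, right = x[:k], x[k:]
        pooled = np.sqrt(left.var() / k + right.var() / (n - k))
        best = max(best, abs(left.mean() - right.mean()) / (pooled + 1e-12))
    return best
```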
no code implementations • 30 Jul 2021 • Sennur Ulukus, Salman Avestimehr, Michael Gastpar, Syed Jafar, Ravi Tandon, Chao Tian
Most of our lives are conducted in cyberspace.
no code implementations • 10 May 2021 • Xin Xiao, Nithin Raveendran, Bane Vasic, Shu Lin, Ravi Tandon
Decoder diversity is a powerful error-correction framework in which a collection of decoders collaboratively corrects a set of error patterns that no individual decoder can correct on its own.
1 code implementation • 2 Mar 2021 • Mohamed Seif, Wei-Ting Chang, Ravi Tandon
Specifically, the central DP privacy leakage has been shown to scale as $\mathcal{O}(1/K^{1/2})$, where $K$ is the number of users.
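A quick way to see this scaling (a back-of-the-envelope heuristic, not the paper's analysis): if each of the $K$ users perturbs its norm-bounded update with independent Gaussian noise of standard deviation $\sigma$, the aggregate noise has standard deviation $\sigma\sqrt{K}$ while the aggregated signal grows linearly in $K$, so the effective noise-to-signal ratio, and with it the central privacy leakage, improves by a factor of $\sqrt{K}$ over a single user, i.e., $\mathcal{O}(1/K^{1/2})$.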
no code implementations • 15 Aug 2020 • Alex Berian, Kory Staab, Noel Teku, Gregory Ditzler, Tamal Bose, Ravi Tandon
This paper considers the problem of secure modulation classification (MC), where a transmitter (Alice) wants to maximize MC accuracy at a legitimate receiver (Bob) while minimizing MC accuracy at an eavesdropper (Eve).
no code implementations • 4 Jun 2020 • Islam Samy, Mohamed A. Attia, Ravi Tandon, Loukas Lazos
Such relaxation is relevant in applications where privacy can be traded for communication efficiency.
no code implementations • 12 Feb 2020 • Mohamed Seif, Ravi Tandon, Ming Li
In this paper, we study the problem of federated learning (FL) over a wireless channel, modeled by a Gaussian multiple access channel (MAC), subject to local differential privacy (LDP) constraints.
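A minimal sketch of the local-perturbation step, assuming a generic Gaussian mechanism (the name `ldp_gradient` and the parameters `clip` and `sigma` are illustrative; the paper's scheme additionally exploits the superposition property of the wireless MAC, which is not modeled here):

```python
import numpy as np

def ldp_gradient(grad, clip=1.0, sigma=2.0, rng=None):
    # Clip the local gradient to bound its sensitivity, then add
    # Gaussian noise before it leaves the device.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)
```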
Cryptography and Security • Information Theory
no code implementations • 23 Jan 2020 • Wei-Ting Chang, Ravi Tandon
In particular, we focus on the design of digital gradient transmission schemes over a MAC, where the gradients at each user are first quantized and then transmitted over the MAC to be decoded individually at the parameter server (PS).
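A common digital building block for such schemes is unbiased stochastic quantization; the sketch below is a generic example, not necessarily the quantizer analyzed in the paper.

```python
import numpy as np

def stochastic_quantize(g, levels=16):
    # Round each coordinate up or down to one of `levels` uniformly
    # spaced points, with probabilities chosen so the output is unbiased.
    s = np.linalg.norm(g, np.inf) + 1e-12           # scale
    x = (g / s + 1.0) / 2.0 * (levels - 1)          # map to [0, levels-1]
    lower = np.floor(x)
    x_q = lower + (np.random.random(g.shape) < (x - lower))
    return (x_q / (levels - 1) * 2.0 - 1.0) * s     # de-quantize
```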
no code implementations • 16 Jan 2020 • Islam Samy, Mohamed A. Attia, Ravi Tandon, Loukas Lazos
To prevent such information leakage, the goal of classical PIR is to hide the identity of the content/message being accessed, which subsequently also hides the latent attributes.
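For context, classical information-theoretic PIR can be illustrated with the textbook two-server XOR scheme for one-bit records (standard background only, not the relaxed-privacy construction of this paper): each server alone sees a uniformly random selection vector and learns nothing about the requested index.

```python
import secrets

def pir_query(n, i):
    # Server 1 gets a uniformly random subset S (as a 0/1 mask);
    # server 2 gets S with the target index i flipped.
    s1 = [secrets.randbelow(2) for _ in range(n)]
    s2 = list(s1)
    s2[i] ^= 1
    return s1, s2

def server_answer(db, sel):
    # Each server returns the XOR of the records it was asked for.
    ans = 0
    for bit, pick in zip(db, sel):
        ans ^= bit & pick
    return ans

# usage: XOR of the two answers recovers exactly record i
db = [1, 0, 1, 1, 0]
q1, q2 = pir_query(len(db), 3)
assert server_answer(db, q1) ^ server_answer(db, q2) == db[3]
```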
no code implementations • 16 May 2019 • Wei-Ting Chang, Ravi Tandon
Such approximate schemes make use of randomization techniques to speed up the computation process.
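A standard example of such a randomization technique is norm-proportional column sampling for approximate matrix multiplication; the sketch below is illustrative and not necessarily the scheme analyzed here.

```python
import numpy as np

def sampled_matmul(A, B, r, rng=None):
    # Approximate A @ B by sampling r columns of A (and matching rows
    # of B) with probability proportional to their norms, rescaling so
    # the estimate is unbiased.
    rng = rng or np.random.default_rng()
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=r, p=p)
    scale = 1.0 / (r * p[idx])
    return (A[:, idx] * scale) @ B[idx, :]
```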
no code implementations • 6 Apr 2018 • Bo Jiang, Ming Li, Ravi Tandon
The notion of context awareness is incorporated into local information privacy (LIP) through the introduction of priors, which enables the design of privacy-preserving data aggregation that exploits knowledge of those priors.
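For context, $\varepsilon$-LIP is commonly defined relative to the prior in this line of work: a mechanism releasing $Y$ about data $X$ satisfies $\varepsilon$-LIP if $e^{-\varepsilon} \le P(X=x \mid Y=y)/P(X=x) \le e^{\varepsilon}$ for all $x$ and $y$, so that, unlike local differential privacy, the guarantee explicitly depends on the prior $P(X)$.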
no code implementations • 5 Jan 2018 • Mohamed A. Attia, Ravi Tandon
Data shuffling across a distributed cluster of nodes is one of the critical steps in implementing large-scale learning algorithms.
no code implementations • 30 Sep 2016 • Mohamed Attia, Ravi Tandon
At each iteration over the data, it is common practice to randomly re-shuffle the data at the master node, assigning a different batch to each worker to process.
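The re-shuffling step itself is simple to state (sketch below, with hypothetical names); the contribution of this line of work concerns the communication cost this step incurs, which the sketch ignores.

```python
import numpy as np

def reshuffle(n_samples, n_workers, rng=None):
    # Master-node re-shuffle: draw a fresh random permutation of the
    # sample indices and split it into equal batches, one per worker.
    rng = rng or np.random.default_rng()
    perm = rng.permutation(n_samples)
    return np.array_split(perm, n_workers)
```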
no code implementations • 16 Sep 2016 • Mohamed Attia, Ravi Tandon
Data shuffling is one of the fundamental building blocks of distributed learning algorithms, as it increases the statistical gain of each step of the learning process.
no code implementations • 31 Mar 2016 • Prithwish Chakraborty, Sathappan Muthiah, Ravi Tandon, Naren Ramakrishnan
We propose hierarchical quickest change detection (HQCD), a framework that formalizes the process of incorporating additional correlated sources for early changepoint detection.
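For background, the classical single-source quickest change detection statistic that frameworks like HQCD build on is CUSUM; below is a minimal sketch, assuming the pre- and post-change densities are known through their log-likelihood ratio `llr` (an assumed callable, not part of the paper's interface).

```python
import numpy as np

def cusum(x, llr):
    # Classical CUSUM: accumulate log-likelihood-ratio evidence for a
    # change, resetting at zero; a change is declared when the running
    # statistic crosses a chosen threshold.
    w, path = 0.0, []
    for s in x:
        w = max(0.0, w + llr(s))
        path.append(w)
    return np.array(path)

# usage (hypothetical Gaussian mean shift from 0 to 1, unit variance):
# stats = cusum(samples, llr=lambda s: s - 0.5)
```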
no code implementations • 20 Mar 2016 • Pejman Khadivi, Ravi Tandon, Naren Ramakrishnan
Feed-forward deep neural networks have been used extensively in various machine learning applications.