Search Results for author: Shiva Prasad Kasiviswanathan

Found 18 papers, 2 papers with code

Differentially Private Conditional Independence Testing

no code implementations • 11 Jun 2023 • Iden Kalemaj, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

We provide theoretical guarantees on the performance of our tests and validate them empirically.

Debiasing Conditional Stochastic Optimization

no code implementations • NeurIPS 2023 • Lie He, Shiva Prasad Kasiviswanathan

In this paper, we study the conditional stochastic optimization (CSO) problem, which covers a variety of applications, including portfolio selection, reinforcement learning, robust learning, and causal inference.
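
For orientation, CSO optimizes a nested objective of the form below (the standard formulation; the notation here is illustrative rather than copied from the paper):

```latex
\min_{x \in \mathcal{X}} \; F(x) \;=\; \mathbb{E}_{\xi}\!\left[ f_{\xi}\!\left( \mathbb{E}_{\eta \mid \xi}\!\left[ g_{\eta}(x, \xi) \right] \right) \right]
```

The conditional expectation nested inside the (generally nonlinear) outer function f_ξ is what makes naive plug-in sample averages biased gradient estimators, which is the bias the title refers to.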

Causal Inference • Stochastic Optimization

Interventional and Counterfactual Inference with Diffusion Models

2 code implementations • 2 Feb 2023 • Patrick Chao, Patrick Blöbaum, Shiva Prasad Kasiviswanathan

We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting where only observational data and the causal graph are available.

counterfactual • Counterfactual Inference

Thompson Sampling with Diffusion Generative Prior

no code implementations • 12 Jan 2023 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum

In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems.
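
For context, Thompson sampling acts greedily on a sample from the posterior over arm rewards; the paper's idea is to learn the prior with a denoising diffusion model rather than fixing a conjugate one. A minimal conjugate baseline sketch (Beta-Bernoulli; the diffusion prior itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])   # hypothetical Bernoulli arm means
K = len(true_means)
alpha, beta = np.ones(K), np.ones(K)     # Beta(1, 1) prior for each arm

for t in range(1000):
    theta = rng.beta(alpha, beta)        # sample a mean for each arm from its posterior
    arm = int(np.argmax(theta))          # act greedily on the posterior sample
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward                 # conjugate Beta-Bernoulli posterior update
    beta[arm] += 1 - reward

print(alpha / (alpha + beta))            # posterior mean estimate per arm
```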

Decision Making • Denoising • +2

Sequential Kernelized Independence Testing

1 code implementation • 14 Dec 2022 • Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

Independence testing is a classical statistical problem that has been extensively studied in the batch setting, where the sample size is fixed before data collection.
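
The sequential alternative rests on testing by betting: a nonnegative wealth process that stays bounded in expectation under independence but grows under dependence, with rejection once wealth crosses 1/α (valid at any stopping time by Ville's inequality). A toy sketch with a simple sign-agreement payoff, not the paper's kernelized one:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lam = 0.05, 0.2                   # test level and fixed betting fraction
wealth = 1.0

for t in range(1, 10_001):
    x = rng.standard_normal()
    y = 0.5 * x + rng.standard_normal()  # dependent pairs: H0 (independence) is false here
    payoff = np.sign(x) * np.sign(y)     # mean zero under H0 for these symmetric marginals
    wealth *= 1.0 + lam * payoff         # bet a fraction of current wealth each round
    if wealth >= 1.0 / alpha:            # Ville: P(wealth ever >= 1/alpha) <= alpha under H0
        print(f"reject independence at time {t}")
        break
```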

Uplifting Bandits

no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton

We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them.
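
To make the model concrete, here is a hypothetical simulator in that spirit: the reward sums K base variables, and each action uplifts the means of only a few of them (all indices and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 6                                    # number of base reward variables
base_means = rng.uniform(0.0, 1.0, size=K)
actions = {                              # action -> {variable index: mean uplift}
    0: {1: 0.3, 4: 0.2},
    1: {0: 0.5},
    2: {2: 0.1, 3: 0.1, 5: 0.1},
}

def pull(action: int) -> float:
    """Reward is the sum of all K variables; the action alters only some means."""
    means = base_means.copy()
    for idx, uplift in actions[action].items():
        means[idx] += uplift
    return float(rng.normal(means, 0.1).sum())

print({a: round(np.mean([pull(a) for _ in range(1000)]), 3) for a in actions})
```

Observing the individual variables, rather than only their sum, is the kind of structure a learner can exploit to attribute reward changes to the few variables an action actually touches.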

Marketing • Recommendation Systems

Reconstructing Test Labels from Noisy Loss Functions

no code implementations • 7 Jul 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset.

Collaborative Causal Discovery with Atomic Interventions

no code implementations • NeurIPS 2021 • Raghavendra Addanki, Shiva Prasad Kasiviswanathan

We introduce the new Collaborative Causal Discovery problem, which models a common scenario where multiple independent entities each have their own causal graph, and the goal is to learn all of these causal graphs simultaneously.

Causal Discovery • Clustering

Label Inference Attacks from Log-loss Scores

no code implementations • 18 May 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier

The log-loss (also known as cross-entropy loss) metric is ubiquitously used across machine learning applications to assess the performance of classification algorithms.
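
Concretely, for predicted probabilities p_i and binary labels y_i, the log-loss is

```latex
\ell(y, p) \;=\; -\frac{1}{n} \sum_{i=1}^{n} \Big[ y_i \log p_i + (1 - y_i) \log (1 - p_i) \Big].
```

The attack surface is that, with the predictions known, an exactly reported score pins down the hidden labels. A brute-force toy illustration (enumeration is for intuition only; the paper's attacks scale far beyond it):

```python
import itertools
import numpy as np

def log_loss(labels, probs):
    labels, probs = np.asarray(labels, float), np.asarray(probs, float)
    return float(-np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)))

probs = np.array([0.9, 0.2, 0.7, 0.4])   # known model predictions on the test set
secret = [1, 0, 1, 1]                    # hidden test labels
reported = log_loss(secret, probs)       # score released by the evaluation server

# enumerate all label vectors; generically only the true one matches the score
matches = [y for y in itertools.product([0, 1], repeat=len(probs))
           if abs(log_loss(y, probs) - reported) < 1e-12]
print(matches)                           # [(1, 0, 1, 1)] -> labels recovered
```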

Efficient Intervention Design for Causal Discovery with Latents

no code implementations • ICML 2020 • Raghavendra Addanki, Shiva Prasad Kasiviswanathan, Andrew McGregor, Cameron Musco

We consider recovering a causal graph in presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process.

Causal Discovery

Restricted Isometry Property under High Correlations

no code implementations • 11 Apr 2019 • Shiva Prasad Kasiviswanathan, Mark Rudelson

Matrices satisfying the Restricted Isometry Property (RIP) play an important role in the areas of compressed sensing and statistical learning.
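
For reference, an m × n matrix A satisfies RIP of order s with constant δ ∈ (0, 1) if every s-sparse vector x obeys

```latex
(1 - \delta)\, \lVert x \rVert_2^2 \;\le\; \lVert A x \rVert_2^2 \;\le\; (1 + \delta)\, \lVert x \rVert_2^2,
```

i.e., A acts as a near-isometry on all s-sparse vectors, which is what underpins sparse recovery guarantees from compressed measurements.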

Dimensionality Reduction • Vocal Bursts Intensity Prediction

Deep Neural Network Approximation using Tensor Sketching

no code implementations • 21 Oct 2017 • Shiva Prasad Kasiviswanathan, Nina Narodytska, Hongxia Jin

Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks.

Verifying Properties of Binarized Deep Neural Networks

no code implementations • 19 Sep 2017 • Nina Narodytska, Shiva Prasad Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh

To the best of our knowledge, this is the first work on verifying properties of deep neural networks using an exact Boolean encoding of the network.
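
A hint of why an exact encoding exists: a binarized neuron with ±1 weights and a sign activation is precisely a Boolean cardinality constraint over its inputs. A brute-force equivalence check (weights and bias below are hypothetical; real verifiers compile such constraints to SAT rather than enumerating):

```python
import itertools

w = [1, -1, 1, 1]          # hypothetical binarized weights in {-1, +1}
b = 0.5                    # hypothetical bias

def neuron(x):             # inputs in {-1, +1}, sign activation
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def cardinality(x):
    # sum_i w_i x_i = 2 * (#agreements) - n, so the neuron fires exactly when
    # the number of positions with w_i == x_i reaches the threshold (n - b) / 2
    agree = sum(1 for wi, xi in zip(w, x) if wi == xi)
    return 1 if agree >= (len(w) - b) / 2 else -1

for x in itertools.product([-1, 1], repeat=len(w)):
    assert neuron(x) == cardinality(x)
print("neuron == cardinality constraint on all 2^4 inputs")
```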

Image Classification

Restricted Eigenvalue from Stable Rank with Applications to Sparse Linear Regression

no code implementations • 25 Jul 2017 • Shiva Prasad Kasiviswanathan, Mark Rudelson

This construction allows incorporating a fixed matrix with an easily verifiable condition into the design process, and allows for generating compressed design matrices with a lower storage requirement than a standard design matrix.

regression

Private Incremental Regression

no code implementations • 4 Jan 2017 • Shiva Prasad Kasiviswanathan, Kobbi Nissim, Hongxia Jin

Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals.
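
A schematic of that transformation, with a stand-in for the private batch mechanism (every name and constant below is illustrative; the privacy accounting across calls is the substance of the paper and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def private_batch_erm(data, epsilon):
    """Stand-in private batch 'ERM': a Laplace-noised mean.
    Assumes points lie in [0, 1], so the mean has sensitivity 1/n."""
    data = np.asarray(data, dtype=float)
    return float(data.mean() + rng.laplace(scale=(1.0 / len(data)) / epsilon))

class PrivateIncrementalERM:
    """Re-invoke a private batch ERM mechanism, on all data so far, at regular intervals."""

    def __init__(self, interval=100, epsilon_per_call=0.5):
        self.interval, self.epsilon = interval, epsilon_per_call
        self.data, self.model = [], None

    def update(self, point):
        self.data.append(point)
        if len(self.data) % self.interval == 0:   # regular time intervals
            self.model = private_batch_erm(self.data, self.epsilon)
        return self.model                         # latest private estimate

learner = PrivateIncrementalERM()
for x in rng.random(500):                         # hypothetical data stream in [0, 1]
    est = learner.update(float(x))
print(est)                                        # roughly 0.5 plus Laplace noise
```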

BIG-bench Machine Learning • regression

Simple Black-Box Adversarial Perturbations for Deep Networks

no code implementations • 19 Dec 2016 • Nina Narodytska, Shiva Prasad Kasiviswanathan

In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network.
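
In that spirit, here is a toy score-only attack loop that perturbs random pixels and keeps changes that hurt the true class (`predict_proba` is a hypothetical query interface returning class probabilities; the paper's greedy local search is more refined):

```python
import numpy as np

rng = np.random.default_rng(4)

def black_box_attack(image, label, predict_proba, budget=500, eps=0.3):
    """Query-only attack: no gradients or internals, just output scores."""
    adv = image.copy()
    for _ in range(budget):
        i, j = rng.integers(image.shape[0]), rng.integers(image.shape[1])
        candidate = adv.copy()
        candidate[i, j] = np.clip(candidate[i, j] + rng.choice([-eps, eps]), 0.0, 1.0)
        probs = predict_proba(candidate)
        if probs.argmax() != label:
            return candidate                 # misclassified: attack succeeded
        if probs[label] < predict_proba(adv)[label]:
            adv = candidate                  # keep changes that lower the true-class score
    return adv
```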

Spectral Norm of Random Kernel Matrices with Applications to Privacy

no code implementations • 22 Apr 2015 • Shiva Prasad Kasiviswanathan, Mark Rudelson

In this paper, we initiate the study of non-asymptotic spectral theory of random kernel matrices.
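
The object of study, concretely: draw random points, form the kernel (Gram) matrix, and examine its spectral norm. A small numpy illustration with a Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 50
X = rng.standard_normal((n, d))              # n random points in R^d

# Gaussian kernel matrix: K_ij = exp(-||x_i - x_j||^2 / (2 d))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / (2 * d))

print(np.linalg.norm(K, 2))                  # spectral norm (largest eigenvalue; K is PSD)
```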

Attribute • regression

What Can We Learn Privately?

no code implementations • 6 Mar 2008 • Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith

Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples.
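
The generic private learner behind this result scores every hypothesis in a finite class by its empirical error and samples one via the exponential mechanism. A minimal sketch on an illustrative class of threshold functions (all constants here are for demonstration):

```python
import numpy as np

rng = np.random.default_rng(6)

def exp_mechanism_learner(xs, ys, hypotheses, epsilon):
    """Sample h with probability proportional to exp(-epsilon * err(h) / 2);
    the error count has sensitivity 1 in any single example."""
    errors = np.array([sum(h(x) != y for x, y in zip(xs, ys)) for h in hypotheses])
    weights = np.exp(-epsilon * (errors - errors.min()) / 2.0)  # shift for numerical stability
    return hypotheses[rng.choice(len(hypotheses), p=weights / weights.sum())]

hypotheses = [lambda x, t=t: int(x >= t) for t in np.linspace(0, 1, 101)]
xs = rng.random(500)
ys = (xs >= 0.42).astype(int)                # labels from a true threshold at 0.42
h = exp_mechanism_learner(xs, ys, hypotheses, epsilon=1.0)
print(np.mean([h(x) != y for x, y in zip(xs, ys)]))  # empirical error of the private hypothesis
```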
