no code implementations • 11 Jun 2023 • Iden Kalemaj, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
We provide theoretical guarantees on the performance of our tests and validate them empirically.
no code implementations • NeurIPS 2023 • Lie He, Shiva Prasad Kasiviswanathan
In this paper, we study the conditional stochastic optimization (CSO) problem, which covers a variety of applications including portfolio selection, reinforcement learning, robust learning, and causal inference.
2 code implementations • 2 Feb 2023 • Patrick Chao, Patrick Blöbaum, Shiva Prasad Kasiviswanathan
We consider the problem of answering observational, interventional, and counterfactual queries in a causally sufficient setting where only observational data and the causal graph are available.
no code implementations • 12 Jan 2023 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton, Patrick Blöbaum
In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems.
1 code implementation • 14 Dec 2022 • Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
Independence testing is a classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data.
no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Shiva Prasad Kasiviswanathan, Branislav Kveton
We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them.
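A minimal sketch of the reward structure described above may help make it concrete. This is an illustrative toy model only, not the paper's algorithm; the component count, affected sets, and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: the reward is a sum of K Gaussian components, and each
# action shifts the means of only a subset of them (hypothetical values).
K = 4
base_means = np.zeros(K)                 # component means with no intervention
affected = {0: [0, 1], 1: [2], 2: [1, 3]}  # which components each action alters
effect = {0: 0.5, 1: 1.0, 2: 0.3}          # mean shift applied by each action

def pull(action):
    """Sample a reward: sum of K unit-variance Gaussians, some means shifted."""
    means = base_means.copy()
    means[affected[action]] += effect[action]
    return rng.normal(means, 1.0).sum()

rewards = [pull(1) for _ in range(2000)]
# The empirical mean reward of action 1 concentrates near effect[1] = 1.0.
```

Exploiting this additive structure, rather than treating each action's reward as a black box, is what lets a learner share information across actions that affect overlapping components.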
no code implementations • 7 Jul 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
Machine learning classifiers rely on loss functions for performance evaluation, often on a private (hidden) dataset.
no code implementations • NeurIPS 2021 • Raghavendra Addanki, Shiva Prasad Kasiviswanathan
We introduce the Collaborative Causal Discovery problem, which models a common scenario: multiple independent entities each have their own causal graph, and the goal is to learn all of these graphs simultaneously.
no code implementations • 18 May 2021 • Abhinav Aggarwal, Shiva Prasad Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
The log-loss (also known as cross-entropy loss) metric is used ubiquitously across machine learning applications to assess the performance of classification algorithms.
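For reference, the binary log-loss averages the negative log-likelihood of the true labels under the predicted probabilities. A minimal self-contained implementation (the clipping constant `eps` is a standard numerical safeguard, not something specified by the paper):

```python
import math

def log_loss(y_true, p_pred, eps=1e-12):
    """Average cross-entropy for binary labels y in {0, 1} and predicted
    probabilities p; clipping to [eps, 1 - eps] avoids log(0)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

log_loss([1], [0.9])   # ≈ 0.105: confident correct prediction, low loss
log_loss([1], [0.1])   # ≈ 2.303: confident wrong prediction, high loss
```

Because the loss grows without bound as a confident prediction approaches the wrong label, reported log-loss values can leak information about the hidden evaluation labels, which is the concern this line of work studies.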
no code implementations • ICML 2020 • Raghavendra Addanki, Shiva Prasad Kasiviswanathan, Andrew McGregor, Cameron Musco
We consider recovering a causal graph in presence of latent variables, where we seek to minimize the cost of interventions used in the recovery process.
no code implementations • 11 Apr 2019 • Shiva Prasad Kasiviswanathan, Mark Rudelson
Matrices satisfying the Restricted Isometry Property (RIP) play an important role in the areas of compressed sensing and statistical learning.
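For readers unfamiliar with the property, the standard definition (a textbook statement, not specific to this paper) is:

A matrix $A \in \mathbb{R}^{m \times n}$ satisfies the Restricted Isometry Property of order $s$ with constant $\delta_s \in (0, 1)$ if, for every $s$-sparse vector $x \in \mathbb{R}^n$,
$$(1 - \delta_s)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_s)\,\|x\|_2^2.$$

Informally, $A$ acts as a near-isometry on all sparse vectors, which is what allows sparse signals to be recovered from compressed measurements.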
no code implementations • 21 Oct 2017 • Shiva Prasad Kasiviswanathan, Nina Narodytska, Hongxia Jin
Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks.
no code implementations • 19 Sep 2017 • Nina Narodytska, Shiva Prasad Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh
To the best of our knowledge, this is the first work on verifying properties of deep neural networks using an exact Boolean encoding of the network.
no code implementations • 25 Jul 2017 • Shiva Prasad Kasiviswanathan, Mark Rudelson
This construction allows incorporating a fixed matrix with an easily verifiable condition into the design process, and allows for generating compressed design matrices that have a lower storage requirement than a standard design matrix.
no code implementations • 4 Jan 2017 • Shiva Prasad Kasiviswanathan, Kobbi Nissim, Hongxia Jin
Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals.
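The transformation described above can be sketched as follows. This is a schematic illustration only: the batch mechanism is a stand-in (any differentially private batch ERM procedure would slot in), the fixed retraining interval is hypothetical, and real use would require privacy-budget accounting across the repeated invocations.

```python
def private_batch_erm(data):
    """Stand-in for an arbitrary private batch ERM mechanism.
    A real mechanism would add calibrated noise; here we just return the
    empirical mean so the sketch is runnable."""
    return sum(data) / len(data)

def incremental_erm(stream, interval):
    """Yield a model after each arriving point, but invoke the batch
    procedure only every `interval` points; serve the latest model between
    retraining times."""
    seen, model = [], None
    for x in stream:
        seen.append(x)
        if len(seen) % interval == 0:   # regular retraining times
            model = private_batch_erm(seen)
        yield model

models = list(incremental_erm(range(10), interval=5))
# The model is refreshed only after points 5 and 10; in between, the
# previously released model is reused, limiting the number of private calls.
```

The appeal of the reduction is that privacy loss accumulates only over the (few) retraining times rather than over every update.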
no code implementations • 19 Dec 2016 • Nina Narodytska, Shiva Prasad Kasiviswanathan
In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network.
no code implementations • 22 Apr 2015 • Shiva Prasad Kasiviswanathan, Mark Rudelson
In this paper, we initiate the study of non-asymptotic spectral theory of random kernel matrices.
no code implementations • 6 Mar 2008 • Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith
Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples.