
no code implementations • 22 Nov 2022 • Satwik Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom

(ii) When trained on Boolean functions, both Transformers and LSTMs prioritize learning functions of low sensitivity, with Transformers ultimately converging to functions of lower sensitivity.

no code implementations • 12 Oct 2022 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

Distributional assumptions have been shown to be necessary for the robust learnability of concept classes when considering the exact-in-the-ball robust risk and access to random examples by Gourdeau et al. (2019).

no code implementations • 25 Aug 2022 • Varun Kanade, Elad Hazan, Adam Tauman Kalai

In the matrix completion problem, one wishes to reconstruct a low-rank matrix based on a revealed set of (possibly noisy) entries.
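The setting in this entry can be illustrated with a small factorisation sketch. The recipe below (sizes, step size, and initialisation are our own, not the paper's algorithm) fits low-rank factors by gradient descent on the squared error over the revealed entries only, then reads off the hidden entry:

```python
import numpy as np

def complete_matrix(obs, mask, rank=1, lr=0.01, steps=8000, seed=0):
    """Illustrative matrix completion: fit factors U, V so that U @ V.T
    matches the observed entries (mask == 1); U @ V.T then predicts the
    hidden entries."""
    rng = np.random.default_rng(seed)
    m, n = obs.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = (U @ V.T - obs) * mask          # residual on observed entries only
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)
    return U @ V.T

# Rank-1 ground truth; entry (1, 1) = 8 is hidden (mask = 0).
M = np.outer([1.0, 2.0], [3.0, 4.0])
mask = np.array([[1.0, 1.0], [1.0, 0.0]])
M_hat = complete_matrix(M * mask, mask)
```

For a 2x2 rank-1 matrix with three entries revealed, the fourth is determined (d = bc/a), so the fitted factorisation recovers it.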

no code implementations • 24 May 2022 • Limor Gultchin, Vincent Cohen-Addad, Sophie Giffard-Roisin, Varun Kanade, Frederik Mallmann-Trenn

Among the various aspects of algorithmic fairness studied in recent years, the tension between satisfying both sufficiency and separation -- e.g. the ratios of positive or negative predictive values, and false positive or false negative rates, across groups -- has received much attention.

no code implementations • 12 May 2022 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks.

no code implementations • 23 Feb 2022 • Varun Kanade, Patrick Rebeschini, Tomas Vaskevicius

Our main result is an exponential-tail excess risk bound expressed in terms of the offset Rademacher complexity that yields results at least as sharp as those obtainable via the classical theory.

no code implementations • NeurIPS 2021 • Adam Tauman Kalai, Varun Kanade

Our work builds on a recent abstention algorithm of Goldwasser, Kalai, Kalai, and Montasser (2020) for transductive binary classification.

no code implementations • 15 Feb 2021 • Adam Kalai, Varun Kanade

We give an efficient algorithm for learning a binary function in a given class C of bounded VC dimension, with training data distributed according to P and test data according to Q, where P and Q may be arbitrary distributions over X.

no code implementations • ICLR 2021 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr

We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.

no code implementations • 16 Jul 2020 • Bryn Elesedy, Varun Kanade, Yee Whye Teh

We analyse the pruning procedure behind the lottery ticket hypothesis (arXiv:1803.03635v5), iterative magnitude pruning (IMP), when applied to linear models trained by gradient flow.

no code implementations • 8 Jul 2020 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip H. S. Torr

We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.

1 code implementation • 3 Mar 2020 • Limor Gultchin, Matt J. Kusner, Varun Kanade, Ricardo Silva

Discovering the causal effect of a decision is critical to nearly all forms of decision-making.

no code implementations • NeurIPS 2020 • Tomas Vaškevičius, Varun Kanade, Patrick Rebeschini

Recently there has been a surge of interest in understanding implicit regularization properties of iterative gradient-based optimization algorithms.

no code implementations • 15 Sep 2019 • Vincent Cohen-Addad, Benjamin Guedj, Varun Kanade, Guy Rom

The specific formulation we use is the $k$-means objective: at each time step the algorithm has to maintain a set of $k$ candidate centers, and the loss incurred is the squared distance between the new point and the closest center.
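The loss in this formulation is concrete enough to sketch. The toy loop below (our own illustration, not the algorithm analysed in the paper) pays the squared distance to the nearest maintained center, then nudges that center toward the arriving point:

```python
def closest(centers, x):
    """Index and squared distance of the center nearest to 2-D point x."""
    d2 = [(cx - x[0]) ** 2 + (cy - x[1]) ** 2 for cx, cy in centers]
    i = min(range(len(centers)), key=d2.__getitem__)
    return i, d2[i]

def online_kmeans(points, centers, lr=0.5):
    """Toy online k-means: on each arriving point, incur the squared
    distance to the closest of the k maintained centers, then move that
    center a step toward the point.  Returns total loss and final centers."""
    total_loss = 0.0
    centers = [list(c) for c in centers]
    for x in points:
        i, d2 = closest(centers, x)
        total_loss += d2                      # loss incurred on this step
        centers[i][0] += lr * (x[0] - centers[i][0])
        centers[i][1] += lr * (x[1] - centers[i][1])
    return total_loss, centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
loss, centers = online_kmeans(pts, [(0.0, 0.0), (5.0, 5.0)])
```

With two centers seeded on the two clusters, each point is charged only its within-cluster squared distance.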

no code implementations • NeurIPS 2019 • Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

However if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution).
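The objects in play here (monotone conjunctions, and exact-in-the-ball robustness at a given bit-perturbation budget) are concrete enough for a brute-force sketch; the code below is purely illustrative and checks robustness exhaustively rather than via any learning algorithm:

```python
from itertools import combinations

def monotone_conjunction(relevant, x):
    """Evaluate a monotone conjunction: the AND of the bits in `relevant`."""
    return all(x[i] for i in relevant)

def robust_at(relevant, x, budget):
    """Exact-in-the-ball check: does the conjunction's label on x survive
    every perturbation of at most `budget` bits?  (Brute force.)"""
    y = monotone_conjunction(relevant, x)
    n = len(x)
    for k in range(1, budget + 1):
        for flips in combinations(range(n), k):
            z = list(x)
            for i in flips:
                z[i] = 1 - z[i]
            if monotone_conjunction(relevant, z) != y:
                return False
    return True
```

A positive example of a conjunction is fragile (flipping any relevant bit changes the label), while an input with several relevant bits already zero is robust to small budgets.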

1 code implementation • NeurIPS 2019 • Tomas Vaškevičius, Varun Kanade, Patrick Rebeschini

We investigate implicit regularization schemes for gradient descent methods applied to unpenalized least squares regression to solve the problem of reconstructing a sparse signal from an underdetermined system of linear measurements under the restricted isometry assumption.
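A minimal sketch of the idea in this line of work: run plain gradient descent on an overparametrised least-squares objective with small initialisation, and the iterates are implicitly biased toward sparse solutions. The parametrisation $w = u \odot u - v \odot v$ below is a common choice in this literature; the toy system, step size, and initialisation scale are ours, not the paper's:

```python
import numpy as np

def sparse_recovery_gd(X, y, alpha=1e-3, lr=0.1, steps=3000):
    """Gradient descent on ||X(u*u - v*v) - y||^2 / (2n), starting from
    u = v = alpha.  No explicit penalty is added; the small-init
    overparametrisation itself biases the recovered w toward sparsity."""
    n, d = X.shape
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for _ in range(steps):
        w = u * u - v * v
        g = X.T @ (X @ w - y) / n          # gradient with respect to w
        u, v = u - lr * g * 2 * u, v + lr * g * 2 * v
    return u * u - v * v

# Underdetermined toy system: constraints w0 = 1 and w1 + 2*w2 = 1.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0]])
y = np.array([1.0, 1.0])
w_hat = sparse_recovery_gd(X, y)
```

Among the many solutions of the second constraint, the iterates concentrate on the sparser choice (weight on $w_2$ alone) rather than spreading mass over $w_1$ and $w_2$.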

1 code implementation • NeurIPS 2020 • Qiong Wu, Felix Ming Fai Wong, Zhenming Liu, Yanhua Li, Varun Kanade

We study the low-rank regression problem $\mathbf{y} = M\mathbf{x} + \epsilon$, where $\mathbf{x}$ and $\mathbf{y}$ are $d_1$- and $d_2$-dimensional vectors respectively.
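For context, the classical baseline in this setting is reduced-rank regression: ordinary least squares followed by SVD truncation. The sketch below implements that standard baseline (not the estimator proposed in the paper), with data as rows so it returns a rank-$r$ matrix $B$ with $XB \approx Y$, i.e. the transpose of $M$ in the notation above:

```python
import numpy as np

def reduced_rank_estimate(X, Y, r):
    """Reduced-rank regression baseline: solve least squares X @ B ≈ Y,
    then truncate the SVD of B to rank r."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U, s, Vt = np.linalg.svd(B_ols, full_matrices=False)
    s[r:] = 0.0                      # keep only the top r singular values
    return U @ np.diag(s) @ Vt

# Noiseless sanity check: a rank-1 coefficient matrix is recovered exactly.
X = np.eye(3)
B_true = np.outer([1.0, 2.0, 3.0], [1.0, 1.0])
B_hat = reduced_rank_estimate(X, X @ B_true, 1)
```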

no code implementations • NeurIPS 2018 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn

In this work, we take a different approach, based on the observation that the consistency axiom fails to be satisfied when the “correct” number of clusters changes.

1 code implementation • NeurIPS 2019 • David Martínez-Rubio, Varun Kanade, Patrick Rebeschini

We design a fully decentralized algorithm that uses an accelerated consensus procedure to compute (delayed) estimates of the average of rewards obtained by all the agents for each arm, and then uses an upper confidence bound (UCB) algorithm that accounts for the delay and error of the estimates.
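The single-agent building block referenced here is the standard UCB index. The sketch below shows that index only; in the paper's decentralised setting the exact empirical mean is replaced by a delayed consensus estimate and the confidence bonus is inflated accordingly, which the hypothetical `delay_bonus` parameter merely stands in for:

```python
import math

def ucb_pick(counts, means, t, delay_bonus=0.0):
    """Standard UCB arm selection: play each arm once, then pick the arm
    maximising mean + sqrt(2 ln t / n) (+ an optional extra bonus, here a
    placeholder for delay/error corrections)."""
    best, best_idx = -float("inf"), 0
    for i, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            return i                        # unplayed arm: try it first
        idx = mu + math.sqrt(2 * math.log(t) / n) + delay_bonus
        if idx > best:
            best, best_idx = idx, i
    return best_idx
```

The index trades off exploitation (high mean) against exploration (few pulls): an arm pulled once can beat a well-sampled arm with the same mean.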

no code implementations • 6 Aug 2018 • Quentin Berthet, Varun Kanade

We study the problem of hypothesis testing between two discrete distributions, where we only have access to samples after the action of a known reversible Markov chain, playing the role of noise.

1 code implementation • ICML 2018 • Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade

The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data.

no code implementations • ICLR 2019 • Amartya Sanyal, Varun Kanade, Philip H. S. Torr, Puneet K. Dokania

To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN.
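One standard way to realise such a penalty (shown only as an illustration of the idea; the paper's LR regulariser may be implemented differently) is the nuclear norm of a batch of intermediate representations, since the sum of singular values is a convex surrogate for rank:

```python
import numpy as np

def low_rank_penalty(H):
    """Nuclear-norm penalty on a batch of representations (rows of H):
    the sum of singular values.  Added to the training loss, it pushes
    the representations toward a low-dimensional subspace."""
    return np.linalg.svd(H, compute_uv=False).sum()
```

A rank-1 batch incurs a single singular value, while a full-rank batch of the same scale pays for every direction it occupies.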

no code implementations • 15 Feb 2018 • Varun Kanade, Andrea Rocchetto, Simone Severini

We show that DNF formulae can be quantum PAC-learned in polynomial time under product distributions using a quantum example oracle.

no code implementations • NeurIPS 2017 • Cheng Li, Felix Mf Wong, Zhenming Liu, Varun Kanade

This work focuses on unifying two of the most widely used link-formation models: the stochastic block model (SBM) and the small world (or latent space) model (SWM).

no code implementations • NeurIPS 2017 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn

Hierarchical clustering, that is, computing a recursive partitioning of a dataset to obtain clusters at increasingly finer granularity, is a fundamental problem in data analysis.

no code implementations • 3 Nov 2017 • Cheng Li, Felix Wong, Zhenming Liu, Varun Kanade

Discovering statistical structure from links is a fundamental problem in the analysis of social networks.

no code implementations • 7 Apr 2017 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn, Claire Mathieu

For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation.

no code implementations • 30 Nov 2016 • Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler

These results are in contrast to known efficient algorithms for reliably learning linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong assumptions are required on the marginal distribution.

no code implementations • 7 Apr 2016 • Vincent Cohen-Addad, Varun Kanade

We study online optimization of smoothed piecewise constant functions over the domain [0, 1).

no code implementations • 20 May 2015 • Steve Hanneke, Varun Kanade, Liu Yang

Some of the results also describe an active learning variant of this setting, and provide bounds on the number of queries for the labels of points in the sequence sufficient to obtain the stated bounds on the error rates.

no code implementations • 20 Feb 2014 • Varun Kanade, Justin Thaler

The goal in the positive reliable agnostic framework is to output a hypothesis with the following properties: (i) its false positive error rate is at most $\epsilon$, (ii) its false negative error rate is at most $\epsilon$ more than that of the best positive reliable classifier from the class.
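The two conditions are easy to state in code. The helper below (an illustration of the definition, not any algorithm from the paper) computes the false positive and false negative rates of a hypothesis on labelled data and checks them against the positive reliable requirements:

```python
def error_rates(preds, labels):
    """False positive and false negative rates over 0/1 predictions/labels."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0) / len(labels)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1) / len(labels)
    return fp, fn

def is_positive_reliable(preds, labels, best_fn, eps):
    """Conditions (i) and (ii) from the positive reliable framework:
    FP rate <= eps, and FN rate <= best_fn + eps, where best_fn is the
    false negative rate of the best positive reliable classifier."""
    fp, fn = error_rates(preds, labels)
    return fp <= eps and fn <= best_fn + eps
```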

no code implementations • NeurIPS 2013 • Yasin Abbasi, Peter L. Bartlett, Varun Kanade, Yevgeny Seldin, Csaba Szepesvari

The goal of the learning algorithm is to choose a path that minimizes the loss while traversing from the start to finish node.

no code implementations • 16 Sep 2013 • Elaine Angelino, Varun Kanade

In a seminal paper, Valiant (2006) introduced a computational model for evolution to address the question of complexity that can arise through Darwinian mechanisms.

no code implementations • 13 Jul 2013 • Varun Kanade, Elchanan Mossel

The theory of learning under the uniform distribution is rich and deep, with connections to cryptography, computational complexity, and the analysis of boolean functions to name a few areas.

no code implementations • NeurIPS 2012 • Varun Kanade, Zhenming Liu, Bozidar Radunovic

This paper shows the difficulty of simultaneously achieving regret asymptotically better than $\sqrt{kT}$ and communication better than $T$. We give a novel algorithm that, for an oblivious adversary, achieves a non-trivial trade-off: regret $O(\sqrt{k^{5(1+\epsilon)/6} T})$ and communication $O(T/k^\epsilon)$, for any value of $\epsilon \in (0, 1/5)$.

no code implementations • 5 Nov 2012 • Pranjal Awasthi, Vitaly Feldman, Varun Kanade

We introduce a new model of membership query (MQ) learning, where the learning algorithm is restricted to query points that are \emph{close} to random examples drawn from the underlying distribution.

no code implementations • NeurIPS 2011 • Sham M. Kakade, Varun Kanade, Ohad Shamir, Adam Kalai

In this paper, we provide algorithms for learning GLMs and SIMs, which are both computationally and statistically efficient.

no code implementations • NeurIPS 2009 • Varun Kanade, Adam Kalai

We prove strong noise-tolerance properties of a potential-based boosting algorithm, similar to MadaBoost (Domingo and Watanabe, 2000) and SmoothBoost (Servedio, 2003).
