Search Results for author: Anant Raj

Found 22 papers, 5 papers with code

A simpler approach to accelerated optimization: iterative averaging meets optimism

no code implementations ICML 2020 Pooria Joulani, Anant Raj, András György, Csaba Szepesvari

In this paper, we show that there is a simpler approach to obtaining accelerated rates: applying generic, well-known optimistic online learning algorithms and using the online average of their predictions to query the (deterministic or stochastic) first-order optimization oracle at each time step.
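The averaging-plus-optimism recipe can be illustrated with a toy numpy sketch. This is a hypothetical simplification, not the paper's algorithm or analysis: it takes an optimistic gradient step that reuses the previous gradient as a hint, and reports the online average of the iterates.

```python
import numpy as np

def optimistic_gd_with_averaging(grad, x0, eta=0.1, steps=200):
    """Toy sketch: optimistic step x_{t+1} = x_t - eta*(2 g_t - g_{t-1}),
    tracking the online average of the iterates (illustrative only)."""
    x, g_prev = x0.copy(), grad(x0)
    avg = np.zeros_like(x0)
    for t in range(1, steps + 1):
        g = grad(x)
        x = x - eta * (2 * g - g_prev)   # gradient step plus optimistic hint g - g_prev
        g_prev = g
        avg += (x - avg) / t             # running average of the predictions
    return x, avg

# strongly convex quadratic f(x) = 0.5 * ||x||^2
grad = lambda x: x
x_last, x_avg = optimistic_gd_with_averaging(grad, np.ones(5))
```

On this quadratic the optimistic recursion contracts linearly, so the last iterate reaches the optimum while the average lags behind it.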

Non-stationary Online Regression

no code implementations 13 Nov 2020 Anant Raj, Pierre Gaillard, Christophe Saad

To the best of our knowledge, this work is the first extension of non-stationary online regression to non-stationary kernel regression.

Time Series

Model-specific Data Subsampling with Influence Functions

no code implementations 20 Oct 2020 Anant Raj, Cameron Musco, Lester Mackey, Nicolo Fusi

Model selection requires repeatedly evaluating models on a given dataset and measuring their relative performances.

Model Selection

Stochastic Stein Discrepancies

1 code implementation NeurIPS 2020 Jackson Gorham, Anant Raj, Lester Mackey

Stein discrepancies (SDs) monitor convergence and non-convergence in approximate inference when exact integration and sampling are intractable.
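A kernel Stein discrepancy can be computed in closed form from the target's score function; the sketch below is a 1-D illustration with an IMQ base kernel and the exact score of a standard normal target (the paper's stochastic SDs instead replace the exact score with a cheap subsampled estimate, which is not shown here).

```python
import numpy as np

def imq_stein_kernel(x, y, score):
    """1-D Stein kernel built from the IMQ base kernel k(x,y) = (1+(x-y)^2)^(-1/2)."""
    u = x[:, None] - y[None, :]
    q = 1.0 + u ** 2
    k = q ** -0.5
    dkx = -u * q ** -1.5                          # dk/dx
    dky = u * q ** -1.5                           # dk/dy
    dkxy = q ** -1.5 - 3.0 * u ** 2 * q ** -2.5   # d^2 k / dx dy
    sx, sy = score(x)[:, None], score(y)[None, :]
    return sx * sy * k + sx * dky + sy * dkx + dkxy

def ksd(x, score):
    """V-statistic kernel Stein discrepancy of the sample x."""
    h = imq_stein_kernel(x, x, score)
    return np.sqrt(max(h.mean(), 0.0))

score = lambda x: -x                # score of the standard normal target
rng = np.random.default_rng(0)
z = rng.normal(size=300)
good, bad = ksd(z, score), ksd(z + 2.0, score)   # on-target vs. shifted sample
```

The discrepancy is near zero for samples from the target and clearly larger for the shifted sample, which is the convergence-monitoring behavior the snippet describes.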

Causal Feature Selection via Orthogonal Search

no code implementations 6 Jul 2020 Anant Raj, Stefan Bauer, Ashkan Soleymani, Michel Besserve, Bernhard Schölkopf

The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.

Causal Discovery • Feature Selection

Explicit Regularization of Stochastic Gradient Methods through Duality

no code implementations 30 Mar 2020 Anant Raj, Francis Bach

For accelerated coordinate descent, we obtain a new algorithm that has better convergence properties than existing stochastic gradient methods in the interpolating regime.

Importance Sampling via Local Sensitivity

no code implementations 4 Nov 2019 Anant Raj, Cameron Musco, Lester Mackey

Unfortunately, sensitivity sampling is difficult to apply since (1) it is unclear how to efficiently compute the sensitivity scores and (2) the sample size required is often impractically large.
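For least-squares problems, sensitivity scores reduce (up to scaling) to statistical leverage scores, which makes the sampling scheme concrete. The sketch below is an illustrative leverage-score subsampling baseline, not the paper's local-sensitivity method: sample rows proportionally to leverage, reweight for unbiasedness, and solve on the subsample.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Leverage score of row i = squared row norm of U from a thin SVD of A.
U, _, _ = np.linalg.svd(A, full_matrices=False)
lev = (U ** 2).sum(axis=1)
p = lev / lev.sum()

m = 200                                  # subsample size (illustrative choice)
idx = rng.choice(n, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])            # reweight so the sketch is unbiased
x_sub, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With a tenth of the rows, the subsampled solution lands close to the full least-squares solution.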

Dual Instrumental Variable Regression

1 code implementation NeurIPS 2020 Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj

We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation.
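The "traditional two-stage method" that DualIV simplifies is two-stage least squares (2SLS); a minimal linear 2SLS sketch on synthetic confounded data (not DualIV itself) shows why instrumenting matters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = z + u + 0.1 * rng.normal(size=n)          # endogenous regressor
y = 2.0 * x + u + 0.1 * rng.normal(size=n)    # true causal effect = 2

# Stage 1: regress x on z.  Stage 2: regress y on the fitted x.
x_hat = z * (z @ x) / (z @ z)
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
beta_ols = (x @ y) / (x @ x)                  # biased by the confounder u
```

OLS is pulled away from the true effect by the confounder, while the instrumented estimate recovers it; DualIV targets the same goal for non-linear models via a dual formulation.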

Orthogonal Structure Search for Efficient Causal Discovery from Observational Data

no code implementations 6 Mar 2019 Anant Raj, Luigi Gresele, Michel Besserve, Bernhard Schölkopf, Stefan Bauer

The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.

Causal Discovery

A Differentially Private Kernel Two-Sample Test

1 code implementation 1 Aug 2018 Anant Raj, Ho Chung Leon Law, Dino Sejdinovic, Mijung Park

As a result, we obtain a simple chi-squared test whose test statistic depends on the mean and covariance of the empirical differences between the samples, which we perturb to provide a privacy guarantee.

Two-sample testing
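The shape of such a statistic can be sketched with random Fourier features: take per-pair feature differences, perturb their mean and covariance with noise, and form a chi-squared-style quadratic form. The feature count and noise scale below are illustrative placeholders; the paper calibrates the perturbation to give a formal differential-privacy guarantee, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(x @ W.T + b)

def noisy_stat(x, y, noise_scale=0.02):
    F = 10
    W = rng.normal(size=(F, x.shape[1]))
    b = rng.uniform(0, 2 * np.pi, size=F)
    diff = rff(x, W, b) - rff(y, W, b)                          # empirical differences
    mu = diff.mean(axis=0) + noise_scale * rng.normal(size=F)   # perturbed mean
    cov = np.cov(diff, rowvar=False) / len(diff) + noise_scale * np.eye(F)
    return mu @ np.linalg.solve(cov, mu)                        # chi-squared-style statistic

x = rng.normal(size=(1000, 1))
stat_same = noisy_stat(x, rng.normal(size=(1000, 1)))
stat_diff = noisy_stat(x, rng.normal(size=(1000, 1)) + 3.0)
```

Even with the perturbation, the statistic separates equal from shifted distributions.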

Sobolev Descent

no code implementations 30 May 2018 Youssef Mroueh, Tom Sercu, Anant Raj

We study a simplification of GAN training: the problem of transporting particles from a source to a target distribution.

k-SVRG: Variance Reduction for Large Scale Optimization

no code implementations 2 May 2018 Anant Raj, Sebastian U. Stich

Variance-reduced stochastic gradient descent (SGD) methods converge significantly faster than their vanilla SGD counterparts.
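The variance-reduction idea can be seen in a textbook SVRG loop on least squares (k-SVRG itself interpolates between SVRG and SAGA with bounded memory; this is just the vanilla variant for illustration, with step size and epoch count chosen for the toy problem):

```python
import numpy as np

def svrg_least_squares(A, b, eta=0.02, epochs=20):
    """Vanilla SVRG on f(x) = (1/2n)||Ax - b||^2 (illustrative sketch)."""
    n, d = A.shape
    x = np.zeros(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snap = x.copy()
        full = A.T @ (A @ snap - b) / n               # full gradient at the snapshot
        for i in rng.integers(0, n, size=n):
            gi = A[i] * (A[i] @ x - b[i])             # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ snap - b[i])     # same sample at the snapshot
            x -= eta * (gi - gi_snap + full)          # variance-reduced step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
b = A @ np.ones(5)                                    # noiseless planted solution
x = svrg_least_squares(A, b)
```

The control variate `gi - gi_snap + full` is unbiased, and its variance vanishes as the iterate approaches the snapshot, which is what buys the faster rate.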

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
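The connection is easy to see in code: matching pursuit greedily picks the dictionary atom most correlated with the residual, and with the coordinate basis as the dictionary the same loop is steepest coordinate descent on least squares. A minimal sketch (illustrative, not the paper's affine-invariant analysis):

```python
import numpy as np

def matching_pursuit(D, y, steps=50):
    """Greedy matching pursuit over a dictionary D (columns = atoms)."""
    r = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(steps):
        scores = D.T @ r
        j = np.argmax(np.abs(scores))            # most correlated atom
        step = scores[j] / (D[:, j] @ D[:, j])
        coef[j] += step
        r -= step * D[:, j]                      # update the residual
    return coef, r

rng = np.random.default_rng(0)
D = rng.normal(size=(30, 60))
y = D[:, :3] @ np.array([1.0, -2.0, 0.5])        # signal sparse in the dictionary
coef, r = matching_pursuit(D, y)
```

Each step removes the residual's projection onto the chosen atom, so the residual norm decreases monotonically.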

Sobolev GAN

1 code implementation ICLR 2018 Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, Yu Cheng

We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional cumulative distribution functions (CDFs) of each coordinate on a leave-one-out basis.

Text Generation

Safe Adaptive Importance Sampling

no code implementations NeurIPS 2017 Sebastian U. Stich, Anant Raj, Martin Jaggi

Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications.

Approximate Steepest Coordinate Descent

no code implementations ICML 2017 Sebastian U. Stich, Anant Raj, Martin Jaggi

We propose a new selection rule for the coordinate selection in coordinate descent methods for huge-scale optimization.
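The rule being approximated is the steepest (Gauss-Southwell) selection: update the coordinate with the largest gradient magnitude. The sketch below computes that argmax exactly on a small quadratic for reference; the paper's contribution is making this selection cheap at scale, which is not shown here.

```python
import numpy as np

def gauss_southwell_cd(Q, c, steps=2000):
    """Exact steepest coordinate descent on f(x) = 0.5 x'Qx + c'x."""
    x = np.zeros_like(c)
    g = c.copy()                       # gradient Qx + c at x = 0
    for _ in range(steps):
        j = np.argmax(np.abs(g))       # steepest coordinate (Gauss-Southwell rule)
        step = g[j] / Q[j, j]          # exact minimization along coordinate j
        x[j] -= step
        g -= step * Q[:, j]            # rank-one gradient update
    return x

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
Q = M @ M.T + 5.0 * np.eye(8)          # well-conditioned positive definite
c = rng.normal(size=8)
x = gauss_southwell_cd(Q, c)
```

Maintaining the gradient incrementally keeps each iteration at O(d) cost, but the argmax itself is what becomes the bottleneck in huge dimensions.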

Local Group Invariant Representations via Orbit Embeddings

no code implementations 6 Dec 2016 Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf

We consider transformations that form a \emph{group} and propose an approach based on kernel methods to derive local group invariant representations.

Rotated MNIST
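A minimal form of an orbit embedding is to average a feature map over the orbit of the input under the group action; the sketch below uses cyclic shifts and a generic random-feature map as a simplified stand-in for the kernel construction in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 8))
b = rng.uniform(0, 2 * np.pi, size=20)

def features(v):
    """A generic nonlinear feature map (illustrative choice)."""
    return np.cos(W @ v + b)

def orbit_embedding(v):
    """Average the feature map over the orbit of v under cyclic shifts."""
    orbit = [np.roll(v, g) for g in range(len(v))]
    return np.mean([features(u) for u in orbit], axis=0)

x = rng.normal(size=8)
emb = orbit_embedding(x)
emb_shifted = orbit_embedding(np.roll(x, 3))   # same orbit => same embedding
```

Because shifting the input only permutes its orbit, the averaged representation is exactly invariant to the group action.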

Screening Rules for Convex Problems

no code implementations 23 Sep 2016 Anant Raj, Jakob Olbrich, Bernd Gärtner, Bernhard Schölkopf, Martin Jaggi

We propose a new framework for deriving screening rules for convex optimization problems.
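A classic instance of a screening rule is the SAFE test for the Lasso (El Ghaoui et al.), which discards feature j whenever |x_j'y| falls below a threshold depending on the regularization level; features it discards provably have zero weight at the optimum. The sketch below applies that earlier rule (not this paper's framework) and checks it against a basic ISTA solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=0)                 # unit-norm columns
y = X[:, 0] - X[:, 1] + 0.05 * rng.normal(size=n)

corr = np.abs(X.T @ y)
lam_max = corr.max()
lam = 0.8 * lam_max
# SAFE test: discard j if |x_j' y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max
thresh = lam - np.linalg.norm(y) * (lam_max - lam) / lam_max
screened = corr < thresh

# Verify with a basic ISTA Lasso solve: screened features should get zero weight.
L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the smooth part
w = np.zeros(d)
for _ in range(5000):
    z = w - X.T @ (X @ w - y) / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

Near `lam_max`, most irrelevant features are eliminated before the solver ever runs, which is the speedup screening buys.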

Unsupervised Domain Adaptation in the Wild: Dealing with Asymmetric Label Sets

no code implementations 26 Mar 2016 Ayush Mittal, Anant Raj, Vinay P. Namboodiri, Tinne Tuytelaars

Most methods for unsupervised domain adaptation proposed in the literature to date assume that the set of classes present in the target domain is identical to the set of classes present in the source domain.

General Classification • Unsupervised Domain Adaptation

Subspace Alignment Based Domain Adaptation for RCNN Detector

no code implementations 20 Jul 2015 Anant Raj, Vinay P. Namboodiri, Tinne Tuytelaars

In this paper, we propose subspace-alignment-based domain adaptation of the state-of-the-art RCNN-based object detector.

Object Classification • Object Detection +1
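The underlying subspace alignment step (in the spirit of Fernando et al., which this line of work builds on) has a two-line closed form: learn the matrix M mapping the source PCA basis onto the target one, then project source features through the aligned basis.

```python
import numpy as np

def subspace_alignment(Xs, Xt, dim=10):
    """Align the source PCA subspace to the target one and project features."""
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:dim].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:dim].T
    M = Ps.T @ Pt              # closed-form minimizer of ||Ps M - Pt||_F
    return Xs @ Ps @ M, Xt @ Pt, Ps, Pt, M

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 30))
Xt = Xs @ np.linalg.qr(rng.normal(size=(30, 30)))[0]   # rotated copy of the source
Zs, Zt, Ps, Pt, M = subspace_alignment(Xs, Xt)
```

Since Ps has orthonormal columns, M = Ps'Pt solves the least-squares alignment exactly, so the aligned basis is never farther from the target basis than the unaligned one.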

Mind the Gap: Subspace based Hierarchical Domain Adaptation

no code implementations 16 Jan 2015 Anant Raj, Vinay P. Namboodiri, Tinne Tuytelaars

Domain adaptation techniques aim at adapting a classifier learnt on a source domain to work on the target domain.

Domain Adaptation

Scalable Kernel Methods via Doubly Stochastic Gradients

1 code implementation NeurIPS 2014 Bo Dai, Bo Xie, Niao He, YIngyu Liang, Anant Raj, Maria-Florina Balcan, Le Song

The general perception is that kernel methods are not scalable, and neural nets are the methods of choice for nonlinear learning problems.
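The scalability route the paper takes combines random features with stochastic gradients. The sketch below is a simplified fixed-feature version: random Fourier features drawn once, then plain SGD over data points. The paper's doubly stochastic method also samples the random features themselves on the fly, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, F = 1000, 3, 100
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Random Fourier features approximating a Gaussian kernel (fixed up front).
W = rng.normal(size=(F, d))
b = rng.uniform(0, 2 * np.pi, size=F)
Phi = np.sqrt(2.0 / F) * np.cos(X @ W.T + b)

theta = np.zeros(F)
eta = 0.05
for i in rng.integers(0, n, size=20000):        # SGD over randomly drawn data points
    theta -= eta * (Phi[i] @ theta - y[i]) * Phi[i]

mse = np.mean((Phi @ theta - y) ** 2)
```

Both sources of randomness keep per-step cost independent of n, which is what lets kernel machines scale to millions of points.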
