Search Results for author: Tejas Kulkarni

Found 13 papers, 4 papers with code

Practical Differentially Private Hyperparameter Tuning with Subsampling

no code implementations · NeurIPS 2023 · Antti Koskela, Tejas Kulkarni

Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via the hyperparameter values.

Locally Differentially Private Bayesian Inference

no code implementations · 27 Oct 2021 · Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela

In recent years, local differential privacy (LDP) has emerged as the technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trustworthy.

Tasks: Bayesian Inference, Privacy Preserving, +1
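The LDP setting above is commonly illustrated with randomized response for a single binary value: each user perturbs their own bit before it ever reaches the aggregator. A minimal sketch (the function names and binary-data setup are illustrative, not the paper's mechanism):

```python
import math
import random

def randomized_response(bit, epsilon):
    # Report the true bit with probability e^eps / (e^eps + 1),
    # otherwise flip it; this satisfies eps-local differential privacy
    # because no single report reveals the true bit with certainty.
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else not bit

def debias_mean(reports, epsilon):
    # The aggregator knows the flipping probability, so it can invert
    # the noise to get an unbiased estimate of the true fraction of 1s.
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p_true)) / (2.0 * p_true - 1.0)
```

The key point for the untrusted-aggregator setting is that privacy is enforced on each user's device; the aggregator only ever sees the noisy reports.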

Differentially Private Bayesian Inference for Generalized Linear Models

no code implementations · 1 Nov 2020 · Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela

Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in the data analyst's repertoire and are often applied to sensitive datasets.

Tasks: Bayesian Inference, regression

Private Protocols for U-Statistics in the Local Model and Beyond

no code implementations · 9 Oct 2019 · James Bell, Aurélien Bellet, Adrià Gascón, Tejas Kulkarni

In this paper, we study the problem of computing $U$-statistics of degree $2$, i.e., quantities that take the form of averages over pairs of data points, in the local model of differential privacy (LDP).

Tasks: Clustering, Metric Learning
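For context, a degree-2 U-statistic (without any privacy mechanism) is just an average of a kernel over all unordered pairs in the sample. A minimal non-private sketch (the kernel choice and names are illustrative, not the paper's protocol):

```python
from itertools import combinations

def u_statistic(data, kernel):
    # Degree-2 U-statistic: average of kernel(x_i, x_j) over all
    # unordered pairs drawn from the sample.
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# Example: the Gini mean difference, a classic degree-2 U-statistic
# with kernel |x - y|.
gini = u_statistic([1.0, 2.0, 4.0], lambda x, y: abs(x - y))  # 2.0
```

The difficulty the paper addresses is that each pair couples two users' data, so under LDP no single party can evaluate the kernel on raw values.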

Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning

no code implementations · 3 Dec 2018 · Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni

In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.

Unsupervised Control Through Non-Parametric Discriminative Rewards

no code implementations · ICLR 2019 · David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, Volodymyr Mnih

Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.

Tasks: Reinforcement Learning (RL)

Understanding Visual Concepts with Continuation Learning

no code implementations · 22 Feb 2016 · William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum

We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.

Tasks: Atari Games
