Search Results for author: Tejas Kulkarni

Found 12 papers, 4 papers with code

Locally Differentially Private Bayesian Inference

no code implementations • 27 Oct 2021 • Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela

In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trusted.

Bayesian Inference
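
As a concrete illustration of the LDP setting described above (this sketch is my own illustrative code, not the paper's method), here is the classic randomized-response mechanism: each user perturbs a private bit locally before reporting it, and the untrusted aggregator debiases the noisy reports to estimate the population mean.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, epsilon: float) -> float:
    """Debias the noisy reports to recover the true proportion of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)
```

Smaller epsilon flips bits more often, giving stronger privacy per user at the cost of a noisier aggregate estimate.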

Differentially Private Bayesian Inference for Generalized Linear Models

no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela

Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire and are often applied to sensitive datasets.

Bayesian Inference
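
For readers unfamiliar with GLMs, here is a minimal, non-private logistic-regression fit on 1-D inputs via gradient ascent — the kind of model whose inference the paper makes differentially private. This sketch is purely illustrative and contains no privacy mechanism.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Maximize the log-likelihood of a 1-D logistic regression by gradient ascent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the average log-likelihood w.r.t. w and b.
        gw = sum((y - sigmoid(w * x + b)) * x for x, y in zip(xs, ys)) / n
        gb = sum((y - sigmoid(w * x + b)) for x, y in zip(xs, ys)) / n
        w += lr * gw
        b += lr * gb
    return w, b
```

A DP treatment would perturb or constrain exactly these gradient/likelihood computations, since they touch the sensitive data directly.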

Private Protocols for U-Statistics in the Local Model and Beyond

no code implementations • 9 Oct 2019 • James Bell, Aurélien Bellet, Adrià Gascón, Tejas Kulkarni

In this paper, we study the problem of computing $U$-statistics of degree $2$, i.e., quantities that take the form of averages over pairs of data points, in the local model of differential privacy (LDP).

Metric Learning
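
A degree-2 U-statistic as described in the abstract — the average of a symmetric kernel over all unordered pairs of data points — can be sketched as follows (illustrative code, not the paper's private protocol):

```python
from itertools import combinations

def u_statistic_deg2(data, kernel):
    """Average a symmetric kernel h(x, y) over all unordered pairs of points."""
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# Example kernel: Gini mean difference, h(x, y) = |x - y|.
gmd = u_statistic_deg2([1.0, 2.0, 4.0], lambda x, y: abs(x - y))  # → 2.0
```

With the kernel h(x, y) = (x - y)² / 2, the same estimator recovers the sample variance — the paper's challenge is that each pair couples two users' private data, which is awkward in the local model.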

Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning

no code implementations • 3 Dec 2018 • Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni

In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.

Unsupervised Control Through Non-Parametric Discriminative Rewards

no code implementations • ICLR 2019 • David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, Volodymyr Mnih

Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.

Reinforcement Learning

Understanding Visual Concepts with Continuation Learning

no code implementations • 22 Feb 2016 • William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum

We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.

Atari Games, Frame

Language Understanding for Text-based Games Using Deep Reinforcement Learning

3 code implementations • EMNLP 2015 • Karthik Narasimhan, Tejas Kulkarni, Regina Barzilay

We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations.

Reinforcement Learning, Text-based Games
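
The bag-of-words and bag-of-bigrams baselines mentioned in the abstract turn a game's text observation into a fixed-length count vector. A minimal sketch (the vocabulary and example text below are made up for illustration):

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(text, vocab):
    """Count vector over a fixed vocabulary; out-of-vocabulary words are dropped."""
    counts = Counter(tokenize(text))
    return [counts[w] for w in vocab]

def bag_of_bigrams(text, bigram_vocab):
    """Same idea over adjacent word pairs, capturing a little word order."""
    toks = tokenize(text)
    counts = Counter(zip(toks, toks[1:]))
    return [counts[b] for b in bigram_vocab]

vocab = ["you", "see", "a", "door", "key"]
state = bag_of_words("You see a door. A rusty key!", vocab)  # → [1, 1, 2, 1, 1]
```

These fixed representations ignore most word order, which is exactly what the paper's learned deep representations are compared against.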
