no code implementations • NeurIPS 2023 • Antti Koskela, Tejas Kulkarni
Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via the hyperparameter values.
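One standard remedy for this kind of leakage is to select among candidate hyperparameters with the exponential mechanism. The sketch below is illustrative background, not necessarily the mechanism analyzed in this paper; `candidates`, `utility`, `sensitivity`, and `epsilon` are assumed inputs.

```python
import numpy as np

def exponential_mechanism(candidates, utility, sensitivity, epsilon, rng=None):
    """Pick one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)) -- an epsilon-DP selection."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.array([utility(c) for c in candidates], dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    logits = epsilon * scores / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Hypothetical usage: pick a learning rate whose validation accuracy
# (computed on sensitive data, n records) has sensitivity 1/n:
# best_lr = exponential_mechanism([1e-3, 1e-2, 1e-1], validation_accuracy,
#                                 sensitivity=1.0 / n, epsilon=1.0)
```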
no code implementations • 27 Oct 2021 • Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela
In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trusted.
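The canonical LDP primitive is randomized response, where each user perturbs their own data before it leaves their device. The sketch below is general background, not this paper's protocol.

```python
import numpy as np

def randomized_response(bit, epsilon, rng=None):
    """epsilon-LDP release of a single private bit: report truthfully with
    probability e^eps / (e^eps + 1), otherwise flip the bit."""
    rng = np.random.default_rng() if rng is None else rng
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the bits from noisy reports."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)
```

Since E[report] = (1 - p) + bit · (2p - 1), the aggregator can invert the perturbation in expectation without ever seeing a raw bit.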
no code implementations • 3 Nov 2020 • Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin Riedmiller
Furthermore, the value of each representation is evaluated in terms of three properties: dimensionality, observability and disentanglement.
no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire and are often applied to sensitive datasets.
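For context, one common approach to training GLMs privately is DP-SGD: per-example gradient clipping plus Gaussian noise. The sketch below is an illustration under assumed hyperparameters, not necessarily this paper's method; in practice a privacy accountant would set the noise multiplier.

```python
import numpy as np

def dp_sgd_logistic(X, y, epochs=10, lr=0.1, clip=1.0,
                    noise_multiplier=1.0, batch_size=64, rng=None):
    """Sketch of DP-SGD for logistic regression: clip each example's
    gradient to L2 norm <= clip, then add Gaussian noise with
    sigma = noise_multiplier * clip to the summed gradient."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.choice(n, size=batch_size, replace=False)
        # Per-example gradients of the logistic loss: (sigmoid(x.w) - y) x.
        z = X[idx] @ w
        grads = (1.0 / (1.0 + np.exp(-z)) - y[idx])[:, None] * X[idx]
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)     # clip per example
        noise = rng.normal(0.0, noise_multiplier * clip, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / batch_size
    return w
```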
no code implementations • 9 Oct 2019 • James Bell, Aurélien Bellet, Adrià Gascón, Tejas Kulkarni
In this paper, we study the problem of computing $U$-statistics of degree $2$, i.e., quantities expressed as averages over pairs of data points, in the local model of differential privacy (LDP).
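Concretely, for a sample $x_1, \dots, x_n$ and a symmetric kernel $h$, the degree-$2$ U-statistic is

$$U_n = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(x_i, x_j),$$

which covers, for example, the sample variance ($h(x_i, x_j) = (x_i - x_j)^2 / 2$) and the Gini mean difference ($h(x_i, x_j) = |x_i - x_j|$).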
1 code implementation • 2 Oct 2019 • John F. J. Mellor, Eunbyung Park, Yaroslav Ganin, Igor Babuschkin, Tejas Kulkarni, Dan Rosenbaum, Andy Ballard, Theophane Weber, Oriol Vinyals, S. M. Ali Eslami
We investigate using reinforcement learning agents as generative models of images (extending arXiv:1804.01118).
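The basic loop in this line of work has an agent paint onto a canvas and receive a GAN-style reward from a discriminator. A minimal sketch of that loop is below; `policy` and `discriminator` are hypothetical callables, and the stroke rasterizer is deliberately crude.

```python
import numpy as np

def rollout(policy, discriminator, steps=20, size=64):
    """RL-as-generator sketch: the agent emits stroke actions that edit a
    canvas; the discriminator's realism score on the result is the reward."""
    canvas = np.ones((size, size))           # blank white canvas
    for _ in range(steps):
        x0, y0, x1, y1 = policy(canvas)      # stroke endpoints in [0, size)
        # Rasterize a crude straight stroke between the endpoints.
        for t in np.linspace(0.0, 1.0, num=2 * size):
            r = int((1.0 - t) * y0 + t * y1) % size
            c = int((1.0 - t) * x0 + t * x1) % size
            canvas[r, c] = 0.0
    return canvas, discriminator(canvas)     # final image and its reward
```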
6 code implementations • NeurIPS 2019 • Tejas Kulkarni, Ankush Gupta, Catalin Ionescu, Sebastian Borgeaud, Malcolm Reynolds, Andrew Zisserman, Volodymyr Mnih
In this work we aim to learn object representations that are useful for control and reinforcement learning (RL).
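One widely used ingredient for object-centric representations of this kind is a spatial-softmax keypoint bottleneck, which turns convolutional feature maps into a small set of 2D coordinates. The numpy sketch below illustrates only that ingredient, not the paper's full architecture.

```python
import numpy as np

def spatial_softmax_keypoints(feature_maps):
    """Map a (K, H, W) stack of feature maps to K (x, y) keypoints: softmax
    each map over space, then take expected coordinates in [-1, 1]."""
    k, h, w = feature_maps.shape
    flat = feature_maps.reshape(k, -1)
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    probs = probs.reshape(k, h, w)
    ys, xs = np.linspace(-1, 1, h), np.linspace(-1, 1, w)
    x = (probs.sum(axis=1) * xs).sum(axis=1)   # marginal over rows -> E[x]
    y = (probs.sum(axis=2) * ys).sum(axis=1)   # marginal over cols -> E[y]
    return np.stack([x, y], axis=1)            # (K, 2) keypoints
```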
no code implementations • ICLR 2019 • Catalin Ionescu, Tejas Kulkarni, Aäron van den Oord, Andriy Mnih, Vlad Mnih
Exploration in environments with sparse rewards is a key challenge for reinforcement learning.
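As context (not this paper's method), the simplest family of remedies augments the sparse extrinsic reward with an intrinsic bonus that decays with visitation counts:

```python
from collections import defaultdict
import math

class CountBonus:
    """Adds an intrinsic reward beta / sqrt(N(s)) for visiting state s,
    steering the agent toward rarely seen states."""
    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)

    def __call__(self, state, extrinsic_reward):
        self.counts[state] += 1
        return extrinsic_reward + self.beta / math.sqrt(self.counts[state])
```

Count-based bonuses only apply directly to small discrete state spaces; richer environments need learned novelty estimates instead.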
no code implementations • 3 Dec 2018 • Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni
In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.
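As a toy illustration of the setting (the DSL here is hypothetical, not the paper's), a scene program can be a sequence of placement operations executed into a symbolic grid:

```python
# Hypothetical scene DSL: a program is a list of placement ops,
# executed left to right into a symbolic grid (None = empty cell).
def execute(program, grid_size=8):
    """Run a scene program; each op places (shape, color) at cell (x, y)."""
    scene = [[None] * grid_size for _ in range(grid_size)]
    for shape, color, x, y in program:
        scene[y][x] = (shape, color)
    return scene

# e.g. "put a red cube left of a blue sphere" might map to:
# execute([("cube", "red", 2, 3), ("sphere", "blue", 5, 3)])
```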
no code implementations • ICLR 2019 • David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, Volodymyr Mnih
Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.
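One self-supervised recipe in this space scores the agent against goals it proposes for itself. The sketch below (with a hypothetical learned encoder `embed`) shows a cosine-similarity goal reward in that spirit; it is not necessarily the exact objective used here.

```python
import numpy as np

def goal_reward(embed, achieved_obs, goal_obs):
    """Self-supervised reward: cosine similarity between embeddings of the
    achieved observation and a self-proposed goal observation, so no
    hand-crafted external reward is needed."""
    u, v = embed(achieved_obs), embed(goal_obs)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
```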
2 code implementations • ICML 2018 • Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, S. M. Ali Eslami, Oriol Vinyals
Advances in deep generative networks have led to impressive results in recent years.
no code implementations • 22 Feb 2016 • William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.
3 code implementations • EMNLP 2015 • Karthik Narasimhan, Tejas Kulkarni, Regina Barzilay
We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations.
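The bag-of-words baseline mentioned here reduces a textual game state to term counts over a fixed vocabulary, discarding word order; a minimal sketch, where `vocab` is an assumed input built from the training text:

```python
from collections import Counter

def bag_of_words(state_text, vocab):
    """Represent a textual game state as term counts over a fixed
    vocabulary -- the baseline representation mentioned above."""
    counts = Counter(state_text.lower().split())
    return [counts[w] for w in vocab]

# vocab = ["door", "key", "east", ...]   # built from the training text
# bag_of_words("You see a locked door to the east.", vocab)
```

The bag-of-bigrams variant works the same way over adjacent word pairs, recovering a little of the word order that plain bag-of-words throws away.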