Search Results for author: Nikita Dhawan

Found 7 papers, 2 papers with code

Leveraging Function Space Aggregation for Federated Learning at Scale

no code implementations • 17 Nov 2023 • Nikita Dhawan, Nicole Mitchell, Zachary Charles, Zachary Garrett, Gintare Karolina Dziugaite

Many federated learning algorithms, including the canonical Federated Averaging (FedAvg), take a direct (possibly weighted) average of the client parameter updates, motivated by results in distributed optimization.
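As a point of reference for the averaging step mentioned above, here is a minimal, self-contained sketch of a FedAvg-style weighted average of client updates in Python with NumPy. The function name, array shapes, and client weights are illustrative assumptions; this is the generic parameter-space average the sentence describes, not the function-space aggregation method proposed in this paper.

import numpy as np

def fedavg_aggregate(client_updates, client_weights):
    # Weighted average of flattened client parameter updates (FedAvg-style step).
    weights = np.asarray(client_weights, dtype=np.float64)
    weights = weights / weights.sum()      # normalize to a convex combination
    stacked = np.stack(client_updates)     # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy usage: three clients, weighted by (hypothetical) local dataset sizes.
updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
sizes = [100, 50, 50]
print(fedavg_aggregate(updates, sizes))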

Distributed Optimization, Federated Learning

Efficient Parametric Approximations of Neural Network Function Space Distance

no code implementations • 7 Feb 2023 • Nikita Dhawan, Sicong Huang, Juhan Bae, Roger Grosse

It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset.

Continual Learning

Dataset Inference for Self-Supervised Models

no code implementations • 16 Sep 2022 • Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot

We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing.

Attribute, Density Estimation

On the Difficulty of Defending Self-Supervised Learning against Model Extraction

1 code implementation • 16 May 2022 • Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot

We construct several novel attacks and find that approaches that train directly on a victim's stolen representations are query efficient and enable high accuracy for downstream models.
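To make the "train directly on a victim's stolen representations" idea concrete, here is a minimal sketch of such an extraction loop in Python with PyTorch, under illustrative assumptions: the victim and student are stand-in linear encoders, queries are random tensors, and the student simply regresses onto the representations the victim returns. The paper's actual attacks and evaluation are more involved.

import torch
import torch.nn as nn

# Stand-in encoders; in a real attack the victim is only reachable via a query interface.
victim = nn.Sequential(nn.Flatten(), nn.Linear(784, 128)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 128))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    queries = torch.randn(64, 1, 28, 28)        # attacker-chosen query batch
    with torch.no_grad():
        stolen = victim(queries)                # representations returned by the victim
    loss = loss_fn(student(queries), stolen)    # regress the student onto the stolen representations
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()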

Model extraction, Self-Supervised Learning

Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift

no code implementations • 28 Sep 2020 • Marvin Mengxin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.

Image Classification, Meta-Learning

Adaptive Risk Minimization: Learning to Adapt to Domain Shift

3 code implementations • NeurIPS 2021 • Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.

BIG-bench Machine Learning, Domain Generalization +2

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

no code implementations • 10 Dec 2019 • Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine

In this paper, we study how these challenges can be alleviated with an automated robotic learning framework, in which multi-stage tasks are defined simply by providing videos of a human demonstrator and then learned autonomously by the robot from raw image observations.

Reinforcement Learning (RL), Translation
