Search Results for author: Kristian Georgiev

Found 6 papers, 5 papers with code

Rethinking Backdoor Attacks

no code implementations • 19 Jul 2023 • Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry

In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.

Backdoor Attack
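
To make the described attack concrete, the sketch below stamps a small trigger patch onto a fraction of training images and relabels them with an attacker-chosen class. This is a minimal illustration assuming grayscale arrays of shape (N, H, W) with values in [0, 1], not the paper's exact construction:

```python
# Hypothetical backdoor poisoning sketch (not the paper's construction).
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_frac=0.01, seed=0):
    """Apply a white 3x3 corner trigger to a random subset of (N, H, W)
    images and relabel that subset to `target_label`."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger: white patch, bottom-right corner
    labels[idx] = target_label    # attacker-chosen target class
    return images, labels
```

A model trained on the poisoned set then tends to output `target_label` whenever the trigger appears at test time, which is the manipulation the abstract refers to.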

TRAK: Attributing Model Behavior at Scale

2 code implementations • 24 Mar 2023 • Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, Aleksander Madry

That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets.
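
The tractable end of this tradeoff can be illustrated with gradient-feature attribution in the spirit of TRAK: project per-example gradients to a low dimension and score train/test pairs by kernel-corrected inner products. The sketch below is a simplified rendering of that idea, not the official `trak` package API:

```python
# Simplified gradient-projection attribution sketch (illustrative only).
import numpy as np

def attribution_scores(train_grads, test_grads, proj_dim=512, seed=0):
    """train_grads: (n_train, p) and test_grads: (n_test, p) flattened
    per-example gradients; returns an (n_test, n_train) score matrix."""
    rng = np.random.default_rng(seed)
    p = train_grads.shape[1]
    P = rng.normal(size=(p, proj_dim)) / np.sqrt(proj_dim)  # shared projection
    Phi, phi = train_grads @ P, test_grads @ P
    K = np.linalg.pinv(Phi.T @ Phi)      # kernel-style correction term
    return phi @ K @ Phi.T               # influence of each train example
```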

Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation

1 code implementation • 1 Nov 2022 • Kristian Georgiev, Samuel B. Hopkins

We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted.

Computational Efficiency • PAC learning
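
A toy numerical illustration of the flavor of this connection (my own construction, not the paper's mechanism or proof): a differentially private mean estimator that clips samples and adds calibrated Gaussian noise loses accuracy only in proportion to the corruption fraction when a constant fraction of samples is adversarially corrupted.

```python
# Toy demo: clipping + Gaussian noise gives privacy and, incidentally,
# robustness to a constant fraction of corrupted samples.
import numpy as np

def private_mean(x, clip=3.0, eps=1.0, delta=1e-5, seed=0):
    """(eps, delta)-DP mean estimate via clipping and the Gaussian mechanism."""
    rng = np.random.default_rng(seed)
    x = np.clip(x, -clip, clip)
    sensitivity = 2 * clip / len(x)   # effect of changing one sample
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return x.mean() + rng.normal(scale=sigma)

rng = np.random.default_rng(1)
clean = rng.normal(loc=0.5, size=10_000)
corrupted = clean.copy()
corrupted[:500] = 1e6                 # 5% adversarial outliers
print(private_mean(clean), private_mean(corrupted))
# The corrupted estimate shifts by at most ~ corruption fraction x clip range,
# instead of being destroyed as the unclipped sample mean would be.
```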

Implicit Bias of Linear Equivariant Networks

1 code implementation • 12 Oct 2021 • Hannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani

Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) which excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures.

Binary Classification
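
The simplest instance of the symmetry encoding mentioned above is shift equivariance: circular convolution commutes with circular shifts, the cyclic-group special case of what G-CNNs generalize. A small numerical check (illustrative, not the paper's networks):

```python
# Equivariance of circular convolution under cyclic shifts.
import numpy as np

def circ_conv(x, w):
    """Circular convolution y[i] = sum_j x[(i - j) mod n] * w[j]."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * w[j] for j in range(n))
                     for i in range(n)])

x = np.arange(8.0)
w = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
shift = lambda v, s: np.roll(v, s)

# conv(shift(x)) == shift(conv(x)): shifting the input shifts the output.
assert np.allclose(circ_conv(shift(x, 3), w), shift(circ_conv(x, w), 3))
```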

On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning

1 code implementation • NeurIPS 2021 • Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar

We consider Model-Agnostic Meta-Learning (MAML) methods for Reinforcement Learning (RL) problems, where the goal is to use data from several tasks, each represented by a Markov Decision Process (MDP), to find a policy that can be adapted to the realized MDP by one step of stochastic policy gradient.

Meta-Learning • Meta Reinforcement Learning • +3
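
A runnable toy version of the one-step adaptation described in the abstract, substituting two-armed bandits for full MDPs and using a softmax policy with a REINFORCE gradient estimate (a simplification for illustration, not the paper's debiased algorithm):

```python
# Toy MAML-style adaptation: one stochastic policy-gradient step per task.
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def policy_gradient(theta, arm_means, n_samples=256):
    """REINFORCE estimate of grad E[reward] for a softmax bandit policy."""
    probs = softmax(theta)
    actions = rng.choice(len(theta), size=n_samples, p=probs)
    rewards = arm_means[actions] + rng.normal(scale=0.1, size=n_samples)
    grad = np.zeros_like(theta)
    for a, r in zip(actions, rewards):
        score = -probs.copy()
        score[a] += 1.0               # grad of log pi(a | theta)
        grad += r * score
    return grad / n_samples

def adapt_one_step(theta, arm_means, alpha=1.0):
    """One step of stochastic policy gradient on the realized task."""
    return theta + alpha * policy_gradient(theta, arm_means)

tasks = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # two bandit "MDPs"
theta = np.zeros(2)                                   # shared meta-parameters
adapted = [adapt_one_step(theta, t) for t in tasks]   # task-specific policies
```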