Search Results for author: Gauri Jagatap

Found 9 papers, 2 papers with code

Can deepfakes be created by novice users?

no code implementations · 28 Apr 2023 · Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt

We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and cannot completely fool either detection software or human examiners.

DeepFake Detection · Face Swapping

Adversarial Token Attacks on Vision Transformers

no code implementations · 8 Oct 2021 · Ameya Joshi, Gauri Jagatap, Chinmay Hegde

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
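As a toy illustration of the patch-token pipeline this snippet refers to, the sketch below (hypothetical sizes: a 224×224 RGB image with 16×16 patches, the usual ViT-style choice) splits an image into the flattened tokens that self-attention then operates on:

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an (H, W, C) image into flattened non-overlapping patch tokens."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    tokens = [img[i:i + patch, j:j + patch, :].reshape(-1)
              for i in range(0, H, patch)
              for j in range(0, W, patch)]
    return np.stack(tokens)

img = np.random.rand(224, 224, 3)   # hypothetical input size
tokens = image_to_patch_tokens(img)
print(tokens.shape)                 # (196, 768): 14*14 tokens of dim 16*16*3
```

Self-attention then mixes all 196 tokens globally at every layer, whereas a convolution only mixes spatially adjacent pixels — which is the contrast the abstract draws.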

Provable Compressed Sensing with Generative Priors via Langevin Dynamics

no code implementations · 25 Feb 2021 · Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde

Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.

Retrieval · Super-Resolution
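The Langevin-dynamics idea in the title can be sketched in a toy setting. The snippet below is not the paper's algorithm: it assumes a linear stand-in generator G(z) = Bz (a real deep generator would replace it) and runs unadjusted Langevin dynamics on the latent code z to sample from the posterior given compressed measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5           # signal dim, measurements, latent dim (toy choices)
B = rng.standard_normal((n, k)) / np.sqrt(k)   # linear stand-in "generator" G(z) = B z
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
z_true = rng.standard_normal(k)
y = A @ B @ z_true             # noiseless compressed measurements of x = G(z_true)

sigma2, eta, T = 0.1, 1e-4, 3000   # noise level, step size, iterations (assumed)
z = np.zeros(k)
for _ in range(T):
    # gradient of log p(z | y) under a Gaussian likelihood and standard normal prior
    grad = -(B.T @ (A.T @ (A @ (B @ z) - y))) / sigma2 - z
    z = z + eta * grad + np.sqrt(2 * eta) * rng.standard_normal(k)

err = np.linalg.norm(B @ z - B @ z_true) / np.linalg.norm(B @ z_true)
print(f"relative reconstruction error: {err:.2f}")
```

The iterates fluctuate around the posterior mean rather than converging to a point — that stochasticity is what distinguishes Langevin sampling from plain gradient descent on the latent.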

Phase Retrieval using Untrained Neural Network Priors

no code implementations · NeurIPS 2019 Workshop on Solving Inverse Problems with Deep Networks · Gauri Jagatap, Chinmay Hegde

Untrained deep neural networks as image priors have been recently introduced for linear inverse imaging problems such as denoising, super-resolution, inpainting and compressive sensing with promising performance gains over hand-crafted image priors such as sparsity.

Compressive Sensing · Denoising · +2
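A minimal sketch of the untrained-prior idea, under assumed toy dimensions: a small random-weight ReLU network G_w(z) with a fixed input z is fitted to compressive measurements by gradient descent on its weights, with no training data. Recovery quality in practice depends on architecture and early stopping, which this sketch omits; it only shows the optimization loop:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, h = 64, 32, 128          # signal dim, measurements, hidden width (toy choices)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.sin(np.linspace(0, 4 * np.pi, n))   # smooth test signal
y = A @ x_true                 # compressive measurements

z = rng.standard_normal(16)    # fixed random input; only the weights are trained
W1 = rng.standard_normal((h, 16)) * 0.1
W2 = rng.standard_normal((n, h)) * 0.1

lr = 0.002
for _ in range(6000):
    pre = W1 @ z
    hid = np.maximum(pre, 0)   # ReLU
    x_hat = W2 @ hid           # G_w(z)
    r = A.T @ (A @ x_hat - y)  # gradient of 0.5 * ||A G_w(z) - y||^2 w.r.t. x_hat
    g2 = np.outer(r, hid)                       # backprop to W2 ...
    g1 = np.outer((W2.T @ r) * (pre > 0), z)    # ... and through the ReLU to W1
    W2 -= lr * g2
    W1 -= lr * g1

x_hat = W2 @ np.maximum(W1 @ z, 0)
res = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
print(f"relative measurement residual: {res:.3f}")
```

The point of the untrained-prior line of work is that the network structure itself biases x_hat toward natural-looking signals, replacing hand-crafted priors such as sparsity.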

Algorithmic Guarantees for Inverse Imaging with Untrained Network Priors

2 code implementations · NeurIPS 2019 · Gauri Jagatap, Chinmay Hegde

Specifically, we consider the problem of solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.

Compressive Sensing · Denoising · +2

Learning ReLU Networks via Alternating Minimization

no code implementations · 20 Jun 2018 · Gauri Jagatap, Chinmay Hegde

We propose and analyze a new family of algorithms for training neural networks with ReLU activations.
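For a single ReLU unit the alternating principle can be sketched (an illustrative toy, not the paper's algorithm for full networks): alternate between estimating the activation pattern and solving an ordinary least-squares problem on the samples believed active, where the model is linear:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 10, 200
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = np.maximum(X @ w_true, 0)      # noiseless single-ReLU responses

# For one unit the pattern is observable (y_i > 0 iff the unit is active),
# which gives a natural initial pattern estimate.
s = y > 0
w = np.zeros(d)
for _ in range(10):
    # Step 1: on samples believed active, the model is linear: y = X w.
    w, *_ = np.linalg.lstsq(X[s], y[s], rcond=None)
    # Step 2: re-estimate the activation pattern from the current weights.
    s = X @ w > 0

err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

Each sub-problem is convex; the difficulty the paper analyzes is that the pattern estimate and the weight estimate must converge jointly, especially for hidden layers where the pattern is not directly observable.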

Fast, Sample-Efficient Algorithms for Structured Phase Retrieval

no code implementations · NeurIPS 2017 · Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm, Block CoPRAM, that further reduces the sample complexity to $O(ks\log n)$.

Retrieval
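The magnitude-only (phaseless) measurement model underlying CoPRAM-style phase retrieval can be illustrated with a toy block-sparse signal (the block layout below is hypothetical, chosen only for illustration). Note the inherent global-sign ambiguity: x and -x produce identical magnitudes, so recovery is only possible up to sign:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, b = 64, 40, 4                     # toy dimension, measurement count, block length
x = np.zeros(n)
x[8:8 + b] = rng.standard_normal(b)     # hypothetical block-sparse support:
x[40:40 + b] = rng.standard_normal(b)   # two active blocks of length 4
A = rng.standard_normal((m, n))
y = np.abs(A @ x)                       # magnitude-only (phaseless) measurements

# x and -x are indistinguishable from magnitudes alone:
sign_ambiguous = np.allclose(y, np.abs(A @ (-x)))
```

Exploiting the block structure of the support, rather than unstructured sparsity alone, is what lets Block CoPRAM get away with fewer measurements.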

Sample-Efficient Algorithms for Recovering Structured Signals from Magnitude-Only Measurements

1 code implementation · 18 May 2017 · Gauri Jagatap, Chinmay Hegde

For this problem, we design a recovery algorithm, Block CoPRAM, that further reduces the sample complexity to $O(ks\log n)$.

Retrieval
