1 code implementation • ICML 2020 • Zinan Lin, Kiran Thekumparampil, Giulia Fanti, Sewoong Oh
This contrastive regularizer is inspired by a natural notion of disentanglement: latent traversal.
no code implementations • 18 Oct 2024 • Sangyun Lee, Yilun Xu, Tomas Geffner, Giulia Fanti, Karsten Kreis, Arash Vahdat, Weili Nie
Consistency models have recently been introduced to accelerate sampling from diffusion models by directly predicting the solution (i.e., data) of the probability flow ODE (PF ODE) from initial noise.
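A toy sketch of this idea (illustrative only, not the paper's method): in 1-D with data x0 ~ N(0, 1) and a variance-exploding noising path, the PF ODE has a closed-form solution, so the "consistency" map that jumps straight from noise to data can be written down and compared against many-step ODE integration. All names and the toy setup below are assumptions for illustration.

```python
import numpy as np

# Toy setup: x_t = x0 + sigma(t) * z with sigma(t) = t and x0 ~ N(0, 1),
# so the marginal at time t is N(0, 1 + t^2) and the PF ODE reads
#   dx/dt = -t * score(x, t) = t * x / (1 + t^2).
# Its exact solution gives the one-step "consistency" map
#   f(x_T, T) = x_T / sqrt(1 + T^2).

def pf_ode_euler(x_T, T, n_steps=10_000):
    """Integrate the PF ODE from t = T down to t = 0 with Euler steps."""
    x, t = x_T, T
    dt = T / n_steps
    for _ in range(n_steps):
        x = x - dt * (t * x / (1.0 + t * t))  # step toward t = 0
        t -= dt
    return x

def consistency_map(x_T, T):
    """Closed-form one-step prediction of the PF ODE solution."""
    return x_T / np.sqrt(1.0 + T * T)

T = 80.0
x_T = 3.0 * np.sqrt(1.0 + T * T)    # a point on the noisy marginal
many_step = pf_ode_euler(x_T, T)    # thousands of network-free ODE steps
one_step = consistency_map(x_T, T)  # a single function evaluation
```

The appeal of consistency models is exactly this gap: one evaluation of a learned map versus many ODE solver steps.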
1 code implementation • 6 Oct 2024 • Xinyi Xu, Shuaiqi Wang, Chuan-Sheng Foo, Bryan Kian Hsiang Low, Giulia Fanti
Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces.
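A minimal sketch of one of the simplest data-valuation schemes, leave-one-out valuation (illustrative only; not necessarily the scheme studied in the paper). The "model" here is just the training mean and utility is negative squared error on one validation target — all of that is a hypothetical setup for the example.

```python
import numpy as np

# Leave-one-out (LOO) data valuation: the value of a training point is
# the drop in validation utility when that point is removed.

def utility(train, val_target):
    pred = np.mean(train)               # trivial "model": the training mean
    return -(pred - val_target) ** 2    # negative squared validation error

def loo_values(train, val_target):
    full = utility(train, val_target)
    return np.array([
        full - utility(np.delete(train, i), val_target)
        for i in range(len(train))
    ])

train = np.array([1.0, 1.0, 1.0, 10.0])   # three clean points, one outlier
values = loo_values(train, val_target=1.0)
```

The outlier receives a negative value (removing it helps), while the clean points receive positive values — the kind of signal a data marketplace could price on.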
1 code implementation • 5 Jun 2024 • Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar
Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data.
1 code implementation • 30 May 2024 • Sangyun Lee, Zinan Lin, Giulia Fanti
In this work, we propose improved techniques for training rectified flows, allowing them to compete with \emph{knowledge distillation} methods even in the low NFE setting.
Ranked #17 on Image Generation on ImageNet 64x64
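A sketch of why rectified (straightened) flows matter for the low-NFE setting (this illustrates the premise, not the paper's training techniques): in 1-D, the monotone quantile coupling pairs sorted noise with sorted data, every interpolation path x_t = (1 - t) x0 + t x1 is a straight line with constant velocity, and a single Euler step (NFE = 1) transports noise exactly onto data.

```python
import numpy as np

# Straight paths need only one Euler step. The monotone (sorted) coupling
# below plays the role of a perfectly rectified flow in 1-D.
rng = np.random.default_rng(0)
x0 = np.sort(rng.normal(0.0, 1.0, size=1000))   # noise samples
x1 = np.sort(rng.normal(2.0, 0.5, size=1000))   # "data" samples

v = x1 - x0                  # constant velocity along each straight path
one_step = x0 + 1.0 * v      # one Euler step from t = 0 to t = 1
```

Learned rectified flows only approximate such straightened couplings, which is why improved training techniques are needed to close the gap to distillation at low NFE.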
no code implementations • 29 Feb 2024 • Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh
We provide theoretical insights into the convergence of DP fine-tuning within an overparameterized neural network and establish a utility curve that determines the allocation of privacy budget between linear probing and full fine-tuning.
1 code implementation • 11 Dec 2023 • Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti
By using MoLE, existing linear-centric models can achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and that we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.
Ranked #1 on Time Series Forecasting on Electricity (720)
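A hypothetical sketch of a mixture-of-linear-experts forecasting head, as the name MoLE suggests (the gating signal, shapes, and parameterization below are illustrative assumptions, not the paper's exact architecture): several linear maps from a lookback window to a forecast horizon, blended by a softmax gate computed from the input.

```python
import numpy as np

# Multiple linear "experts" map a history window to a forecast; a softmax
# gate over the same input decides how to mix their outputs.
rng = np.random.default_rng(0)
lookback, horizon, n_experts = 96, 24, 4

W = rng.normal(scale=0.1, size=(n_experts, horizon, lookback))  # expert weights
G = rng.normal(scale=0.1, size=(n_experts, lookback))           # gating weights

def mole_forecast(x):
    """x: (lookback,) history -> (horizon,) forecast."""
    logits = G @ x                        # (n_experts,)
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                    # softmax gate
    expert_out = W @ x                    # (n_experts, horizon)
    return gate @ expert_out              # gated mixture of linear forecasts

y = mole_forecast(rng.normal(size=lookback))
```

Each expert stays a plain linear model, so the head keeps the efficiency of linear-centric LTSF models while the gate adds input-dependent flexibility.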
no code implementations • 31 Jul 2023 • Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi
On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data.
1 code implementation • 3 Mar 2023 • Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti
We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e. g., mean, standard deviation).
no code implementations • 17 Feb 2023 • Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar
To this end, we propose FreD (Federated Private Fréchet Distance), a privately computed distance between a pre-finetuning dataset and federated datasets.
no code implementations • 3 Jun 2022 • Zinan Lin, Vyas Sekar, Giulia Fanti
By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.
1 code implementation • 24 May 2022 • Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh
We propose shadow learning, a framework for defending against backdoor attacks in the FL setting under long-range training.
1 code implementation • 20 Mar 2022 • Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar
We study the problem of learning generative adversarial networks (GANs) for a rare class within an unlabeled dataset, subject to a labeling budget.
no code implementations • ICLR 2022 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of R while leveraging the similarity between clients.
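An illustrative sketch of the two ingredients FedChain combines (this is not FedChain itself; the quadratic clients and step size are assumptions for the example): a "local" method runs several client steps before averaging models, while a "global" method takes one step on the averaged gradient per round.

```python
import numpy as np

# Least-squares clients f_i(w) = 0.5 * (w - b_i)^2 with heterogeneous
# optima b_i. One communication round of each style of method:
b = np.array([1.0, 3.0])   # client optima; global optimum is mean(b) = 2
lr = 0.1

def fedavg_round(w, local_steps=10):
    """Local method: each client runs local SGD, server averages models."""
    client_ws = []
    for bi in b:
        wi = w
        for _ in range(local_steps):
            wi -= lr * (wi - bi)      # local gradient step
        client_ws.append(wi)
    return np.mean(client_ws)

def global_round(w):
    """Global method: one step on the averaged (pooled) gradient."""
    grad = np.mean(w - b)
    return w - lr * grad

w_local = fedavg_round(0.0)   # big progress per round, biased by heterogeneity
w_global = global_round(0.0)  # small but unbiased progress per round
```

Chaining phases of both, as FedChain does, aims to keep the fast early progress of local steps without inheriting their heterogeneity-induced bias.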
1 code implementation • 24 May 2021 • Hongyu Gong, Alberto Valido, Katherine M. Ingram, Giulia Fanti, Suma Bhat, Dorothy L. Espelage
Abusive language is a massive problem on online social platforms.
1 code implementation • 31 Mar 2021 • Wanzheng Zhu, Hongyu Gong, Rohan Bansal, Zachary Weinberg, Nicolas Christin, Giulia Fanti, Suma Bhat
It is usually apparent to a human moderator that a word is being used euphemistically, but they may not know what the secret meaning is, and therefore whether the message violates policy.
no code implementations • 12 Feb 2021 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
Our goal is to design an algorithm that can harness the benefit of similarity in the clients while recovering the Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors).
1 code implementation • NeurIPS 2021 • Zinan Lin, Vyas Sekar, Giulia Fanti
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
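A minimal sketch of spectral normalization itself: the weight matrix is divided by an estimate of its largest singular value, obtained cheaply by power iteration, so the layer becomes approximately 1-Lipschitz.

```python
import numpy as np

def spectral_normalize(W, n_iters=50, rng=np.random.default_rng(0)):
    """Divide W by a power-iteration estimate of its top singular value."""
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)   # right singular vector estimate
        u = W @ v
        u /= np.linalg.norm(u)   # left singular vector estimate
    sigma = u @ W @ v            # estimate of the top singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(8, 5))
W_sn = spectral_normalize(W)     # top singular value is now ~1
```

In practice (e.g., in GAN discriminators) a single power iteration per training step suffices, since the weights change slowly between steps.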
1 code implementation • 4 Dec 2019 • Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels
Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol.
Cryptography and Security
4 code implementations • 30 Sep 2019 • Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar
By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.
2 code implementations • 25 Sep 2019 • Lei Yang, Vivek Bagaria, Gerui Wang, Mohammad Alizadeh, David Tse, Giulia Fanti, Pramod Viswanath
Bitcoin is the first fully-decentralized permissionless blockchain protocol to achieve a high level of security, but at the cost of poor throughput and latency.
Distributed, Parallel, and Cluster Computing • Cryptography and Security • Networking and Internet Architecture
1 code implementation • 14 Jun 2019 • Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh
Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.
7 code implementations • NeurIPS 2018 • Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh
Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.
1 code implementation • NeurIPS 2017 • Giulia Fanti, Pramod Viswanath
Recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization: the linkage of a user's IP address with her pseudonym in the Bitcoin network.
2 code implementations • 16 Jan 2017 • Shaileshh Bojja Venkatakrishnan, Giulia Fanti, Pramod Viswanath
We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility.
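A simplified sketch of Dandelion's two-phase spreading (the ring topology and parameters are illustrative assumptions; real Dandelion routes over a privacy-preserving anonymity graph): a transaction is first relayed along a random path (the "stem"), each hop continuing with probability q, before the final node broadcasts it to everyone (the "fluff").

```python
import random

def dandelion_spread(n_nodes, source, q=0.9, rng=random.Random(0)):
    """Return the stem path and the node that enters the fluff phase."""
    node = source
    stem = [node]
    while rng.random() < q:           # geometric-length stem phase
        node = (node + 1) % n_nodes   # relay to the ring successor
        stem.append(node)
    return stem, node                 # fluff: `node` broadcasts to all

stem, broadcaster = dandelion_spread(n_nodes=100, source=0)
```

An adversary observing only the broadcast sees `broadcaster`, not `source`, which is the intuition behind the near-optimal anonymity guarantee.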
Cryptography and Security • Information Theory
1 code implementation • 4 Mar 2015 • Giulia Fanti, Vasyl Pihur, Úlfar Erlingsson
Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees.
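A minimal sketch of the binary randomized response underlying such systems (RAPPOR's full Bloom-filter pipeline is more involved; the population frequency and parameters below are assumptions for the example): each client reports its true bit with probability p = e^eps / (1 + e^eps) and flips it otherwise, which gives eps-local differential privacy, and the server de-biases the aggregate.

```python
import numpy as np

eps = np.log(3.0)
p = np.exp(eps) / (1.0 + np.exp(eps))   # report truthfully w.p. 0.75 here

rng = np.random.default_rng(0)
true_bits = (rng.random(200_000) < 0.3).astype(float)  # 30% hold the item

keep = rng.random(true_bits.size) < p
reports = np.where(keep, true_bits, 1.0 - true_bits)   # randomized response

# E[report] = (2p - 1) * pi + (1 - p), so invert for an unbiased estimate
# of the true frequency pi from the noisy reports alone.
estimate = (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)
```

No individual report reveals much about its sender, yet the population frequency is recovered accurately from the aggregate.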
Cryptography and Security
no code implementations • 29 Dec 2014 • Giulia Fanti, Peter Kairouz, Sewoong Oh, Pramod Viswanath
Whether for fear of judgment or personal endangerment, it is crucial to keep anonymous the identity of the user who initially posted a sensitive message.