Search Results for author: Giulia Fanti

Found 27 papers, 19 papers with code

Truncated Consistency Models

no code implementations · 18 Oct 2024 · Sangyun Lee, Yilun Xu, Tomas Geffner, Giulia Fanti, Karsten Kreis, Arash Vahdat, Weili Nie

Consistency models have recently been introduced to accelerate sampling from diffusion models by directly predicting the solution (i.e., data) of the probability flow ODE (PF ODE) from initial noise.

Denoising
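
As a rough sketch of the idea (notation is the standard consistency-model convention, not taken from this abstract): a consistency model is trained so that every point on a single PF ODE trajectory maps to that trajectory's data endpoint, which is what enables one- or few-step sampling; truncated consistency models restrict where along the trajectory this is enforced.

```latex
% Consistency property: all points x_t on one PF ODE trajectory
% map to the trajectory's data endpoint x_\epsilon:
f_\theta(x_t, t) = x_\epsilon \qquad \text{for all } t \in [\epsilon, T],
% anchored by the boundary condition
f_\theta(x_\epsilon, \epsilon) = x_\epsilon .
```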

Data Distribution Valuation

1 code implementation · 6 Oct 2024 · Xinyi Xu, Shuaiqi Wang, Chuan-Sheng Foo, Bryan Kian Hsiang Low, Giulia Fanti

Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces.

Data Valuation · Fraud Detection +1

PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs

1 code implementation · 5 Jun 2024 · Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar

Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data.

Language Modelling · Large Language Model

Improving the Training of Rectified Flows

1 code implementation · 30 May 2024 · Sangyun Lee, Zinan Lin, Giulia Fanti

In this work, we propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting.

Image Generation · Knowledge Distillation +2
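
The rectified-flow construction behind this line of work can be sketched in a few lines (toy 1-D example; variable names are ours, not the paper's): a velocity field is regressed on straight-line interpolations between noise and data, so that sampling needs very few ODE steps (NFEs).

```python
def interpolate(x0, x1, t):
    # Straight-line ("rectified") path between noise x0 and data x1.
    return (1.0 - t) * x0 + t * x1

def velocity_target(x0, x1):
    # The regression target for the velocity field v(x_t, t) is the
    # constant slope of the straight path, x1 - x0, independent of t.
    return x1 - x0

def sample_one_step(x0, v):
    # With a perfectly straight flow, one Euler step over t in [0, 1]
    # already lands on the data point -- hence the low-NFE appeal.
    return x0 + 1.0 * v

x0, x1 = 0.5, 2.0
v = velocity_target(x0, x1)
assert abs(sample_one_step(x0, v) - x1) < 1e-12
assert abs(interpolate(x0, x1, 0.5) - 1.25) < 1e-12
```

In practice the velocity field is a neural network and the paths are only approximately straight; the paper's techniques target exactly that gap.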

On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?

no code implementations · 29 Feb 2024 · Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh

We provide theoretical insights into the convergence of DP fine-tuning within an overparameterized neural network and establish a utility curve that determines the allocation of privacy budget between linear probing and full fine-tuning.

Mixture-of-Linear-Experts for Long-term Time Series Forecasting

1 code implementation · 11 Dec 2023 · Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti

By using MoLE, existing linear-centric models can achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.

Time Series · Time Series Forecasting
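
A minimal sketch of the mixture-of-linear-experts idea (pure-Python toy; the function names and the precomputed router logits are our illustrative assumptions -- in MoLE the router is itself learned from timestamp features):

```python
import math

def softmax(logits):
    # Numerically stable softmax over the router's logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mole_forecast(window, heads, router_logits):
    # Each expert is a single linear map from the input window to a
    # one-step forecast; the router mixes the expert outputs.
    weights = softmax(router_logits)
    per_head = [sum(w * x for w, x in zip(head, window)) for head in heads]
    return sum(wt * f for wt, f in zip(weights, per_head))

# With a near-one-hot router, the mixture reduces to a single linear head.
heads = [[1.0, 0.0], [0.0, 1.0]]
out = mole_forecast([2.0, 3.0], heads, router_logits=[50.0, 0.0])
assert abs(out - 2.0) < 1e-6
```

The design point this illustrates: specialization comes from the router, while each expert stays as cheap as the single-head linear models it augments.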

Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity

no code implementations · 31 Jul 2023 · Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi

On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best on par with Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data.

Learning-To-Rank

Summary Statistic Privacy in Data Sharing

1 code implementation · 3 Mar 2023 · Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti

We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation).

Quantization

Privately Customizing Prefinetuning to Better Match User Data in Federated Learning

no code implementations · 17 Feb 2023 · Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar

To this end, we propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.

Federated Learning · Language Modelling +2
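
For intuition, the (non-private) Fréchet distance between 1-D Gaussians fitted to two datasets can be computed as below. This sketches only the underlying distance; FreD's contribution is computing such statistics under differential privacy, which this toy omits.

```python
import math

def gaussian_stats(xs):
    # Fit a 1-D Gaussian to a dataset: sample mean and (biased) variance.
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def frechet_distance_1d(stats_a, stats_b):
    # Squared Frechet distance between N(mu_a, var_a) and N(mu_b, var_b):
    # (mu_a - mu_b)^2 + (sigma_a - sigma_b)^2
    (mu_a, var_a), (mu_b, var_b) = stats_a, stats_b
    return (mu_a - mu_b) ** 2 + (math.sqrt(var_a) - math.sqrt(var_b)) ** 2

a = gaussian_stats([0.0, 1.0, 2.0])   # mu = 1, var = 2/3
b = gaussian_stats([3.0, 4.0, 5.0])   # mu = 4, var = 2/3
assert frechet_distance_1d(a, a) == 0.0
assert abs(frechet_distance_1d(a, b) - 9.0) < 1e-12
```

In the high-dimensional case the variance term becomes the trace term over covariance matrices, and in the federated setting each statistic would be released with DP noise before the distance is computed.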

On the Privacy Properties of GAN-generated Samples

no code implementations · 3 Jun 2022 · Zinan Lin, Vyas Sekar, Giulia Fanti

By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.

Towards a Defense Against Federated Backdoor Attacks Under Continuous Training

1 code implementation · 24 May 2022 · Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh

We propose shadow learning, a framework for defending against backdoor attacks in the FL setting under long-range training.

Continual Learning · Federated Learning

RareGAN: Generating Samples for Rare Classes

1 code implementation · 20 Mar 2022 · Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar

We study the problem of learning generative adversarial networks (GANs) for a rare class of an unlabeled dataset subject to a labeling budget.

Active Learning · Diversity

FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning

no code implementations · ICLR 2022 · Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh

We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of R while leveraging the similarity between clients.

Federated Learning · Image Classification

Self-Supervised Euphemism Detection and Identification for Content Moderation

1 code implementation · 31 Mar 2021 · Wanzheng Zhu, Hongyu Gong, Rohan Bansal, Zachary Weinberg, Nicolas Christin, Giulia Fanti, Suma Bhat

It is usually apparent to a human moderator that a word is being used euphemistically, but they may not know what the secret meaning is, and therefore whether the message violates policy.

Sentence · Word Embeddings

Efficient Algorithms for Federated Saddle Point Optimization

no code implementations · 12 Feb 2021 · Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh

Our goal is to design an algorithm that can harness the benefit of similarity in the clients while recovering the Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors).

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

1 code implementation · NeurIPS 2021 · Zinan Lin, Vyas Sekar, Giulia Fanti

Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
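
The basic SN mechanism can be sketched without any framework (toy dense layer; the persistent single-step power iteration used in practice is replaced here by a fully converged one):

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def spectral_norm(W, iters=50):
    # Estimate the largest singular value of W by power iteration on W^T W.
    v = [1.0] * len(W[0])
    Wt = transpose(W)
    for _ in range(iters):
        v = matvec(Wt, matvec(W, v))
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = matvec(W, v)
    return math.sqrt(sum(x * x for x in u))

# Dividing W by its spectral norm caps the layer's Lipschitz constant at ~1,
# which is the stabilizing effect the paper analyzes.
W = [[3.0, 0.0], [0.0, 1.0]]
sigma = spectral_norm(W)
W_sn = [[w / sigma for w in row] for row in W]
assert abs(sigma - 3.0) < 1e-6
assert abs(spectral_norm(W_sn) - 1.0) < 1e-6
```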

SquirRL: Automating Attack Discovery on Blockchain Incentive Mechanisms with Deep Reinforcement Learning

1 code implementation · 4 Dec 2019 · Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels

Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol.

Cryptography and Security

Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions

4 code implementations · 30 Sep 2019 · Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar

By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.

Synthetic Data Generation · Time Series +1

Practical Low Latency Proof of Work Consensus

2 code implementations · 25 Sep 2019 · Lei Yang, Vivek Bagaria, Gerui Wang, Mohammad Alizadeh, David Tse, Giulia Fanti, Pramod Viswanath

Bitcoin is the first fully-decentralized permissionless blockchain protocol to achieve a high level of security, but at the expense of poor throughput and latency.

Distributed, Parallel, and Cluster Computing · Cryptography and Security · Networking and Internet Architecture

InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs

1 code implementation · 14 Jun 2019 · Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh

Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.

Disentanglement · Model Selection

PacGAN: The power of two samples in generative adversarial networks

7 code implementations · NeurIPS 2018 · Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.

Diversity · Two-sample testing +1

Deanonymization in the Bitcoin P2P Network

1 code implementation · NeurIPS 2017 · Giulia Fanti, Pramod Viswanath

Recent attacks on Bitcoin's peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization: the linkage of a user's IP address with her pseudonym in the Bitcoin network.

Dandelion: Redesigning the Bitcoin Network for Anonymity

2 code implementations · 16 Jan 2017 · Shaileshh Bojja Venkatakrishnan, Giulia Fanti, Pramod Viswanath

We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility.

Cryptography and Security · Information Theory
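
Dandelion's stem/fluff structure can be sketched as a toy simulation (the names and the single-successor map are our illustrative simplifications of the actual anonymity graph):

```python
import random

def dandelion_relay(origin, successor, q=0.1, rng=None):
    # Stem phase: pass the transaction along a line of peers, switching to
    # the diffusion ("fluff") phase at each hop with probability q. The node
    # that finally diffuses sits on the order of 1/q hops from the origin,
    # which is what obscures the true source.
    rng = rng or random.Random()
    node = origin
    hops = 0
    while rng.random() >= q:
        node = successor[node]
        hops += 1
    return node, hops

successor = {0: 1, 1: 2, 2: 3, 3: 0}
# q = 1.0 means fluff immediately: the origin diffuses its own transaction.
assert dandelion_relay(0, successor, q=1.0) == (0, 0)
node, hops = dandelion_relay(0, successor, q=0.5, rng=random.Random(1))
assert node in successor
```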

Building a RAPPOR with the Unknown: Privacy-Preserving Learning of Associations and Data Dictionaries

1 code implementation · 4 Mar 2015 · Giulia Fanti, Vasyl Pihur, Úlfar Erlingsson

Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees.

Cryptography and Security
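
The randomized-response primitive that RAPPOR builds on can be sketched as follows (basic one-bit version with our own parameter names; RAPPOR itself adds Bloom-filter encoding and permanent/instantaneous randomization layers on top):

```python
import math
import random

def randomized_response(bit, eps):
    # Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    # The ratio of report probabilities is bounded by e^eps: eps-local DP.
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, eps):
    # Debias the noisy reports: observed = (2p - 1) * f + (1 - p),
    # so solve for f, the true fraction of 1s.
    p = math.exp(eps) / (1.0 + math.exp(eps))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
eps = math.log(3)                    # p = 0.75
truth = [1] * 3000 + [0] * 7000      # true frequency 0.3
reports = [randomized_response(b, eps) for b in truth]
assert abs(estimate_frequency(reports, eps) - 0.3) < 0.05
```

The aggregate estimate stays accurate even though no individual report can be trusted, which is the "learning from the unknown" trade-off the paper develops for strings and associations rather than single bits.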

Spy vs. Spy: Rumor Source Obfuscation

no code implementations · 29 Dec 2014 · Giulia Fanti, Peter Kairouz, Sewoong Oh, Pramod Viswanath

Whether for fear of judgment or personal endangerment, it is crucial to keep anonymous the identity of the user who initially posted a sensitive message.
